Commit

Replaced bit.ly links and updated elasticsearch.org to elastic.co.

debadair committed Dec 1, 2015
1 parent 3f0fb14 commit 8a741b9

Showing 63 changed files with 129 additions and 135 deletions.
8 changes: 4 additions & 4 deletions 010_Intro/10_Installing_ES.asciidoc
Original file line number Diff line number Diff line change
@@ -8,22 +8,22 @@ Preferably, you should install the latest version of the((("Java", "installing")
from http://www.java.com[_www.java.com_].

You can download the latest version of Elasticsearch from
http://www.elasticsearch.org/download/[_elasticsearch.org/download_].
https://www.elastic.co/downloads/elasticsearch[_elastic.co/downloads/elasticsearch_].

[source,sh]
--------------------------------------------------
curl -L -O http://download.elasticsearch.org/PATH/TO/VERSION.zip <1>
curl -L -O http://download.elastic.co/PATH/TO/VERSION.zip <1>
unzip elasticsearch-$VERSION.zip
cd elasticsearch-$VERSION
--------------------------------------------------
<1> Fill in the URL for the latest version available on
http://www.elasticsearch.org/download/[_elasticsearch.org/download_].
http://www.elastic.co/downloads/elasticsearch[_elastic.co/downloads/elasticsearch_].

[TIP]
====
When installing Elasticsearch in production, you can use the method
described previously, or the Debian or RPM packages provided on the
http://www.elasticsearch.org/downloads[downloads page]. You can also use
http://www.elastic.co/downloads/elasticsearch[downloads page]. You can also use
the officially supported
https://github.com/elasticsearch/puppet-elasticsearch[Puppet module] or
https://github.com/elasticsearch/cookbook-elasticsearch[Chef cookbook].
7 changes: 3 additions & 4 deletions 010_Intro/15_API.asciidoc
@@ -29,8 +29,7 @@ The Java client must be from the same _major_ version of Elasticsearch as the no
otherwise, they may not be able to understand each other.
====

More information about the Java clients can be found in the Java API section
of the http://www.elasticsearch.org/guide/[Guide].
More information about the Java clients can be found in https://www.elastic.co/guide/en/elasticsearch/client/index.html[Elasticsearch Clients].

==== RESTful API with JSON over HTTP

@@ -41,8 +40,8 @@ seen, you can even talk to Elasticsearch from the command line by using the

NOTE: Elasticsearch provides official clients((("clients", "other than Java"))) for several languages--Groovy,
JavaScript, .NET, PHP, Perl, Python, and Ruby--and there are numerous
community-provided clients and integrations, all of which can be found in the
http://www.elasticsearch.org/guide/[Guide].
community-provided clients and integrations, all of which can be found in
https://www.elastic.co/guide/en/elasticsearch/client/index.html[Elasticsearch Clients].

A request to Elasticsearch consists of the same parts as any HTTP request:((("HTTP requests")))((("requests to Elasticsearch")))

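As a sketch (assuming a node reachable on `localhost:9200`), a complete request that counts every document in the cluster looks like this:

[source,sh]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_count?pretty' -d '
{
    "query": {
        "match_all": {}
    }
}'
--------------------------------------------------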
4 changes: 2 additions & 2 deletions 010_Intro/20_Document.asciidoc
@@ -51,8 +51,8 @@ a flat table structure.
====
Almost all languages have modules that will convert arbitrary data
structures or objects((("JSON", "converting your data to"))) into JSON for you, but the details are specific to each
language. Look for modules that handle JSON _serialization_ or _marshalling_. http://www.elasticsearch.org/guide[The official
Elasticsearch clients] all handle conversion to and from JSON for you
language. Look for modules that handle JSON _serialization_ or _marshalling_. The official
https://www.elastic.co/guide/en/elasticsearch/client/index.html[Elasticsearch Clients] all handle conversion to and from JSON for you
automatically.
====

2 changes: 1 addition & 1 deletion 010_Intro/30_Tutorial_Search.asciidoc
@@ -445,5 +445,5 @@ HTML tags:
<1> The highlighted fragment from the original text

You can read more about the highlighting of search snippets in the
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-highlighting.html[highlighting reference documentation].
{ref}/search-request-highlighting.html[highlighting reference documentation].

2 changes: 1 addition & 1 deletion 030_Data/45_Partial_update.asciidoc
@@ -121,7 +121,7 @@ for example your Elasticsearch endpoints are only exposed and available to trust
then you can choose to re-enable the dynamic scripting if it is a feature your application needs.
You can read more about scripting in the
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html[scripting reference documentation].
{ref}/modules-scripting.html[scripting reference documentation].
****
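With scripting enabled, a minimal scripted partial update might look like the following sketch (the index, type, and field names are illustrative):

[source,json]
--------------------------------------------------
POST /website/blog/1/_update
{
   "script" : "ctx._source.views+=1"
}
--------------------------------------------------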

2 changes: 1 addition & 1 deletion 050_Search/20_Query_string.asciidoc
@@ -116,7 +116,7 @@ readable result:

As you can see from the preceding examples, this _lite_ query-string search is
surprisingly powerful.((("query strings", "syntax, reference for"))) Its query syntax, which is explained in detail in the
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax[Query String Syntax]
{ref}/query-dsl-query-string-query.html#query-string-syntax[Query String Syntax]
reference docs, allows us to express quite complex queries succinctly. This
makes it great for throwaway queries from the command line or during
development.
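For example, a hypothetical search for tweets by `john` that mention `mary` can be written entirely in the query string (the `+` prefix marks required clauses and must be percent-encoded as `%2B` in the URL):

[source,json]
--------------------------------------------------
GET /_search?q=%2Bname%3Ajohn+%2Btweet%3Amary
--------------------------------------------------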
4 changes: 2 additions & 2 deletions 052_Mapping_Analysis/40_Analysis.asciidoc
@@ -72,7 +72,7 @@ lowercase. It would produce

Language analyzers::

Language-specific analyzers ((("language analyzers")))are available for http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-lang-analyzer.html[many languages]. They are able to
Language-specific analyzers ((("language analyzers")))are available for {ref}/analysis-lang-analyzer.html[many languages]. They are able to
take the peculiarities of the specified language into account. For instance,
the `english` analyzer comes with a set of English ((("stopwords")))stopwords (common words
like `and` or `the` that don't have much impact on relevance), which it
@@ -202,7 +202,7 @@ that the original word occupied in the original string.

TIP: The `type` values like `<ALPHANUM>` vary ((("types", "type values returned by analyzers")))per analyzer and can be ignored.
The only place that they are used in Elasticsearch is in the
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/analysis-intro.html#analyze-api[`keep_types` token filter].
{ref}/analysis-keep-types-tokenfilter.html[`keep_types` token filter].

The `analyze` API is a useful tool for understanding what is happening
inside Elasticsearch indices, and we will talk more about it as we progress.
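As a sketch in the 1.x request syntax used throughout this book, analyzing a string with the `standard` analyzer looks like this:

[source,json]
--------------------------------------------------
GET /_analyze?analyzer=standard
Text to analyze
--------------------------------------------------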
2 changes: 1 addition & 1 deletion 060_Distributed_Search/15_Search_options.asciidoc
@@ -8,7 +8,7 @@ The `preference` parameter allows((("preference parameter")))((("search options"
used to handle the search request. It accepts values such as `_primary`,
`_primary_first`, `_local`, `_only_node:xyz`, `_prefer_node:xyz`, and
`_shards:2,3`, which are explained in detail on the
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-preference.html[search `preference`]
{ref}/search-request-preference.html[search `preference`]
documentation page.

However, the most generally useful value is some arbitrary string, to avoid
2 changes: 1 addition & 1 deletion 070_Index_Mgmt/10_Settings.asciidoc
@@ -2,7 +2,7 @@

There are many, many knobs((("index settings"))) that you can twiddle to
customize index behavior, which you can read about in the
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_index_settings.html#_index_settings[Index Modules reference documentation],
{ref}/index-modules.html[Index Modules reference documentation],
but...

TIP: Elasticsearch comes with good defaults. Don't twiddle these knobs until
20 changes: 10 additions & 10 deletions 070_Index_Mgmt/20_Custom_Analyzers.asciidoc
@@ -15,7 +15,7 @@ Character filters::
Character filters((("character filters"))) are used to ``tidy up'' a string before it is tokenized.
For instance, if our text is in HTML format, it will contain HTML tags like
`<p>` or `<div>` that we don't want to be indexed. We can use the
http://bit.ly/1B6f4Ay[`html_strip` character filter]
{ref}/analysis-htmlstrip-charfilter.html[`html_strip` character filter]
to remove all HTML tags and to convert HTML entities like `&Aacute;` into the
corresponding Unicode character `Á`.

@@ -27,17 +27,17 @@ Tokenizers::
--
An analyzer _must_ have a single tokenizer.((("tokenizers", "in analyzers"))) The tokenizer breaks up the
string into individual terms or tokens. The
http://bit.ly/1E3Fd1b[`standard` tokenizer],
{ref}/analysis-standard-tokenizer.html[`standard` tokenizer],
which is used((("standard tokenizer"))) in the `standard` analyzer, breaks up a string into
individual terms on word boundaries, and removes most punctuation, but
other tokenizers exist that have different behavior.

For instance, the
http://bit.ly/1ICd585[`keyword` tokenizer]
{ref}/analysis-keyword-tokenizer.html[`keyword` tokenizer]
outputs exactly((("keyword tokenizer"))) the same string as it received, without any tokenization. The
http://bit.ly/1xt3t7d[`whitespace` tokenizer]
{ref}/analysis-whitespace-tokenizer.html[`whitespace` tokenizer]
splits text((("whitespace tokenizer"))) on whitespace only. The
http://bit.ly/1ICdozA[`pattern` tokenizer] can
{ref}/analysis-pattern-tokenizer.html[`pattern` tokenizer] can
be used to split text on a ((("pattern tokenizer")))matching regular expression.
--
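To compare tokenizers directly, you can pass one to the `analyze` API (a sketch in the 1.x request syntax; the sample text is illustrative):

[source,json]
--------------------------------------------------
GET /_analyze?tokenizer=whitespace
You're the 1st runner home!
--------------------------------------------------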

@@ -49,14 +49,14 @@ specified token filters,((("token filters"))) in the order in which they are spe

Token filters may change, add, or remove tokens. We have already mentioned the
http://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-lowercase-tokenizer.html[`lowercase`] and
http://bit.ly/1INX4tN[`stop` token filters],
{ref}/analysis-stop-tokenfilter.html[`stop` token filters],
but there are many more available in Elasticsearch.
http://bit.ly/1AUfpDN[Stemming token filters]
{ref}/analysis-stemmer-tokenfilter.html[Stemming token filters]
``stem'' words to ((("stemming token filters")))their root form. The
http://bit.ly/1ylU7Q7[`ascii_folding` filter]
{ref}/analysis-asciifolding-tokenfilter.html[`ascii_folding` filter]
removes diacritics,((("ascii_folding filter"))) converting a term like `"très"` into `"tres"`. The
http://bit.ly/1CbkmYe[`ngram`] and
http://bit.ly/1DIf6j5[`edge_ngram` token filters] can produce((("edge_engram token filter")))((("ngram and edge_ngram token filters")))
{ref}/analysis-ngram-tokenfilter.html[`ngram`] and
{ref}/analysis-edgengram-tokenfilter.html[`edge_ngram` token filters] can produce((("edge_engram token filter")))((("ngram and edge_ngram token filters")))
tokens suitable for partial matching or autocomplete.
--
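Putting the three stages together, a custom analyzer combining a character filter, a tokenizer, and token filters might be declared like this sketch (the index and analyzer names are illustrative):

[source,json]
--------------------------------------------------
PUT /my_index
{
    "settings": {
        "analysis": {
            "analyzer": {
                "my_analyzer": {
                    "type":        "custom",
                    "char_filter": [ "html_strip" ],
                    "tokenizer":   "standard",
                    "filter":      [ "lowercase", "stop" ]
                }
            }
        }
    }
}
--------------------------------------------------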

4 changes: 2 additions & 2 deletions 070_Index_Mgmt/40_Custom_Dynamic_Mapping.asciidoc
@@ -60,7 +60,7 @@ a `date` field, you have to add it manually.
[NOTE]
====
Elasticsearch's idea of which strings look like dates can be altered
with the http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-root-object-type.html[`dynamic_date_formats` setting].
with the {ref}/dynamic-field-mapping.html#date-detection[`dynamic_date_formats` setting].
====
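For instance, date detection could be limited to a single format when creating the index (a sketch; the index and type names, and the chosen format, are illustrative):

[source,json]
--------------------------------------------------
PUT /my_index
{
    "mappings": {
        "my_type": {
            "dynamic_date_formats": [ "yyyy-MM-dd" ]
        }
    }
}
--------------------------------------------------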

[[dynamic-templates]]
@@ -140,4 +140,4 @@ The `unmatch` and `path_unmatch` patterns((("unmatch pattern")))((("path_unmap p
that would otherwise match.

More configuration options can be found in the
http://bit.ly/1wdHOzG[reference documentation for the root object].
{ref}/dynamic-mapping.html[dynamic mapping documentation].
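As a sketch, a dynamic template that maps any string field whose name ends in a hypothetical `_es` suffix with the `spanish` analyzer could look like this:

[source,json]
--------------------------------------------------
PUT /my_index
{
    "mappings": {
        "my_type": {
            "dynamic_templates": [
                { "es": {
                      "match":              "*_es",
                      "match_mapping_type": "string",
                      "mapping": {
                          "type":     "string",
                          "analyzer": "spanish"
                      }
                }}
            ]
        }
    }
}
--------------------------------------------------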
4 changes: 2 additions & 2 deletions 075_Inside_a_shard/50_Persistent_changes.asciidoc
@@ -83,10 +83,10 @@ image::images/elas_1109.png["After a flush, the segments are fully committed and
The action of performing a commit and truncating the translog is known in
Elasticsearch as a _flush_. ((("flushes"))) Shards are flushed automatically every 30
minutes, or when the translog becomes too big. See the
http://bit.ly/1E3HKbD[`translog` documentation] for settings
{ref}/index-modules-translog.html#_translog_settings[`translog` documentation] for settings
that can be used((("translog (transaction log)", "flushes and"))) to control these thresholds:

The http://bit.ly/1ICgxiU[`flush` API] can ((("indices", "flushing")))((("flush API")))be used to perform a manual flush:
The {ref}/indices-flush.html[`flush` API] can ((("indices", "flushing")))((("flush API")))be used to perform a manual flush:

[source,json]
-----------------------------
2 changes: 1 addition & 1 deletion 080_Structured_Search/25_ranges.asciidoc
@@ -117,7 +117,7 @@ math expression:

Date math is _calendar aware_, so it knows the number of days in each month,
days in a year, and so forth. More details about working with dates can be found in
the http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-date-format.html[date format reference documentation].
the {ref}/mapping-date-format.html[date format reference documentation].
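For example, a sketch of a range filter using date math to match the last hour (the field name is illustrative):

[source,json]
--------------------------------------------------
"range" : {
    "timestamp" : {
        "gt" : "now-1h"
    }
}
--------------------------------------------------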

==== Ranges on Strings

2 changes: 1 addition & 1 deletion 080_Structured_Search/40_bitsets.asciidoc
@@ -78,7 +78,7 @@ doesn't make sense to do so:

Script filters::

The results((("script filters, no caching of results"))) from http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/filter-caching.html#_controlling_caching[`script` filters] cannot
The results((("script filters, no caching of results"))) from {ref}/query-dsl-script-query.html[`script` filters] cannot
be cached because the meaning of the script is opaque to Elasticsearch.

Geo-filters::
2 changes: 1 addition & 1 deletion 100_Full_Text_Search/10_Multi_word_queries.asciidoc
@@ -148,7 +148,7 @@ must match for a document to be considered a match.
The `minimum_should_match` parameter is flexible, and different rules can
be applied depending on the number of terms the user enters. For the full
documentation see the
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-minimum-should-match.html#query-dsl-minimum-should-match
{ref}/query-dsl-minimum-should-match.html#query-dsl-minimum-should-match[`minimum_should_match` reference documentation].
====
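As an illustration, requiring at least 75% of the terms to match (rounded down to the nearest whole term; the index, field, and query text are illustrative):

[source,json]
--------------------------------------------------
GET /my_index/my_type/_search
{
  "query": {
    "match": {
      "title": {
        "query":                "quick brown dog",
        "minimum_should_match": "75%"
      }
    }
  }
}
--------------------------------------------------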

To fully understand how the `match` query handles multiword queries, we need
2 changes: 1 addition & 1 deletion 100_Full_Text_Search/30_Controlling_analysis.asciidoc
@@ -194,6 +194,6 @@ setting instead.
A common workflow for time-based data like logging is to create a new index
per day on the fly by just indexing into it. While this workflow prevents
you from creating your index up front, you can still use
http://bit.ly/1ygczeq[index templates]
{ref}/indices-templates.html[index templates]
to specify the settings and mappings that a new index should have.
====
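A sketch of such a template, applying to any new index whose name matches a hypothetical `logstash-*` pattern:

[source,json]
--------------------------------------------------
PUT /_template/my_logs
{
    "template": "logstash-*",
    "settings": {
        "number_of_shards": 1
    }
}
--------------------------------------------------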
2 changes: 1 addition & 1 deletion 130_Partial_Matching/35_Search_as_you_type.asciidoc
@@ -286,7 +286,7 @@ fast. However, sometimes it is not fast enough. Latency matters, especially
when you are trying to provide instant feedback. Sometimes the fastest way of
searching is not to search at all.
The http://bit.ly/1IChV5j[completion suggester] in
The {ref}/search-suggesters-completion.html[completion suggester] in
Elasticsearch((("completion suggester"))) takes a completely different approach. You feed it a list
of all possible completions, and it builds them into a _finite state
transducer_, an((("Finite State Transducer"))) optimized data structure that resembles a big graph. To
2 changes: 1 addition & 1 deletion 130_Partial_Matching/40_Compound_words.asciidoc
@@ -27,7 +27,7 @@ see ``Aussprachewörterbuch'' in the results list. Similarly, a search for
``Adler'' (eagle) should include ``Weißkopfseeadler.''

One approach to indexing languages like this is to break compound words into
their constituent parts using the http://bit.ly/1ygdjjC[compound word token filter].
their constituent parts using the {ref}/analysis-compound-word-tokenfilter.html[compound word token filter].
However, the quality of the results depends on how good your compound-word
dictionary is.
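A hedged sketch of such an analyzer, using the `dictionary_decompounder` token filter with an illustrative word list (a real dictionary would be far larger):

[source,json]
--------------------------------------------------
PUT /my_index
{
    "settings": {
        "analysis": {
            "filter": {
                "decompounder": {
                    "type":      "dictionary_decompounder",
                    "word_list": [ "adler", "see", "wasser", "kopf" ]
                }
            },
            "analyzer": {
                "german_decompound": {
                    "tokenizer": "standard",
                    "filter":    [ "lowercase", "decompounder" ]
                }
            }
        }
    }
}
--------------------------------------------------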

2 changes: 1 addition & 1 deletion 170_Relevance/30_Not_quite_not.asciidoc
@@ -34,7 +34,7 @@ too strict.
[[boosting-query]]
==== boosting Query

The http://bit.ly/1IO281f[`boosting` query] solves((("boosting query")))((("relevance", "controlling", "boosting query"))) this problem.
The {ref}/query-dsl-boosting-query.html[`boosting` query] solves((("boosting query")))((("relevance", "controlling", "boosting query"))) this problem.
It allows us to still include results that appear to be about the fruit or
the pastries, but to downgrade them--to rank them lower than they would
otherwise be:
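A sketch of that shape (the field and query terms are illustrative):

[source,json]
--------------------------------------------------
GET /_search
{
  "query": {
    "boosting": {
      "positive": {
        "match": { "text": "apple" }
      },
      "negative": {
        "match": { "text": "pie tart fruit crumble tree" }
      },
      "negative_boost": 0.5
    }
  }
}
--------------------------------------------------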
2 changes: 1 addition & 1 deletion 170_Relevance/35_Ignoring_TFIDF.asciidoc
@@ -39,7 +39,7 @@ isn't, `0`.
[[constant-score-query]]
==== constant_score Query

Enter the http://bit.ly/1DIgSAK[`constant_score`] query.
Enter the {ref}/query-dsl-constant-score-query.html[`constant_score`] query.
This ((("constant_score query")))query can wrap either a query or a filter, and assigns a score of
`1` to any documents that match, regardless of TF/IDF:
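A sketch of the filter-wrapping form (the field and term are illustrative):

[source,json]
--------------------------------------------------
GET /_search
{
  "query": {
    "constant_score": {
      "filter": {
        "term": { "category": "ebooks" }
      }
    }
  }
}
--------------------------------------------------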

2 changes: 1 addition & 1 deletion 170_Relevance/40_Function_score_query.asciidoc
@@ -1,7 +1,7 @@
[[function-score-query]]
=== function_score Query

The http://bit.ly/1sCKtHW[`function_score` query] is the
The {ref}/query-dsl-function-score-query.html[`function_score` query] is the
ultimate tool for taking control of the scoring process.((("function_score query")))((("relevance", "controlling", "function_score query"))) It allows you to
apply a function to each document that matches the main query in order to
alter or completely replace the original query `_score`.
2 changes: 1 addition & 1 deletion 170_Relevance/45_Popularity.asciidoc
@@ -110,7 +110,7 @@ GET /blogposts/post/_search
The available modifiers are `none` (the default), `log`, `log1p`, `log2p`,
`ln`, `ln1p`, `ln2p`, `square`, `sqrt`, and `reciprocal`. You can read more
about them in the
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-function-score-query.html#_field_value_factor[`field_value_factor` documentation].
{ref}/query-dsl-function-score-query.html#_field_value_factor[`field_value_factor` documentation].
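For instance, a sketch that damps the effect of very popular posts with the `log1p` modifier (the query and the `votes` field are illustrative):

[source,json]
--------------------------------------------------
GET /blogposts/post/_search
{
  "query": {
    "function_score": {
      "query": { "match": { "title": "popularity" }},
      "field_value_factor": {
        "field":    "votes",
        "modifier": "log1p"
      }
    }
  }
}
--------------------------------------------------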

==== factor

4 changes: 2 additions & 2 deletions 170_Relevance/65_Script_score.asciidoc
@@ -106,7 +106,7 @@ a profit.
The `script_score` function provides enormous flexibility.((("scripts", "performance and"))) Within a script,
you have access to the fields of the document, to the current `_score`, and
even to the term frequencies, inverse document frequencies, and field length
norms (see http://bit.ly/1E3Rbbh[Text scoring in scripts]).
norms (see {ref}/modules-advanced-scripting.html[Text scoring in scripts]).
That said, scripts can have a performance impact. If you do find that your
scripts are not quite fast enough, you have three options:
@@ -115,7 +115,7 @@ scripts are not quite fast enough, you have three options:
document.
* Groovy is fast, but not quite as fast as Java.((("Java", "scripting in"))) You could reimplement your
script as a native Java script. (See
http://bit.ly/1ynBidJ[Native Java Scripts]).
{ref}/modules-scripting.html#native-java-scripts[Native Java Scripts]).
* Use the `rescore` functionality((("rescoring"))) described in <<rescore-api>> to apply
your script to only the best-scoring documents.
2 changes: 1 addition & 1 deletion 170_Relevance/70_Pluggable_similarities.asciidoc
@@ -5,7 +5,7 @@ Before we move on from relevance and scoring, we will finish this chapter with
a more advanced subject: pluggable similarity algorithms.((("similarity algorithms", "pluggable")))((("relevance", "controlling", "using pluggable similarity algorithms"))) While Elasticsearch
uses the <<practical-scoring-function>> as its default similarity algorithm,
it supports other algorithms out of the box, which are listed
in the http://bit.ly/14Eiw7f[Similarity Modules] documentation.
in the {ref}/index-modules-similarity.html#configuration[Similarity Modules] documentation.

[[bm25]]
==== Okapi BM25
8 changes: 4 additions & 4 deletions 200_Language_intro/30_Language_pitfalls.asciidoc
@@ -63,7 +63,7 @@ It is not sufficient just to think about your documents, though.((("queries", "m
to think about how your users will query those documents. Often you will be able
to identify the main language of the user either from the language of that user's chosen
interface (for example, `mysite.de` versus `mysite.fr`) or from the
http://bit.ly/1BwEl61[`accept-language`]
http://www.w3.org/International/questions/qa-lang-priorities.en.php[`accept-language`]
HTTP header from the user's browser.

User searches also come in three main varieties:
@@ -97,10 +97,10 @@ cases, you need to use a heuristic to identify the predominant language.
Fortunately, libraries are available in several languages to help with this problem.

Of particular note is the
http://bit.ly/1AUr3i2[chromium-compact-language-detector]
https://github.com/mikemccand/chromium-compact-language-detector[chromium-compact-language-detector]
library from
http://bit.ly/1AUr85k[Mike McCandless],
which uses the open source (http://bit.ly/1u9KKgI[Apache License 2.0])
http://blog.mikemccandless.com/2013/08/a-new-version-of-compact-language.html[Mike McCandless],
which uses the open source (http://www.apache.org/licenses/LICENSE-2.0[Apache License 2.0])
https://code.google.com/p/cld2/[Compact Language Detector] (CLD) from Google. It is
small, fast, ((("Compact Language Detector (CLD)")))and accurate, and can detect 160+ languages from as little as two
sentences. It can even detect multiple languages within a single block of
4 changes: 2 additions & 2 deletions 200_Language_intro/50_One_language_per_field.asciidoc
@@ -61,8 +61,8 @@ Like the _index-per-language_ approach, the _field-per-language_ approach
maintains clean term frequencies. It is not quite as flexible as having
separate indices. Although it is easy to add a new field by using the <<updating-a-mapping,`update-mapping` API>>, those new fields may require new
custom analyzers, which can only be set up at index creation time. As a
workaround, you can http://bit.ly/1B6s0WY[close] the index, add the new
analyzers with the http://bit.ly/1zijFPx[`update-settings` API],
workaround, you can {ref}/indices-open-close.html[close] the index, add the new
analyzers with the {ref}/indices-update-settings.html[`update-settings` API],
then reopen the index, but closing the index means that it will require some
downtime.
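The workaround can be sketched as a sequence of requests (the index and analyzer names are illustrative):

[source,json]
--------------------------------------------------
POST /blogs/_close

PUT /blogs/_settings
{
    "analysis": {
        "analyzer": {
            "es_analyzer": { "type": "spanish" }
        }
    }
}

POST /blogs/_open
--------------------------------------------------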
