Merge remote-tracking branch 'upstream/main' into dynamic_field_in_template
jimczi committed Apr 16, 2024
2 parents abd8cad + 82c5eb0 commit 8b05a5b
Showing 271 changed files with 6,047 additions and 2,063 deletions.
6 changes: 6 additions & 0 deletions docs/changelog/107121.yaml
@@ -0,0 +1,6 @@
pr: 107121
summary: Add a flag to re-enable writes on the final index after an ILM shrink action.
area: ILM+SLM
type: enhancement
issues:
- 106599
5 changes: 5 additions & 0 deletions docs/changelog/107272.yaml
@@ -0,0 +1,5 @@
pr: 107272
summary: "ESQL: extend BUCKET with spans"
area: ES|QL
type: enhancement
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/107358.yaml
@@ -0,0 +1,6 @@
pr: 107358
summary: Check for node shutdown before failing
area: Transform
type: enhancement
issues:
- 100891
5 changes: 5 additions & 0 deletions docs/changelog/107411.yaml
@@ -0,0 +1,5 @@
pr: 107411
summary: Invalidating cross cluster API keys requires `manage_security`
area: Security
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/107447.yaml
@@ -0,0 +1,5 @@
pr: 107447
summary: "Fix regression in get index settings (human=true) where the version was not displayed in human-readable format"
area: Infra/Core
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/107467.yaml
@@ -0,0 +1,5 @@
pr: 107467
summary: "[Connector API] Fix bug with filtering validation toXContent"
area: Application
type: bug
issues: []
@@ -94,6 +94,17 @@ higher sampling rates, the relative error is still low.

NOTE: This represents the result of aggregations against a typical positively skewed APM data set which also has outliers in the upper tail. The linear dependence of the relative error on the sample size is found to hold widely, but the slope depends on the variation in the quantity being aggregated. As such, the variance in your own data may
cause relative error rates to increase or decrease at a different rate.
[[random-sampler-consistency]]
==== Random sampler consistency

For a given `probability` and `seed`, the random sampler aggregation is consistent when sampling unchanged data from the same shard.
However, because the sampling happens in the background, whether a particular document is included in the sampled set depends on the current number of segments.

This means that replica and primary shards can return different values, because different documents are sampled on each.

If the shard changes through document additions, updates, deletions, or segment merges, the particular documents sampled can change, and with them the resulting statistics.

The statistics returned by the random sampler aggregation are approximate and should be treated as such.
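
For illustration, a minimal sketch of a sampled aggregation that pins both `probability` and `seed` (the index and field names here are hypothetical):

[source,console]
----
GET /my-index/_search?size=0
{
  "aggs": {
    "sampling": {
      "random_sampler": {
        "probability": 0.1,
        "seed": 42
      },
      "aggs": {
        "mean_price": {
          "avg": { "field": "price" }
        }
      }
    }
  }
}
----

Re-running this request against unchanged data on the same shard returns statistics computed over the same sampled documents; a replica of that shard, or the same shard after a merge, may sample a different set.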

[[random-sampler-special-cases]]
==== Random sampling special cases
@@ -105,6 +116,6 @@ for a bucket is `10,000` with `probability: 0.1`, the actual number of documents

An exception to this is <<search-aggregations-metrics-cardinality-aggregation, cardinality aggregation>>. Unique item
counts are not suitable for automatic scaling. When interpreting the cardinality count, compare it
to the number of sampled docs provided in the top level `doc_count` within the random_sampler aggregation. It gives
you an idea of unique values as a percentage of total values. It may not reflect, however, the exact number of unique values
for the given field.
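
As a hedged sketch (hypothetical index and field names), a cardinality sub-aggregation can be placed under the sampler so that both numbers appear in a single response:

[source,console]
----
GET /my-index/_search?size=0
{
  "aggs": {
    "sampling": {
      "random_sampler": {
        "probability": 0.1
      },
      "aggs": {
        "unique_users": {
          "cardinality": { "field": "user.id" }
        }
      }
    }
  }
}
----

If the sampler's `doc_count` is `10,000` and `unique_users` comes back as `250`, roughly 2.5% of the sampled documents carry distinct `user.id` values; the exact number of unique values across the full data set may differ.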
3 changes: 2 additions & 1 deletion docs/reference/esql/esql-async-query-api.asciidoc
@@ -24,7 +24,8 @@ POST /_query/async
| SORT year
| LIMIT 5
""",
"wait_for_completion_timeout": "2s"
"wait_for_completion_timeout": "2s",
"version": "2024.04.01"
}
----
// TEST[setup:library]
6 changes: 3 additions & 3 deletions docs/reference/esql/esql-get-started.asciidoc
@@ -248,22 +248,22 @@ For example, to create hourly buckets for the data on October 23rd:

[source,esql]
----
include::{esql-specs}/date.csv-spec[tag=gs-bucket]
include::{esql-specs}/bucket.csv-spec[tag=gs-bucket]
----

Combine `BUCKET` with <<esql-stats-by>> to create a histogram. For example,
to count the number of events per hour:

[source,esql]
----
include::{esql-specs}/date.csv-spec[tag=gs-bucket-stats-by]
include::{esql-specs}/bucket.csv-spec[tag=gs-bucket-stats-by]
----

Or the median duration per hour:

[source,esql]
----
include::{esql-specs}/date.csv-spec[tag=gs-bucket-stats-by-median]
include::{esql-specs}/bucket.csv-spec[tag=gs-bucket-stats-by-median]
----

[discrete]
9 changes: 7 additions & 2 deletions docs/reference/esql/esql-query-api.asciidoc
@@ -16,7 +16,8 @@ POST /_query
| STATS MAX(page_count) BY year
| SORT year
| LIMIT 5
"""
""",
"version": "2024.04.01"
}
----
// TEST[setup:library]
@@ -76,7 +77,11 @@ For syntax, refer to <<esql-locale-param>>.
<<esql-rest-params>>.

`query`::
(Required, object) {esql} query to run. For syntax, refer to <<esql-syntax>>.
(Required, string) {esql} query to run. For syntax, refer to <<esql-syntax>>.

`version`::
(Required, string) {esql} language version. Can be sent in short or long form, e.g.
`2024.04.01` or `2024.04.01.🚀`. See <<esql-version>> for details.

[discrete]
[role="child_attributes"]
24 changes: 16 additions & 8 deletions docs/reference/esql/esql-rest.asciidoc
@@ -16,7 +16,8 @@ The <<esql-query-api,{esql} query API>> accepts an {esql} query string in the
----
POST /_query?format=txt
{
"query": "FROM library | KEEP author, name, page_count, release_date | SORT page_count DESC | LIMIT 5"
"query": "FROM library | KEEP author, name, page_count, release_date | SORT page_count DESC | LIMIT 5",
"version": "2024.04.01"
}
----
// TEST[setup:library]
@@ -55,7 +56,8 @@ POST /_query?format=txt
| KEEP author, name, page_count, release_date
| SORT page_count DESC
| LIMIT 5
"""
""",
"version": "2024.04.01"
}
----
// TEST[setup:library]
@@ -143,7 +145,8 @@ POST /_query?format=txt
"lte": 200
}
}
}
},
"version": "2024.04.01"
}
----
// TEST[setup:library]
@@ -179,7 +182,8 @@ POST /_query?format=json
| SORT page_count DESC
| LIMIT 5
""",
"columnar": true
"columnar": true,
"version": "2024.04.01"
}
----
// TEST[setup:library]
@@ -226,7 +230,8 @@ POST /_query
| EVAL birth_date = date_parse(birth_date_string)
| EVAL month_of_birth = DATE_FORMAT("MMMM",birth_date)
| LIMIT 5
"""
""",
"version": "2024.04.01"
}
----
// TEST[setup:library]
@@ -249,7 +254,8 @@ POST /_query
| STATS count = COUNT(*) by year
| WHERE count > 0
| LIMIT 5
"""
""",
"version": "2024.04.01"
}
----
// TEST[setup:library]
@@ -270,7 +276,8 @@ POST /_query
| WHERE count > ?
| LIMIT 5
""",
"params": [300, "Frank Herbert", 0]
"params": [300, "Frank Herbert", 0],
"version": "2024.04.01"
}
----
// TEST[setup:library]
@@ -304,7 +311,8 @@ POST /_query/async
| SORT year
| LIMIT 5
""",
"wait_for_completion_timeout": "2s"
"wait_for_completion_timeout": "2s",
"version": "2024.04.01"
}
----
// TEST[setup:library]
4 changes: 4 additions & 0 deletions docs/reference/esql/esql-using.asciidoc
@@ -18,8 +18,12 @@ Using {esql} to query across multiple clusters.
<<esql-task-management>>::
Using the <<tasks,task management API>> to list and cancel {esql} queries.

<<esql-version>>::
Information about {esql} language versions.

include::esql-rest.asciidoc[]
include::esql-kibana.asciidoc[]
include::esql-security-solution.asciidoc[]
include::esql-across-clusters.asciidoc[]
include::task-management.asciidoc[]
include::esql-version.asciidoc[]
49 changes: 49 additions & 0 deletions docs/reference/esql/esql-version.asciidoc
@@ -0,0 +1,49 @@
[[esql-version]]
=== {esql} language versions

++++
<titleabbrev>Language versions</titleabbrev>
++++

[discrete]
[[esql-versions-released]]
==== Released versions

* Version `2024.04.01`

[discrete]
[[esql-versions-explanation]]
==== How versions work

{esql} language versions are independent of {es} versions.
Versioning the language ensures that your queries always remain valid,
independent of new {es} and {esql} releases, and it lets us evolve {esql}
as we learn more from the people using it. We don't plan to make
huge changes to the language, but we know we've made mistakes and we don't
want those to live forever.

For instance, the following query will remain valid, even if a future
version of {esql} introduces syntax changes or changes how the commands
and functions it uses work.

[source,console]
----
POST /_query?format=txt
{
"version": "2024.04.01",
"query": """
FROM library
| EVAL release_month = DATE_TRUNC(1 month, release_date)
| KEEP release_month
| SORT release_month ASC
| LIMIT 3
"""
}
----
// TEST[setup:library]

We won't make breaking changes to released {esql} versions, and released
versions will remain supported until they are deprecated.
New features, bug fixes, and performance improvements
will continue to be added to released {esql} versions,
provided they do not involve breaking changes.
22 changes: 11 additions & 11 deletions docs/reference/esql/functions/bucket.asciidoc
@@ -35,11 +35,11 @@ in monthly buckets:

[source.merge.styled,esql]
----
include::{esql-specs}/date.csv-spec[tag=docsBucketMonth]
include::{esql-specs}/bucket.csv-spec[tag=docsBucketMonth]
----
[%header.monospaced.styled,format=dsv,separator=|]
|===
include::{esql-specs}/date.csv-spec[tag=docsBucketMonth-result]
include::{esql-specs}/bucket.csv-spec[tag=docsBucketMonth-result]
|===

The goal isn't to provide *exactly* the target number of buckets, it's to pick a
@@ -51,11 +51,11 @@ Combine `BUCKET` with

[source.merge.styled,esql]
----
include::{esql-specs}/date.csv-spec[tag=docsBucketMonthlyHistogram]
include::{esql-specs}/bucket.csv-spec[tag=docsBucketMonthlyHistogram]
----
[%header.monospaced.styled,format=dsv,separator=|]
|===
include::{esql-specs}/date.csv-spec[tag=docsBucketMonthlyHistogram-result]
include::{esql-specs}/bucket.csv-spec[tag=docsBucketMonthlyHistogram-result]
|===

NOTE: `BUCKET` does not create buckets that don't match any documents.
@@ -66,11 +66,11 @@ at most 100 buckets in a year results in weekly buckets:

[source.merge.styled,esql]
----
include::{esql-specs}/date.csv-spec[tag=docsBucketWeeklyHistogram]
include::{esql-specs}/bucket.csv-spec[tag=docsBucketWeeklyHistogram]
----
[%header.monospaced.styled,format=dsv,separator=|]
|===
include::{esql-specs}/date.csv-spec[tag=docsBucketWeeklyHistogram-result]
include::{esql-specs}/bucket.csv-spec[tag=docsBucketWeeklyHistogram-result]
|===

NOTE: `BUCKET` does not filter any rows. It only uses the provided range to
@@ -83,11 +83,11 @@ salary histogram:

[source.merge.styled,esql]
----
include::{esql-specs}/ints.csv-spec[tag=docsBucketNumeric]
include::{esql-specs}/bucket.csv-spec[tag=docsBucketNumeric]
----
[%header.monospaced.styled,format=dsv,separator=|]
|===
include::{esql-specs}/ints.csv-spec[tag=docsBucketNumeric-result]
include::{esql-specs}/bucket.csv-spec[tag=docsBucketNumeric-result]
|===

Unlike the earlier example that intentionally filters on a date range, you
@@ -102,17 +102,17 @@ per hour:

[source.styled,esql]
----
include::{esql-specs}/date.csv-spec[tag=docsBucketLast24hr]
include::{esql-specs}/bucket.csv-spec[tag=docsBucketLast24hr]
----

Create monthly buckets for the year 1985, and calculate the average salary by
hiring month:

[source.merge.styled,esql]
----
include::{esql-specs}/date.csv-spec[tag=bucket_in_agg]
include::{esql-specs}/bucket.csv-spec[tag=bucket_in_agg]
----
[%header.monospaced.styled,format=dsv,separator=|]
|===
include::{esql-specs}/date.csv-spec[tag=bucket_in_agg-result]
include::{esql-specs}/bucket.csv-spec[tag=bucket_in_agg-result]
|===
2 changes: 1 addition & 1 deletion docs/reference/esql/functions/description/bucket.asciidoc
@@ -2,4 +2,4 @@

*Description*

Creates human-friendly buckets and returns a datetime value for each row that corresponds to the resulting bucket the row falls into.
Creates groups of values - buckets - out of a datetime or numeric input. The size of the buckets can either be provided directly, or chosen based on a recommended count and values range.
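
As an illustrative sketch (the `employees` index and its fields are assumed from the {esql} getting-started data set, not part of this description), the bucket size can be derived from a target count over a known range, or supplied directly as a span:

[source,esql]
----
// Ask for up to 20 buckets over 1985; BUCKET picks a sensible 1-month size.
FROM employees
| WHERE hire_date >= "1985-01-01T00:00:00Z" AND hire_date < "1986-01-01T00:00:00Z"
| STATS avg_salary = AVG(salary) BY month = BUCKET(hire_date, 20, "1985-01-01T00:00:00Z", "1986-01-01T00:00:00Z")
| SORT month
// With the span form extended in this commit, the size is given directly, e.g. BUCKET(hire_date, 1 month).
----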
5 changes: 5 additions & 0 deletions docs/reference/esql/functions/description/from_base64.asciidoc
@@ -0,0 +1,5 @@
// This is generated by ESQL's AbstractFunctionTestCase. Do not edit it. See ../README.md for how to regenerate it.

*Description*

Decode a base64 string.
5 changes: 5 additions & 0 deletions docs/reference/esql/functions/description/to_base64.asciidoc
@@ -0,0 +1,5 @@
// This is generated by ESQL's AbstractFunctionTestCase. Do not edit it. See ../README.md for how to regenerate it.

*Description*

Encode a string to a base64 string.
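
As a hedged sketch of the two functions round-tripping a literal string (not one of the generated examples):

[source,esql]
----
ROW name = "elastic"
| EVAL encoded = TO_BASE64(name)
| EVAL decoded = FROM_BASE64(encoded)
----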
13 changes: 13 additions & 0 deletions docs/reference/esql/functions/examples/from_base64.asciidoc
@@ -0,0 +1,13 @@
// This is generated by ESQL's AbstractFunctionTestCase. Do not edit it. See ../README.md for how to regenerate it.

*Example*

[source.merge.styled,esql]
----
include::{esql-specs}/string.csv-spec[tag=from_base64]
----
[%header.monospaced.styled,format=dsv,separator=|]
|===
include::{esql-specs}/string.csv-spec[tag=from_base64-result]
|===

13 changes: 13 additions & 0 deletions docs/reference/esql/functions/examples/to_base64.asciidoc
@@ -0,0 +1,13 @@
// This is generated by ESQL's AbstractFunctionTestCase. Do not edit it. See ../README.md for how to regenerate it.

*Example*

[source.merge.styled,esql]
----
include::{esql-specs}/string.csv-spec[tag=to_base64]
----
[%header.monospaced.styled,format=dsv,separator=|]
|===
include::{esql-specs}/string.csv-spec[tag=to_base64-result]
|===
