
Commit

Merge branch 'main' into less_noisy_reconciler_logs
idegtiarenko committed Nov 6, 2023
2 parents 60b6880 + 76ab37b commit 97fc2e6
Showing 172 changed files with 1,022 additions and 1,578 deletions.
1 change: 1 addition & 0 deletions .buildkite/pipelines/periodic-packaging.template.yml
@@ -19,6 +19,7 @@ steps:
- ubuntu-2004
- ubuntu-2204
- rocky-8
- rocky-9
- rhel-7
- rhel-8
- rhel-9
1 change: 1 addition & 0 deletions .buildkite/pipelines/periodic-packaging.yml
@@ -20,6 +20,7 @@ steps:
- ubuntu-2004
- ubuntu-2204
- rocky-8
- rocky-9
- rhel-7
- rhel-8
- rhel-9
1 change: 1 addition & 0 deletions .buildkite/pipelines/periodic-platform-support.yml
@@ -19,6 +19,7 @@ steps:
- ubuntu-2004
- ubuntu-2204
- rocky-8
- rocky-9
- rhel-7
- rhel-8
- rhel-9
71 changes: 62 additions & 9 deletions .buildkite/pipelines/pull-request/packaging-tests-unix.yml
@@ -3,9 +3,9 @@ config:
steps:
- group: packaging-tests-unix
steps:
- label: "{{matrix.image}} / {{matrix.PACKAGING_TASK}} / packaging-tests-unix"
key: "packaging-tests-unix"
command: ./.ci/scripts/packaging-test.sh $$PACKAGING_TASK
- label: "{{matrix.image}} / docker / packaging-tests-unix"
key: "packaging-tests-unix-docker"
command: ./.ci/scripts/packaging-test.sh destructiveDistroTest.docker
timeout_in_minutes: 300
matrix:
setup:
@@ -22,18 +22,71 @@ steps:
- ubuntu-2004
- ubuntu-2204
- rocky-8
- rocky-9
- rhel-7
- rhel-8
- rhel-9
- almalinux-8
agents:
provider: gcp
image: family/elasticsearch-{{matrix.image}}
diskSizeGb: 350
machineType: custom-16-32768
- label: "{{matrix.image}} / packages / packaging-tests-unix"
key: "packaging-tests-unix-packages"
command: ./.ci/scripts/packaging-test.sh destructiveDistroTest.packages
timeout_in_minutes: 300
matrix:
setup:
image:
- centos-7
- debian-10
- debian-11
- opensuse-leap-15
- oraclelinux-7
- oraclelinux-8
- sles-12
- sles-15
- ubuntu-1804
- ubuntu-2004
- ubuntu-2204
- rocky-8
- rocky-9
- rhel-7
- rhel-8
- rhel-9
- almalinux-8
agents:
provider: gcp
image: family/elasticsearch-{{matrix.image}}
diskSizeGb: 350
machineType: custom-16-32768
- label: "{{matrix.image}} / archives / packaging-tests-unix"
key: "packaging-tests-unix-archives"
command: ./.ci/scripts/packaging-test.sh destructiveDistroTest.archives
timeout_in_minutes: 300
matrix:
setup:
image:
- centos-7
- debian-10
- debian-11
- opensuse-leap-15
- oraclelinux-7
- oraclelinux-8
- sles-12
- sles-15
- ubuntu-1804
- ubuntu-2004
- ubuntu-2204
- rocky-8
- rocky-9
- rhel-7
- rhel-8
- rhel-9
- almalinux-8
PACKAGING_TASK:
- destructiveDistroTest.docker
- destructiveDistroTest.packages
- destructiveDistroTest.archives
agents:
provider: gcp
image: family/elasticsearch-{{matrix.image}}
diskSizeGb: 350
machineType: custom-16-32768
env:
PACKAGING_TASK: "{{matrix.PACKAGING_TASK}}"
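The diff above splits the former single matrix step (which iterated `PACKAGING_TASK` over docker, packages, and archives) into three explicit steps, each invoking the packaging script with a fixed Gradle task. A minimal sketch of the three resulting invocations (script path taken from the diff; this only prints the commands rather than running the tests):

```shell
# Each of the three explicit Buildkite steps calls the same script with a
# hard-coded task, replacing the old $$PACKAGING_TASK matrix variable.
for task in docker packages archives; do
  echo "./.ci/scripts/packaging-test.sh destructiveDistroTest.${task}"
done
```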
3 changes: 2 additions & 1 deletion benchmarks/build.gradle
@@ -8,7 +8,8 @@ import org.elasticsearch.gradle.internal.info.BuildParams
* Side Public License, v 1.
*/

apply plugin: 'elasticsearch.java'
apply plugin: org.elasticsearch.gradle.internal.ElasticsearchJavaBasePlugin
apply plugin: 'java-library'
apply plugin: 'application'

application {
@@ -21,7 +21,7 @@
import org.elasticsearch.cluster.routing.UnassignedInfo;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.cluster.routing.allocation.DataTier;
import org.elasticsearch.cluster.routing.allocation.ShardsAvailabilityHealthIndicatorService;
import org.elasticsearch.cluster.routing.allocation.shards.ShardsAvailabilityHealthIndicatorService;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Settings;
5 changes: 5 additions & 0 deletions docs/changelog/101700.yaml
@@ -0,0 +1,5 @@
pr: 101700
summary: Fix `lastUnsafeSegmentGenerationForGets` for realtime get
area: Engine
type: bug
issues: []
7 changes: 7 additions & 0 deletions docs/changelog/101778.yaml
@@ -0,0 +1,7 @@
pr: 101778
summary: Don't update system index mappings in mixed clusters
area: Infra/Core
type: bug
issues:
- 101331
- 99778
6 changes: 6 additions & 0 deletions docs/changelog/101788.yaml
@@ -0,0 +1,6 @@
pr: 101788
summary: "ESQL: Narrow catch in convert functions"
area: ES|QL
type: bug
issues:
- 100820
5 changes: 5 additions & 0 deletions docs/changelog/101802.yaml
@@ -0,0 +1,5 @@
pr: 101802
summary: Correctly logging watcher history write failures
area: Watcher
type: bug
issues: []
2 changes: 1 addition & 1 deletion docs/reference/esql/index.asciidoc
@@ -62,7 +62,7 @@ An overview of using the <<esql-rest>>, <<esql-kibana>>, and
The current limitations of {esql}.

<<esql-examples>>::
A few examples of what you can with {esql}.
A few examples of what you can do with {esql}.

include::esql-get-started.asciidoc[]

10 changes: 5 additions & 5 deletions docs/reference/esql/processing-commands/dissect.asciidoc
@@ -4,9 +4,9 @@

**Syntax**

[source,txt]
[source,esql]
----
DISSECT input "pattern" [ append_separator="<separator>"]
DISSECT input "pattern" [ APPEND_SEPARATOR="<separator>"]
----

*Parameters*
@@ -16,9 +16,9 @@ The column that contains the string you want to structure. If the column has
multiple values, `DISSECT` will process each value.

`pattern`::
A dissect pattern.
A <<esql-dissect-patterns,dissect pattern>>.

`append_separator="<separator>"`::
`<separator>`::
A string used as the separator between appended values, when using the <<esql-append-modifier,append modifier>>.

*Description*
@@ -29,7 +29,7 @@ delimiter-based pattern, and extracts the specified keys as columns.

Refer to <<esql-process-data-with-dissect>> for the syntax of dissect patterns.

*Example*
*Examples*

// tag::examples[]
The following example parses a string that contains a timestamp, some text, and
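The `DISSECT` docs above end with an example that parses a string containing a timestamp, some text, and more. A hedged sketch of such a query — the input string and key names here are illustrative, not taken from the commit:

```esql
ROW a = "2023-01-23T12:15:00.000Z - some text - 127.0.0.1"
| DISSECT a "%{date} - %{msg} - %{ip}"
```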
18 changes: 17 additions & 1 deletion docs/reference/esql/processing-commands/drop.asciidoc
@@ -2,7 +2,23 @@
[[esql-drop]]
=== `DROP`

Use `DROP` to remove columns:
**Syntax**

[source,esql]
----
DROP columns
----

*Parameters*

`columns`::
A comma-separated list of columns to remove. Supports wildcards.

*Description*

The `DROP` processing command removes one or more columns.

*Examples*

[source,esql]
----
10 changes: 5 additions & 5 deletions docs/reference/esql/processing-commands/enrich.asciidoc
@@ -4,7 +4,7 @@

**Syntax**

[source,txt]
[source,esql]
----
ENRICH policy [ON match_field] [WITH [new_name1 = ]field1, [new_name2 = ]field2, ...]
----
@@ -15,18 +15,18 @@ ENRICH policy [ON match_field] [WITH [new_name1 = ]field1, [new_name2 = ]field2,
The name of the enrich policy. You need to <<esql-set-up-enrich-policy,create>>
and <<esql-execute-enrich-policy,execute>> the enrich policy first.

`ON match_field`::
`match_field`::
The match field. `ENRICH` uses its value to look for records in the enrich
index. If not specified, the match will be performed on the column with the same
name as the `match_field` defined in the <<esql-enrich-policy,enrich policy>>.

`WITH fieldX`::
`fieldX`::
The enrich fields from the enrich index that are added to the result as new
columns. If a column with the same name as the enrich field already exists, the
existing column will be replaced by the new column. If not specified, each of
the enrich fields defined in the policy is added

`new_nameX =`::
`new_nameX`::
Enables you to change the name of the column that's added for each of the enrich
fields. Defaults to the enrich field name.

@@ -74,7 +74,7 @@ include::{esql-specs}/docs-IT_tests_only.csv-spec[tag=enrich_on-result]

By default, each of the enrich fields defined in the policy is added as a
column. To explicitly select the enrich fields that are added, use
`WITH <field1>, <field2>...`:
`WITH <field1>, <field2>, ...`:

[source.merge.styled,esql]
----
30 changes: 24 additions & 6 deletions docs/reference/esql/processing-commands/eval.asciidoc
@@ -1,7 +1,30 @@
[discrete]
[[esql-eval]]
=== `EVAL`
`EVAL` enables you to append new columns:

**Syntax**

[source,esql]
----
EVAL column1 = value1[, ..., columnN = valueN]
----

*Parameters*

`columnX`::
The column name.

`valueX`::
The value for the column. Can be a literal, an expression, or a
<<esql-functions,function>>.

*Description*

The `EVAL` processing command enables you to append new columns with calculated
values. `EVAL` supports various functions for calculating values. Refer to
<<esql-functions,Functions>> for more information.

*Examples*

[source.merge.styled,esql]
----
@@ -23,8 +46,3 @@ include::{esql-specs}/docs.csv-spec[tag=evalReplace]
|===
include::{esql-specs}/docs.csv-spec[tag=evalReplace-result]
|===

[discrete]
==== Functions
`EVAL` supports various functions for calculating values. Refer to
<<esql-functions,Functions>> for more information.
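Following the rewritten syntax `EVAL column1 = value1[, ..., columnN = valueN]`, a sketch of a query that appends two calculated columns — the source index and field names are assumed for illustration, not taken from the commit:

```esql
FROM employees
| EVAL height_ft = height * 3.281, height_cm = height * 100
```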
2 changes: 1 addition & 1 deletion docs/reference/esql/processing-commands/grok.asciidoc
@@ -4,7 +4,7 @@

**Syntax**

[source,txt]
[source,esql]
----
GROK input "pattern"
----
24 changes: 19 additions & 5 deletions docs/reference/esql/processing-commands/keep.asciidoc
@@ -2,11 +2,25 @@
[[esql-keep]]
=== `KEEP`

The `KEEP` command enables you to specify what columns are returned and the
order in which they are returned.
**Syntax**

To limit the columns that are returned, use a comma-separated list of column
names. The columns are returned in the specified order:
[source,esql]
----
KEEP columns
----

*Parameters*
`columns`::
A comma-separated list of columns to keep. Supports wildcards.

*Description*

The `KEEP` processing command enables you to specify what columns are returned
and the order in which they are returned.

*Examples*

The columns are returned in the specified order:

[source.merge.styled,esql]
----
@@ -27,7 +41,7 @@ include::{esql-specs}/docs.csv-spec[tag=keepWildcard]

The asterisk wildcard (`*`) by itself translates to all columns that do not
match the other arguments. This query will first return all columns with a name
that starts with an h, followed by all other columns:
that starts with `h`, followed by all other columns:

[source,esql]
----
26 changes: 22 additions & 4 deletions docs/reference/esql/processing-commands/limit.asciidoc
@@ -2,12 +2,30 @@
[[esql-limit]]
=== `LIMIT`

The `LIMIT` processing command enables you to limit the number of rows:
**Syntax**

[source,esql]
----
include::{esql-specs}/docs.csv-spec[tag=limit]
LIMIT max_number_of_rows
----

If not specified, `LIMIT` defaults to `500`. A single query will not return
more than 10,000 rows, regardless of the `LIMIT` value.
*Parameters*

`max_number_of_rows`::
The maximum number of rows to return.

*Description*

The `LIMIT` processing command enables you to limit the number of rows that are
returned. If not specified, `LIMIT` defaults to `500`.

A query does not return more than 10,000 rows, regardless of the `LIMIT` value.
You can change this with the `esql.query.result_truncation_max_size` static
cluster setting.

*Example*

[source,esql]
----
include::{esql-specs}/docs.csv-spec[tag=limit]
----
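Per the rewritten description — a default of 500 rows and a 10,000-row truncation cap adjustable via the `esql.query.result_truncation_max_size` static cluster setting — a sketch of an explicit `LIMIT` (index and field names assumed for illustration):

```esql
FROM employees
| SORT emp_no ASC
| LIMIT 5
```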
