
Commit

Merge remote-tracking branch 'origin/main' into enrich_strict_range_types
craigtaverner committed Nov 11, 2024
2 parents 678a302 + a21d375 commit f423537
Showing 168 changed files with 5,696 additions and 1,964 deletions.
1 change: 1 addition & 0 deletions .ci/dockerOnLinuxExclusions
@@ -15,6 +15,7 @@ sles-15.2
sles-15.3
sles-15.4
sles-15.5
sles-15.6

# These OSes are deprecated and filtered starting with 8.0.0, but need to be excluded
# for PR checks
6 changes: 6 additions & 0 deletions docs/changelog/114964.yaml
@@ -0,0 +1,6 @@
pr: 114964
summary: Add a `monitor_stats` privilege and allow that privilege for remote cluster privileges
area: Authorization
type: enhancement
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/115744.yaml
@@ -0,0 +1,6 @@
pr: 115744
summary: Use `SearchStats` instead of field.isAggregatable in data node planning
area: ES|QL
type: bug
issues:
- 115737
5 changes: 5 additions & 0 deletions docs/changelog/116325.yaml
@@ -0,0 +1,5 @@
pr: 116325
summary: Adjust analyze limit exception to be a `bad_request`
area: Analysis
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/116382.yaml
@@ -0,0 +1,5 @@
pr: 116382
summary: Validate missing shards after the coordinator rewrite
area: Search
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/116447.yaml
@@ -0,0 +1,5 @@
pr: 116447
summary: Adding a deprecation info API warning for data streams with old indices
area: Data streams
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/116478.yaml
@@ -0,0 +1,5 @@
pr: 116478
summary: Semantic text simple partial update
area: Search
type: bug
issues: []
@@ -127,10 +127,11 @@ And the following may be the response:

==== Percentiles_bucket implementation

The Percentile Bucket returns the nearest input data point that is not greater than the requested percentile; it does not
interpolate between data points.

The percentiles are calculated exactly and are not an approximation (unlike the Percentiles Metric). This means
the implementation maintains an in-memory, sorted list of your data to compute the percentiles, before discarding the
data. You may run into memory pressure issues if you attempt to calculate percentiles over many millions of
data-points in a single `percentiles_bucket`.

The Percentile Bucket returns the nearest input data point to the requested percentile, rounding indices toward
positive infinity; it does not interpolate between data points. For example, if there are eight data points and
you request the `50%th` percentile, it will return the `4th` item because `ROUND_UP(.50 * (8-1))` is `4`.
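
To make the rounding rule concrete, here is a minimal Python sketch of the documented formula (an illustration only, not the Elasticsearch implementation):

```python
import math

def percentiles_bucket(values, percents):
    """Pick the value at index ROUND_UP(p * (n - 1)) from the sorted data,
    with no interpolation, as described above."""
    data = sorted(values)  # the aggregation keeps an in-memory sorted list
    n = len(data)
    return {p: data[math.ceil(p / 100 * (n - 1))] for p in percents}

# Eight data points: the 50th percentile lands on index ceil(0.50 * 7) = 4.
print(percentiles_bucket([10, 20, 30, 40, 50, 60, 70, 80], [25.0, 50.0, 99.0]))
# {25.0: 30, 50.0: 50, 99.0: 80}
```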


2 changes: 1 addition & 1 deletion docs/reference/esql/functions/kibana/docs/repeat.md


16 changes: 7 additions & 9 deletions docs/reference/how-to/knn-search.asciidoc
@@ -72,15 +72,13 @@ least enough RAM to hold the vector data and index structures. To check the
size of the vector data, you can use the <<indices-disk-usage>> API.

Here are estimates for different element types and quantization levels:
+
--
`element_type: float`: `num_vectors * num_dimensions * 4`
`element_type: float` with `quantization: int8`: `num_vectors * (num_dimensions + 4)`
`element_type: float` with `quantization: int4`: `num_vectors * (num_dimensions/2 + 4)`
`element_type: float` with `quantization: bbq`: `num_vectors * (num_dimensions/8 + 12)`
`element_type: byte`: `num_vectors * num_dimensions`
`element_type: bit`: `num_vectors * (num_dimensions/8)`
--

* `element_type: float`: `num_vectors * num_dimensions * 4`
* `element_type: float` with `quantization: int8`: `num_vectors * (num_dimensions + 4)`
* `element_type: float` with `quantization: int4`: `num_vectors * (num_dimensions/2 + 4)`
* `element_type: float` with `quantization: bbq`: `num_vectors * (num_dimensions/8 + 12)`
* `element_type: byte`: `num_vectors * num_dimensions`
* `element_type: bit`: `num_vectors * (num_dimensions/8)`

If utilizing HNSW, the graph must also be in memory; to estimate the required bytes, use `num_vectors * 4 * HNSW.m`. The
default value for `HNSW.m` is 16, so by default `num_vectors * 4 * 16`.
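
As a rough aid for applying these formulas, here is a hedged Python sketch (the example sizes are made up; only the formulas come from the text above):

```python
def vector_data_bytes(num_vectors, num_dimensions, element_type="float", quantization=None):
    """Estimate bytes of RAM for the raw vector data, per the formulas above."""
    if element_type == "float":
        if quantization == "int8":
            return num_vectors * (num_dimensions + 4)
        if quantization == "int4":
            return num_vectors * (num_dimensions / 2 + 4)
        if quantization == "bbq":
            return num_vectors * (num_dimensions / 8 + 12)
        return num_vectors * num_dimensions * 4
    if element_type == "byte":
        return num_vectors * num_dimensions
    if element_type == "bit":
        return num_vectors * (num_dimensions / 8)
    raise ValueError(f"unsupported element_type: {element_type}")


def hnsw_graph_bytes(num_vectors, m=16):
    """HNSW graph overhead: num_vectors * 4 * HNSW.m (HNSW.m defaults to 16)."""
    return num_vectors * 4 * m


# Example: one million 768-dimensional float vectors quantized to int8, plus the graph.
n, dims = 1_000_000, 768
total = vector_data_bytes(n, dims, quantization="int8") + hnsw_graph_bytes(n)
print(f"~{total / 1024**3:.2f} GiB")
```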
@@ -327,7 +327,7 @@ The result would then have the `errors` field set to `true` and hold the error f
"details": {
"my_admin_role": { <4>
"type": "action_request_validation_exception",
"reason": "Validation Failed: 1: unknown cluster privilege [bad_cluster_privilege]. a privilege must be either one of the predefined cluster privilege names [manage_own_api_key,manage_data_stream_global_retention,monitor_data_stream_global_retention,none,cancel_task,cross_cluster_replication,cross_cluster_search,delegate_pki,grant_api_key,manage_autoscaling,manage_index_templates,manage_logstash_pipelines,manage_oidc,manage_saml,manage_search_application,manage_search_query_rules,manage_search_synonyms,manage_service_account,manage_token,manage_user_profile,monitor_connector,monitor_enrich,monitor_inference,monitor_ml,monitor_rollup,monitor_snapshot,monitor_text_structure,monitor_watcher,post_behavioral_analytics_event,read_ccr,read_connector_secrets,read_fleet_secrets,read_ilm,read_pipeline,read_security,read_slm,transport_client,write_connector_secrets,write_fleet_secrets,create_snapshot,manage_behavioral_analytics,manage_ccr,manage_connector,manage_enrich,manage_ilm,manage_inference,manage_ml,manage_rollup,manage_slm,manage_watcher,monitor_data_frame_transforms,monitor_transform,manage_api_key,manage_ingest_pipelines,manage_pipeline,manage_data_frame_transforms,manage_transform,manage_security,monitor,manage,all] or a pattern over one of the available cluster actions;"
"reason": "Validation Failed: 1: unknown cluster privilege [bad_cluster_privilege]. a privilege must be either one of the predefined cluster privilege names [manage_own_api_key,manage_data_stream_global_retention,monitor_data_stream_global_retention,none,cancel_task,cross_cluster_replication,cross_cluster_search,delegate_pki,grant_api_key,manage_autoscaling,manage_index_templates,manage_logstash_pipelines,manage_oidc,manage_saml,manage_search_application,manage_search_query_rules,manage_search_synonyms,manage_service_account,manage_token,manage_user_profile,monitor_connector,monitor_enrich,monitor_inference,monitor_ml,monitor_rollup,monitor_snapshot,monitor_stats,monitor_text_structure,monitor_watcher,post_behavioral_analytics_event,read_ccr,read_connector_secrets,read_fleet_secrets,read_ilm,read_pipeline,read_security,read_slm,transport_client,write_connector_secrets,write_fleet_secrets,create_snapshot,manage_behavioral_analytics,manage_ccr,manage_connector,manage_enrich,manage_ilm,manage_inference,manage_ml,manage_rollup,manage_slm,manage_watcher,monitor_data_frame_transforms,monitor_transform,manage_api_key,manage_ingest_pipelines,manage_pipeline,manage_data_frame_transforms,manage_transform,manage_security,monitor,manage,all] or a pattern over one of the available cluster actions;"
}
}
}
@@ -111,6 +111,7 @@ A successful call returns an object with "cluster", "index", and "remote_cluster
"monitor_ml",
"monitor_rollup",
"monitor_snapshot",
"monitor_stats",
"monitor_text_structure",
"monitor_transform",
"monitor_watcher",
@@ -152,7 +153,8 @@ A successful call returns an object with "cluster", "index", and "remote_cluster
"write"
],
"remote_cluster" : [
"monitor_enrich"
"monitor_enrich",
"monitor_stats"
]
}
--------------------------------------------------
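
If you want to verify which privileges a running cluster advertises, you could call the get builtin privileges API shown above and inspect the returned lists. Below is a minimal sketch using the `requests` library; the URL, credentials, and TLS settings are placeholders to adapt:

```python
import requests

# Placeholder endpoint and credentials; adjust for your cluster.
resp = requests.get(
    "https://localhost:9200/_security/privilege/_builtin",
    auth=("elastic", "changeme"),
    verify=False,
)
resp.raise_for_status()
builtin = resp.json()

# The response carries "cluster", "index", and "remote_cluster" privilege name lists.
print("monitor_stats" in builtin["cluster"])         # expected: True with this change
print("monitor_stats" in builtin["remote_cluster"])  # expected: True with this change
```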