[DOCS] Edit index operation summaries (#3268) (#3281)
(cherry picked from commit 7d436a0)
lcawl authored Dec 12, 2024
1 parent 1d2aa19 commit 836a578
Showing 23 changed files with 437 additions and 128 deletions.
134 changes: 86 additions & 48 deletions output/openapi/elasticsearch-openapi.json

Large diffs are not rendered by default.

10 changes: 8 additions & 2 deletions output/openapi/elasticsearch-serverless-openapi.json

Some generated files are not rendered by default.

147 changes: 90 additions & 57 deletions output/schema/schema.json

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions specification/_doc_ids/table.csv
@@ -213,6 +213,8 @@ index-modules-slowlog-slowlog,https://www.elastic.co/guide/en/elasticsearch/refe
index-modules,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/index-modules.html
index,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/index.html
indexing-buffer,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/indexing-buffer.html
index-modules-merge,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/index-modules-merge.html
index-templates,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/index-templates.html
indices-aliases,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/indices-aliases.html
indices-analyze,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/indices-analyze.html
indices-clearcache,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/indices-clearcache.html
@@ -524,6 +526,7 @@ search-aggregations-metrics-top-metrics,https://www.elastic.co/guide/en/elastics
search-aggregations-metrics-valuecount-aggregation,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/search-aggregations-metrics-valuecount-aggregation.html
search-aggregations-metrics-weight-avg-aggregation,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/search-aggregations-metrics-weight-avg-aggregation.html
search-aggregations-bucket-variablewidthhistogram-aggregation,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/search-aggregations-bucket-variablewidthhistogram-aggregation.html
search-analyzer,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/search-analyzer.html
search-count,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/search-count.html
search-explain,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/search-explain.html
search-field-caps,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/search-field-caps.html
@@ -21,8 +21,9 @@ import { RequestBase } from '@_types/Base'
import { ExpandWildcards, Fields, Indices } from '@_types/common'

/**
* Clears the caches of one or more indices.
* For data streams, the API clears the caches of the stream’s backing indices.
* Clear the cache.
* Clear the cache of one or more indices.
* For data streams, the API clears the caches of the stream's backing indices.
* @rest_spec_name indices.clear_cache
* @availability stack stability=stable
* @availability serverless stability=stable visibility=private
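As an illustrative aside (not part of the commit), here is a minimal sketch of calling this API through the @elastic/elasticsearch TypeScript client; the node URL and index name are hypothetical:

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// Clear only the fielddata cache of the target index,
// leaving the query and request caches untouched.
await client.indices.clearCache({
  index: 'my-index',
  fielddata: true,
  query: false,
  request: false,
})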
25 changes: 24 additions & 1 deletion specification/indices/clone/IndicesCloneRequest.ts
@@ -25,7 +25,30 @@ import { IndexName, Name, WaitForActiveShards } from '@_types/common'
import { Duration } from '@_types/Time'

/**
* Clones an existing index.
* Clone an index.
* Clone an existing index into a new index.
* Each original primary shard is cloned into a new primary shard in the new index.
*
* IMPORTANT: Elasticsearch does not apply index templates to the resulting index.
* The API also does not copy index metadata from the original index.
* Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information.
* For example, if you clone a CCR follower index, the resulting clone will not be a follower index.
*
* The clone API copies most index settings from the source index to the resulting index, with the exception of `index.number_of_replicas` and `index.auto_expand_replicas`.
* To set the number of replicas in the resulting index, configure these settings in the clone request.
*
* Cloning works as follows:
*
* * First, it creates a new target index with the same definition as the source index.
* Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.
* * Finally, it recovers the target index as though it were a closed index which had just been re-opened.
*
* IMPORTANT: Indices can only be cloned if they meet the following requirements:
*
* * The target index must not exist.
* * The source index must have the same number of primary shards as the target index.
* * The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.
*
* @rest_spec_name indices.clone
* @availability stack since=7.4.0 stability=stable
*/
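A hedged sketch of the clone workflow with the same hypothetical client setup; note that the source index must be made read-only before it can be cloned:

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// The source index must block writes before it can be cloned.
await client.indices.addBlock({ index: 'my-source-index', block: 'write' })

// Replica settings are not copied, so set them explicitly here.
await client.indices.clone({
  index: 'my-source-index',
  target: 'my-target-index',
  settings: { 'index.number_of_replicas': 1 },
})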
19 changes: 18 additions & 1 deletion specification/indices/close/CloseIndexRequest.ts
@@ -22,7 +22,24 @@ import { ExpandWildcards, Indices, WaitForActiveShards } from '@_types/common'
import { Duration } from '@_types/Time'

/**
* Closes an index.
* Close an index.
* A closed index is blocked for read or write operations and does not allow all operations that opened indices allow.
* It is not possible to index documents or to search for documents in a closed index.
* Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.
*
* When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index.
* The shards will then go through the normal recovery process.
* The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
*
* You can open and close multiple indices.
* An error is thrown if the request explicitly refers to a missing index.
* This behaviour can be turned off using the `ignore_unavailable=true` parameter.
*
* By default, you must explicitly name the indices you are opening or closing.
* To open or close indices with `_all`, `*`, or other wildcard expressions, change the `action.destructive_requires_name` setting to `false`. This setting can also be changed with the cluster update settings API.
*
* Closed indices consume a significant amount of disk space, which can cause problems in managed environments.
* Closing indices can be turned off with the cluster settings API by setting `cluster.indices.close.enable` to `false`.
* @doc_id indices-close
* @rest_spec_name indices.close
* @availability stack stability=stable
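An illustrative sketch (hypothetical index names) showing both an explicit close and a wildcard close:

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// Close one explicitly named index.
await client.indices.close({ index: 'my-index' })

// A wildcard close only succeeds if the cluster setting
// action.destructive_requires_name has been set to false.
await client.indices.close({ index: 'logs-2024-*', ignore_unavailable: true })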
5 changes: 4 additions & 1 deletion specification/indices/disk_usage/IndicesDiskUsageRequest.ts
@@ -21,7 +21,10 @@ import { RequestBase } from '@_types/Base'
import { ExpandWildcards, Indices } from '@_types/common'

/**
* Analyzes the disk usage of each field of an index or data stream.
* Analyze the index disk usage.
* Analyze the disk usage of each field of an index or data stream.
* This API might not support indices created in previous Elasticsearch versions.
* The result for a small index can be inaccurate, as some parts of an index might not be analyzed by the API.
* @doc_id indices-disk-usage
* @rest_spec_name indices.disk_usage
* @availability stack since=7.15.0 stability=experimental
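For illustration only (index name hypothetical), a sketch of requesting the analysis; explicit opt-in is required because the analysis is costly:

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// run_expensive_tasks must be set for the analysis to run.
const usage = await client.indices.diskUsage({
  index: 'my-index',
  run_expensive_tasks: true,
})
console.log(JSON.stringify(usage, null, 2))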
9 changes: 8 additions & 1 deletion specification/indices/downsample/Request.ts
@@ -22,7 +22,14 @@ import { RequestBase } from '@_types/Base'
import { IndexName } from '@_types/common'

/**
* Aggregates a time series (TSDS) index and stores pre-computed statistical summaries (`min`, `max`, `sum`, `value_count` and `avg`) for each metric field grouped by a configured time interval.
* Downsample an index.
* Aggregate a time series (TSDS) index and store pre-computed statistical summaries (`min`, `max`, `sum`, `value_count` and `avg`) for each metric field grouped by a configured time interval.
* For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index.
* All documents within an hour interval are summarized and stored as a single document in the downsample index.
*
* NOTE: Only indices in a time series data stream are supported.
* Neither field nor document level security can be defined on the source index.
* The source index must be read only (`index.blocks.write: true`).
* @doc_id indices-downsample-data-stream
* @rest_spec_name indices.downsample
* @availability stack since=8.5.0 stability=experimental
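A sketch under stated assumptions: the backing-index and target names are hypothetical, and the downsample body is assumed to be passed as `config` in the TypeScript client:

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// The source must be a read-only TSDS backing index.
await client.indices.addBlock({
  index: '.ds-my-tsds-2024.12.12-000001',
  block: 'write',
})

// Roll 10-second samples up into one summary document per hour.
await client.indices.downsample({
  index: '.ds-my-tsds-2024.12.12-000001',
  target_index: 'my-tsds-downsampled-1h',
  config: { fixed_interval: '1h' },
})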
@@ -27,7 +27,10 @@ import {
import { Duration } from '@_types/Time'

/**
* Returns field usage information for each shard and field of an index.
* Get field usage stats.
* Get field usage information for each shard and field of an index.
* Field usage statistics are automatically captured when queries are running on a cluster.
* A shard-level search request that accesses a given field, even if multiple times during that request, is counted as a single use.
* @rest_spec_name indices.field_usage_stats
* @availability stack since=7.15.0 stability=experimental
* @availability serverless stability=experimental visibility=private
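A minimal, illustrative call (index name hypothetical):

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// Returns per-shard counts of how often each field was accessed.
const stats = await client.indices.fieldUsageStats({ index: 'my-index' })
console.log(JSON.stringify(stats, null, 2))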
12 changes: 11 additions & 1 deletion specification/indices/flush/IndicesFlushRequest.ts
@@ -21,7 +21,17 @@ import { RequestBase } from '@_types/Base'
import { ExpandWildcards, Indices } from '@_types/common'

/**
* Flushes one or more data streams or indices.
* Flush data streams or indices.
* Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index.
* When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart.
* Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
*
* After each operation has been flushed it is permanently stored in the Lucene index.
* This may mean that there is no need to maintain an additional copy of it in the transaction log.
* The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
*
* It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly.
* If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
* @doc_id indices-flush
* @rest_spec_name indices.flush
* @availability stack stability=stable
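An illustrative sketch tying the flush semantics above to client calls (names hypothetical):

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// Index a document, then flush; a successful response means every
// document indexed before the call is now durable in the Lucene index.
await client.index({ index: 'my-index', document: { message: 'hello' } })
await client.indices.flush({ index: 'my-index', wait_if_ongoing: true })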
14 changes: 14 additions & 0 deletions specification/indices/forcemerge/IndicesForceMergeRequest.ts
@@ -22,9 +22,23 @@ import { ExpandWildcards, Indices } from '@_types/common'
import { long } from '@_types/Numeric'

/**
* Force a merge.
* Perform the force merge operation on the shards of one or more indices.
* For data streams, the API forces a merge on the shards of the stream's backing indices.
*
* Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents.
* Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.
*
* WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes).
* When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone".
* These soft-deleted documents are automatically cleaned up during regular segment merges.
* But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges.
* So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance.
* If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.
* @rest_spec_name indices.forcemerge
* @availability stack since=2.1.0 stability=stable
* @availability serverless stability=stable visibility=private
* @ext_doc_id index-modules-merge
*/
export interface Request extends RequestBase {
path_parts: {
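A cautious sketch that follows the warning above by blocking writes before merging (index name hypothetical):

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// Make the index read-only first, then merge each shard
// down to a single segment.
await client.indices.addBlock({ index: 'my-old-index', block: 'write' })
await client.indices.forcemerge({ index: 'my-old-index', max_num_segments: 1 })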
@@ -22,6 +22,17 @@ import { IndexName } from '@_types/common'
import { Duration } from '@_types/Time'

/**
* Promote a data stream.
* Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream.
*
* With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster.
* These data streams can't be rolled over in the local cluster.
* These replicated data streams roll over only if the upstream data stream rolls over.
* In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.
*
* NOTE: When promoting a data stream, ensure the local cluster has a data stream enabled index template that matches the data stream.
* If this is missing, the data stream will not be able to roll over until a matching index template is created.
* This will affect the lifecycle management of the data stream and interfere with the data stream size and retention.
* @rest_spec_name indices.promote_data_stream
* @availability stack since=7.9.0 stability=stable
*/
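A one-call sketch; the data stream name is hypothetical, and a matching index template is assumed to exist per the note above:

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// Promote the replicated stream so the local cluster can roll it over.
await client.indices.promoteDataStream({ name: 'my-replicated-stream' })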
12 changes: 12 additions & 0 deletions specification/indices/put_template/IndicesPutTemplateRequest.ts
@@ -29,9 +29,21 @@ import { Duration } from '@_types/Time'
/**
* Create or update an index template.
* Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
* Elasticsearch applies templates to new indices based on an index pattern that matches the index name.
*
* IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
*
* Composable templates always take precedence over legacy templates.
* If no composable template matches a new index, matching legacy templates are applied according to their order.
*
* Index templates are only applied during index creation.
* Changes to index templates do not affect existing indices.
* Settings and mappings specified in create index API requests override any settings or mappings specified in an index template.
* @rest_spec_name indices.put_template
* @availability stack stability=stable
* @availability serverless stability=stable visibility=public
* @cluster_privileges manage_index_templates, manage
* @ext_doc_id index-templates
*/
export interface Request extends RequestBase {
path_parts: {
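A sketch of creating a legacy template (name and pattern hypothetical); `order` breaks ties when several legacy templates match the same new index:

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

await client.indices.putTemplate({
  name: 'my-legacy-template',
  index_patterns: ['te*'],
  order: 0, // higher-order legacy templates win on conflict
  settings: { number_of_shards: 1 },
})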
23 changes: 21 additions & 2 deletions specification/indices/recovery/IndicesRecoveryRequest.ts
@@ -21,8 +21,27 @@ import { RequestBase } from '@_types/Base'
import { Indices } from '@_types/common'

/**
* Returns information about ongoing and completed shard recoveries for one or more indices.
* For data streams, the API returns information for the stream’s backing indices.
* Get index recovery information.
* Get information about ongoing and completed shard recoveries for one or more indices.
* For data streams, the API returns information for the stream's backing indices.
*
* Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard.
* When a shard recovery completes, the recovered shard is available for search and indexing.
*
* Recovery automatically occurs during the following processes:
*
* * When creating an index for the first time.
* * When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
* When creating new replica shard copies from the primary.
* When relocating a shard copy to a different node in the same cluster.
* When restoring a snapshot.
* When performing a clone, shrink, or split operation.
*
* You can determine the cause of a shard recovery using the recovery or cat recovery APIs.
*
* The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster.
* It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist.
* This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.
* @rest_spec_name indices.recovery
* @availability stack stability=stable
* @availability serverless stability=stable visibility=private
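An illustrative query that narrows the report to in-flight recoveries (index name hypothetical):

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// Only shard copies still recovering, with per-file detail.
const recovery = await client.indices.recovery({
  index: 'my-index',
  active_only: true,
  detailed: true,
})
console.log(JSON.stringify(recovery, null, 2))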
@@ -21,8 +21,23 @@ import { RequestBase } from '@_types/Base'
import { ExpandWildcards, Indices } from '@_types/common'

/**
* Reload search analyzers.
* Reload an index's search analyzers and their resources.
* For data streams, the API reloads search analyzers and resources for the stream's backing indices.
*
* IMPORTANT: After reloading the search analyzers you should clear the request cache to make sure it doesn't contain responses derived from the previous versions of the analyzer.
*
* You can use the reload search analyzers API to pick up changes to synonym files used in the `synonym_graph` or `synonym` token filter of a search analyzer.
* To be eligible, the token filter must have an `updateable` flag of `true` and only be used in search analyzers.
*
* NOTE: This API does not perform a reload for each shard of an index.
* Instead, it performs a reload for each node containing index shards.
* As a result, the total shard count returned by the API can differ from the number of index shards.
* Because reloading affects every node with an index shard, it is important to update the synonym file on every data node in the cluster (including nodes that don't contain a shard replica) before using this API.
* This ensures the synonym file is updated everywhere in the cluster in case shards are relocated in the future.
* @rest_spec_name indices.reload_search_analyzers
* @availability stack since=7.3.0 stability=stable
* @ext_doc_id search-analyzer
*/
export interface Request extends RequestBase {
path_parts: {
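A final sketch pairing the reload with the cache-clearing step the IMPORTANT note above calls for (index name hypothetical):

import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

// Reload after the synonym file has been updated on every data node.
await client.indices.reloadSearchAnalyzers({ index: 'my-index' })

// Then clear the request cache so no stale responses linger.
await client.indices.clearCache({ index: 'my-index', request: true })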