
Commit: Generate heading IDs

JoshMock committed Nov 20, 2024
1 parent df3959c commit a14b170
Showing 500 changed files with 12,670 additions and 104 deletions.
1 change: 1 addition & 0 deletions docs/reference-async_search-delete.asciidoc
@@ -26,6 +26,7 @@
////////

[discrete]
[[client.asyncSearch.delete]]
=== client.asyncSearch.delete

Delete an async search. If the asynchronous search is still running, it is cancelled. Otherwise, the saved search results are deleted. If the Elasticsearch security features are enabled, the deletion of a specific async search is restricted to: the authenticated user that submitted the original search request; users that have the `cancel_task` cluster privilege.
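For orientation, here is a minimal sketch of calling this endpoint from the JavaScript client; the node URL and search id are placeholders, not values taken from this commit.

[source,ts]
----
import { Client } from '@elastic/elasticsearch'

// Placeholder connection settings.
const client = new Client({ node: 'http://localhost:9200' })

// Cancels the search if it is still running, otherwise deletes the stored results.
// 'my-async-search-id' stands in for an id returned by client.asyncSearch.submit().
await client.asyncSearch.delete({ id: 'my-async-search-id' })
----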

3 changes: 2 additions & 1 deletion docs/reference-async_search-get.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.asyncSearch.get]]
=== client.asyncSearch.get

Get async search results. Retrieve the results of a previously submitted asynchronous search request. If the Elasticsearch security features are enabled, access to the results of a specific async search is restricted to the user or API key that submitted it.
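An illustrative sketch, assuming a `client` instance configured as in the earlier example; the id, timeout, and `keep_alive` values are placeholders.

[source,ts]
----
// 'my-async-search-id' is a placeholder for the id returned by asyncSearch.submit().
const result = await client.asyncSearch.get({
  id: 'my-async-search-id',
  wait_for_completion_timeout: '2s', // block for up to 2s if the search is still running
  keep_alive: '5m'                   // keep the stored results around for another 5 minutes
})
console.log(result.is_running, result.response.hits.hits)
----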

@@ -65,7 +66,7 @@ interface AsyncSearchGetRequest extends <<RequestBase>> {
++++
<pre>
++++
type AsyncSearchGetResponse<TDocument = unknown, TAggregations = Record<<<AggregateName>>, AggregationsAggregate>> = AsyncSearchAsyncSearchDocumentResponseBase<TDocument, TAggregations>
type AsyncSearchGetResponse<TDocument = unknown, TAggregations = Record<<<AggregateName>>, <<AggregationsAggregate>>>> = AsyncSearchAsyncSearchDocumentResponseBase<TDocument, TAggregations>

[pass]
++++

1 change: 1 addition & 0 deletions docs/reference-async_search-status.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.asyncSearch.status]]
=== client.asyncSearch.status

Get the async search status. Get the status of a previously submitted async search request given its identifier, without retrieving search results. If the Elasticsearch security features are enabled, use of this API is restricted to the `monitoring_user` role.
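A small sketch of polling the status without retrieving results, again assuming the `client` from the first example and a placeholder id.

[source,ts]
----
const status = await client.asyncSearch.status({ id: 'my-async-search-id' })
if (!status.is_running) {
  // The search has finished; completion_status holds the HTTP status it completed with.
  console.log('completed with status', status.completion_status)
}
----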

19 changes: 10 additions & 9 deletions docs/reference-async_search-submit.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.asyncSearch.submit]]
=== client.asyncSearch.submit

Run an async search. When the primary sort of the results is an indexed field, shards get sorted based on minimum and maximum value that they hold for that field. Partial results become available following the sort criteria that was requested. Warning: Asynchronous search does not support scroll or search requests that include only the suggest section. By default, Elasticsearch does not allow you to store an async search response larger than 10Mb and an attempt to do this results in an error. The maximum allowed size for a stored async search response can be set by changing the `search.max_async_search_response_size` cluster level setting.
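A hedged sketch of submitting a search from the JavaScript client; the index name, query, sort field, and timeouts are illustrative placeholders.

[source,ts]
----
const submitted = await client.asyncSearch.submit({
  index: 'my-index',
  wait_for_completion_timeout: '1s', // return after 1s even if the search is still running
  keep_on_completion: true,          // keep results retrievable via asyncSearch.get
  query: { match: { 'user.id': 'kimchy' } },
  sort: [{ '@timestamp': 'desc' }]
})
console.log(submitted.id, submitted.is_running)
----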

@@ -58,7 +59,7 @@ interface AsyncSearchSubmitRequest extends <<RequestBase>> {
analyze_wildcard?: boolean
batched_reduce_size?: <<_long, long>>
ccs_minimize_roundtrips?: boolean
default_operator?: QueryDslOperator
default_operator?: <<QueryDslOperator>>
df?: string
expand_wildcards?: <<ExpandWildcards>>
ignore_throttled?: boolean
@@ -81,30 +82,30 @@ interface AsyncSearchSubmitRequest extends <<RequestBase>> {
_source_excludes?: <<Fields>>
_source_includes?: <<Fields>>
q?: string
aggregations?: Record<string, AggregationsAggregationContainer>
aggregations?: Record<string, <<AggregationsAggregationContainer>>>
pass:[/**] @alias aggregations */
aggs?: Record<string, AggregationsAggregationContainer>
aggs?: Record<string, <<AggregationsAggregationContainer>>>
collapse?: SearchFieldCollapse
explain?: boolean
ext?: Record<string, any>
from?: <<_integer, integer>>
highlight?: SearchHighlight
track_total_hits?: SearchTrackHits
indices_boost?: Record<<<IndexName>>, <<_double, double>>>[]
docvalue_fields?: (QueryDslFieldAndFormat | <<Field>>)[]
docvalue_fields?: (<<QueryDslFieldAndFormat>> | <<Field>>)[]
knn?: <<KnnSearch>> | <<KnnSearch>>[]
min_score?: <<_double, double>>
post_filter?: QueryDslQueryContainer
post_filter?: <<QueryDslQueryContainer>>
profile?: boolean
query?: QueryDslQueryContainer
query?: <<QueryDslQueryContainer>>
rescore?: SearchRescore | SearchRescore[]
script_fields?: Record<string, <<ScriptField>>>
search_after?: <<SortResults>>
size?: <<_integer, integer>>
slice?: <<SlicedScroll>>
sort?: <<Sort>>
_source?: SearchSourceConfig
fields?: (QueryDslFieldAndFormat | <<Field>>)[]
fields?: (<<QueryDslFieldAndFormat>> | <<Field>>)[]
suggest?: SearchSuggester
terminate_after?: <<_long, long>>
timeout?: string
@@ -113,7 +114,7 @@ interface AsyncSearchSubmitRequest extends <<RequestBase>> {
seq_no_primary_term?: boolean
stored_fields?: <<Fields>>
pit?: SearchPointInTimeReference
runtime_mappings?: MappingRuntimeFields
runtime_mappings?: <<MappingRuntimeFields>>
stats?: string[]
}

@@ -128,7 +129,7 @@ interface AsyncSearchSubmitRequest extends <<RequestBase>> {
++++
<pre>
++++
type AsyncSearchSubmitResponse<TDocument = unknown, TAggregations = Record<<<AggregateName>>, AggregationsAggregate>> = AsyncSearchAsyncSearchDocumentResponseBase<TDocument, TAggregations>
type AsyncSearchSubmitResponse<TDocument = unknown, TAggregations = Record<<<AggregateName>>, <<AggregationsAggregate>>>> = AsyncSearchAsyncSearchDocumentResponseBase<TDocument, TAggregations>

[pass]
++++

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.autoscaling.deleteAutoscalingPolicy]]
=== client.autoscaling.deleteAutoscalingPolicy

Delete an autoscaling policy. NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.autoscaling.getAutoscalingCapacity]]
=== client.autoscaling.getAutoscalingCapacity

Get the autoscaling capacity. NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported. This API gets the current autoscaling capacity based on the configured autoscaling policy. It will return information to size the cluster appropriately to the current workload. The `required_capacity` is calculated as the maximum of the `required_capacity` result of all individual deciders that are enabled for the policy. The operator should verify that the `current_nodes` match the operator’s knowledge of the cluster to avoid making autoscaling decisions based on stale or incomplete information. The response contains decider-specific information you can use to diagnose how and why autoscaling determined a certain capacity was required. This information is provided for diagnosis only. Do not use this information to make autoscaling decisions.

1 change: 1 addition & 0 deletions docs/reference-autoscaling-get_autoscaling_policy.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.autoscaling.getAutoscalingPolicy]]
=== client.autoscaling.getAutoscalingPolicy

Get an autoscaling policy. NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.

1 change: 1 addition & 0 deletions docs/reference-autoscaling-put_autoscaling_policy.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.autoscaling.putAutoscalingPolicy]]
=== client.autoscaling.putAutoscalingPolicy

Create or update an autoscaling policy. NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
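A hedged sketch of what a call might look like from the JavaScript client. The policy name and body are placeholders, and the assumption that the policy body is passed as a top-level `policy` property should be checked against the client's type definitions.

[source,ts]
----
// Placeholder policy targeting data_hot nodes with the fixed decider;
// passing the body as `policy` is an assumption, not confirmed by this commit.
await client.autoscaling.putAutoscalingPolicy({
  name: 'my-autoscaling-policy',
  policy: {
    roles: ['data_hot'],
    deciders: { fixed: {} }
  }
})
----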

1 change: 1 addition & 0 deletions docs/reference-bulk.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.bulk]]
=== client.bulk

Bulk index or delete documents. Performs multiple indexing or delete operations in a single API call. This reduces overhead and can greatly increase indexing speed.
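A minimal sketch of a bulk call from the JavaScript client; the index name, ids, and documents are placeholders.

[source,ts]
----
// Pairs of action and document lines are passed via `operations`.
const result = await client.bulk({
  refresh: true, // make the changes searchable immediately; optional
  operations: [
    { index: { _index: 'my-index', _id: '1' } },
    { title: 'Document one' },
    { index: { _index: 'my-index', _id: '2' } },
    { title: 'Document two' },
    { delete: { _index: 'my-index', _id: '3' } }
  ]
})
if (result.errors) {
  // Inspect per-item results to find the operations that failed.
  console.error(result.items.filter(item => (item.index ?? item.delete)?.error))
}
----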

1 change: 1 addition & 0 deletions docs/reference-cat-aliases.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.aliases]]
=== client.cat.aliases

Get aliases. Retrieves the cluster’s index aliases, including filter and routing information. The API does not return data stream aliases. CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.
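For completeness, a sketch of both the cat endpoint (human-oriented) and the aliases API the description recommends for applications; the alias name is a placeholder.

[source,ts]
----
// Human-oriented cat output, requested as JSON rows.
const rows = await client.cat.aliases({ format: 'json' })

// Application-oriented equivalent suggested by the description above.
const aliases = await client.indices.getAlias({ name: 'my-alias' })
----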

1 change: 1 addition & 0 deletions docs/reference-cat-allocation.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.allocation]]
=== client.cat.allocation

Provides a snapshot of the number of shards allocated to each data node and their disk space. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

1 change: 1 addition & 0 deletions docs/reference-cat-component_templates.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.componentTemplates]]
=== client.cat.componentTemplates

Get component templates. Returns information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.

1 change: 1 addition & 0 deletions docs/reference-cat-count.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.count]]
=== client.cat.count

Get a document count. Provides quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process. CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.
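A brief sketch contrasting the cat endpoint with the count API recommended for applications; the index name is a placeholder.

[source,ts]
----
// Human-oriented document count as JSON rows.
const rows = await client.cat.count({ index: 'my-index', format: 'json' })

// Application-oriented equivalent.
const { count } = await client.count({ index: 'my-index' })
console.log(count)
----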

1 change: 1 addition & 0 deletions docs/reference-cat-fielddata.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.fielddata]]
=== client.cat.fielddata

Returns the amount of heap memory currently used by the field data cache on every data node in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.

1 change: 1 addition & 0 deletions docs/reference-cat-health.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.health]]
=== client.cat.health

Returns the health status of a cluster, similar to the cluster health API. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the cluster health API. This API is often used to check malfunctioning clusters. To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats: `HH:MM:SS`, which is human-readable but includes no date information; `Unix epoch time`, which is machine-sortable and includes date information. The latter format is useful for cluster recoveries that take multiple days. You can use the cat health API to verify cluster health across multiple nodes. You also can use the API to track the recovery of a large cluster over a longer period of time.
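A short sketch, assuming the `client` from the first example; the cluster health API is the application-facing alternative mentioned above.

[source,ts]
----
// Human-oriented health line(s) as JSON.
const rows = await client.cat.health({ format: 'json' })

// Application-oriented equivalent with a typed status field.
const health = await client.cluster.health()
console.log(health.status) // e.g. 'green', 'yellow', or 'red'
----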

1 change: 1 addition & 0 deletions docs/reference-cat-help.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.help]]
=== client.cat.help

Get CAT help. Returns help for the CAT APIs.

1 change: 1 addition & 0 deletions docs/reference-cat-indices.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.indices]]
=== client.cat.indices

Get index information. Returns high-level information about indices in a cluster, including backing indices for data streams. Use this request to get the following information for each index in a cluster: - shard count - document count - deleted document count - primary store size - total store size of all shards, including shard replicas These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs. CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.
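An illustrative sketch; the index pattern and column names are placeholders, and the `h`/`s` column parameters are assumed to be exposed by the client as they are in the REST API.

[source,ts]
----
const rows = await client.cat.indices({
  index: 'my-*',                             // placeholder index pattern
  format: 'json',
  h: ['index', 'docs.count', 'store.size'],  // pick columns (assumed supported)
  s: 'store.size:desc'                       // sort rows by store size (assumed supported)
})
----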

1 change: 1 addition & 0 deletions docs/reference-cat-master.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.master]]
=== client.cat.master

Returns information about the master node, including the ID, bound IP address, and name. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

1 change: 1 addition & 0 deletions docs/reference-cat-ml_data_frame_analytics.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.mlDataFrameAnalytics]]
=== client.cat.mlDataFrameAnalytics

Get data frame analytics jobs. Returns configuration and usage information about data frame analytics jobs. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.

1 change: 1 addition & 0 deletions docs/reference-cat-ml_datafeeds.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.mlDatafeeds]]
=== client.cat.mlDatafeeds

Get datafeeds. Returns configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster privileges to use this API. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.
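A sketch pairing the cat endpoint with the datafeed statistics API recommended for applications; the datafeed id pattern is a placeholder.

[source,ts]
----
// Human-oriented summary of matching datafeeds.
const rows = await client.cat.mlDatafeeds({ datafeed_id: 'my-datafeed-*', format: 'json' })

// Application-oriented equivalent.
const stats = await client.ml.getDatafeedStats({ datafeed_id: 'my-datafeed-*' })
----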

1 change: 1 addition & 0 deletions docs/reference-cat-ml_jobs.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.mlJobs]]
=== client.cat.mlJobs

Get anomaly detection jobs. Returns configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster privileges to use this API. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.

1 change: 1 addition & 0 deletions docs/reference-cat-ml_trained_models.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.mlTrainedModels]]
=== client.cat.mlTrainedModels

Get trained models. Returns configuration and usage information about inference trained models. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.

1 change: 1 addition & 0 deletions docs/reference-cat-nodeattrs.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.nodeattrs]]
=== client.cat.nodeattrs

Returns information about custom node attributes. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

1 change: 1 addition & 0 deletions docs/reference-cat-nodes.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.nodes]]
=== client.cat.nodes

Returns information about the nodes in a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

1 change: 1 addition & 0 deletions docs/reference-cat-pending_tasks.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.pendingTasks]]
=== client.cat.pendingTasks

Returns cluster-level changes that have not yet been executed. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the pending cluster tasks API.

1 change: 1 addition & 0 deletions docs/reference-cat-plugins.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.plugins]]
=== client.cat.plugins

Returns a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

1 change: 1 addition & 0 deletions docs/reference-cat-recovery.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.recovery]]
=== client.cat.recovery

Returns information about ongoing and completed shard recoveries. Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. For data streams, the API returns information about the stream’s backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.

1 change: 1 addition & 0 deletions docs/reference-cat-repositories.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.repositories]]
=== client.cat.repositories

Returns the snapshot repositories for a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot repository API.

1 change: 1 addition & 0 deletions docs/reference-cat-segments.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.segments]]
=== client.cat.segments

Returns low-level information about the Lucene segments in index shards. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index segments API.

1 change: 1 addition & 0 deletions docs/reference-cat-shards.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.shards]]
=== client.cat.shards

Returns information about the shards in a cluster. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

1 change: 1 addition & 0 deletions docs/reference-cat-snapshots.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.snapshots]]
=== client.cat.snapshots

Returns information about the snapshots stored in one or more repositories. A snapshot is a backup of an index or running Elasticsearch cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot API.

1 change: 1 addition & 0 deletions docs/reference-cat-tasks.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.tasks]]
=== client.cat.tasks

Returns information about tasks currently executing in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.

1 change: 1 addition & 0 deletions docs/reference-cat-templates.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.templates]]
=== client.cat.templates

Returns information about index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.

1 change: 1 addition & 0 deletions docs/reference-cat-thread_pool.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.threadPool]]
=== client.cat.threadPool

Returns thread pool statistics for each node in a cluster. Returned information includes all built-in thread pools and custom thread pools. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

1 change: 1 addition & 0 deletions docs/reference-cat-transforms.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.cat.transforms]]
=== client.cat.transforms

Get transforms. Returns configuration and usage information about transforms. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.

1 change: 1 addition & 0 deletions docs/reference-ccr-delete_auto_follow_pattern.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.ccr.deleteAutoFollowPattern]]
=== client.ccr.deleteAutoFollowPattern

Deletes auto-follow patterns.

1 change: 1 addition & 0 deletions docs/reference-ccr-follow.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.ccr.follow]]
=== client.ccr.follow

Creates a new follower index configured to follow the referenced leader index.
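A hedged sketch of creating a follower index from the JavaScript client; the cluster and index names are placeholders.

[source,ts]
----
// Start replicating 'leader-index' from the remote cluster into 'follower-index'.
await client.ccr.follow({
  index: 'follower-index',          // name of the new follower index
  remote_cluster: 'remote-cluster', // as configured in cluster settings
  leader_index: 'leader-index',
  wait_for_active_shards: 1         // wait for the primary to be active before returning
})
----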

1 change: 1 addition & 0 deletions docs/reference-ccr-follow_info.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.ccr.followInfo]]
=== client.ccr.followInfo

Retrieves information about all follower indices, including parameters and status for each follower index.

1 change: 1 addition & 0 deletions docs/reference-ccr-follow_stats.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.ccr.followStats]]
=== client.ccr.followStats

Retrieves follower stats. Returns shard-level stats about the following tasks associated with each shard for the specified indices.

1 change: 1 addition & 0 deletions docs/reference-ccr-forget_follower.asciidoc

@@ -26,6 +26,7 @@
////////

[discrete]
[[client.ccr.forgetFollower]]
=== client.ccr.forgetFollower

Removes the follower retention leases from the leader.