diff --git a/docs/reference/ccr/index.asciidoc b/docs/reference/ccr/index.asciidoc index f3180da1ae77e..c840b77d045c6 100644 --- a/docs/reference/ccr/index.asciidoc +++ b/docs/reference/ccr/index.asciidoc @@ -1,6 +1,7 @@ [role="xpack"] [[xpack-ccr]] == {ccr-cap} + With {ccr}, you can replicate indices across clusters to: * Continue handling search requests in the event of a datacenter outage diff --git a/docs/reference/data-management.asciidoc b/docs/reference/data-management.asciidoc index 4245227a1524d..35ce019259d85 100644 --- a/docs/reference/data-management.asciidoc +++ b/docs/reference/data-management.asciidoc @@ -4,31 +4,10 @@ [partintro] -- -The data you store in {es} generally falls into one of two categories: -* Content: a collection of items you want to search, such as a catalog of products -* Time series data: a stream of continuously-generated timestamped data, such as log entries +include::{es-ref-dir}/lifecycle-options.asciidoc[] -Content might be frequently updated, -but the value of the content remains relatively constant over time. -You want to be able to retrieve items quickly regardless of how old they are. - -Time series data keeps accumulating over time, so you need strategies for -balancing the value of the data against the cost of storing it. -As it ages, it tends to become less important and less-frequently accessed, -so you can move it to less expensive, less performant hardware. -For your oldest data, what matters is that you have access to the data. -It's ok if queries take longer to complete. - -To help you manage your data, {es} offers you: - -* <> ({ilm-init}) to manage both indices and data streams and it is fully customisable, and -* <> which is the built-in lifecycle of data streams and addresses the most -common lifecycle management needs. - -preview::["The built-in data stream lifecycle is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but this feature is not subject to the support SLA of official GA features."] - -**{ilm-init}** can be used to manage both indices and data streams and it allows you to: +**{ilm-init}** can be used to manage both indices and data streams. It allows you to do the following: * Define the retention period of your data. The retention period is the minimum time your data will be stored in {es}. Data older than this period can be deleted by {es}. @@ -38,12 +17,24 @@ Data older than this period can be deleted by {es}. for your older indices while reducing operating costs and maintaining search performance. * Perform <> of data stored on less-performant hardware. -**Data stream lifecycle** is less feature rich but is focused on simplicity, so it allows you to easily: +**Data stream lifecycle** is less feature rich but is focused on simplicity. It allows you to do the following: * Define the retention period of your data. The retention period is the minimum time your data will be stored in {es}. Data older than this period can be deleted by {es} at a later time. -* Improve the performance of your data stream by performing background operations that will optimise the way your data -stream is stored. +* Improve the performance of your data stream by performing background operations that will optimize the way your data stream is stored. + +**Elastic Curator** is a tool that allows you to manage your indices and snapshots using user-defined filters and predefined actions. 
If ILM provides the functionality to manage your index lifecycle, and you have at least a Basic license, consider using ILM in place of Curator. Many stack components make use of ILM by default. {curator-ref-current}/ilm.html[Learn more]. + +NOTE: <> is a deprecated {es} feature that allows you to manage the amount of data that is stored in your cluster, similar to the downsampling functionality of {ilm-init} and data stream lifecycle. This feature should not be used for new deployments. + +[TIP] +==== +{ilm-init} is not available on {es-serverless}. + +In an {cloud} or self-managed environment, {ilm-init} lets you automatically transition indices through data tiers according to your performance needs and retention requirements. This allows you to balance hardware costs with performance. {es-serverless} eliminates this complexity by optimizing your cluster performance for you. + +Data stream lifecycle is an optimized lifecycle tool that lets you focus on the most common lifecycle management needs, without unnecessary hardware-centric concepts like data tiers. +==== -- include::ilm/index.asciidoc[] diff --git a/docs/reference/data-store-architecture.asciidoc b/docs/reference/data-store-architecture.asciidoc new file mode 100644 index 0000000000000..1d1f251cac676 --- /dev/null +++ b/docs/reference/data-store-architecture.asciidoc @@ -0,0 +1,18 @@ += Data store architecture + +[partintro] +-- + +{es} is a distributed document store. Instead of storing information as rows of columnar data, {es} stores complex data structures that have been serialized as JSON documents. When you have multiple {es} nodes in a cluster, stored documents are distributed across the cluster and can be accessed immediately +from any node. + +The topics in this section provide information about the architecture of {es} and how it stores and retrieves data: + +* <>: Learn about the basic building blocks of an {es} cluster, including nodes, shards, primaries, and replicas. +* <>: Learn how {es} replicates read and write operations across shards and shard copies. +* <>: Learn how {es} allocates and balances shards across nodes. +-- + +include::nodes-shards.asciidoc[] +include::docs/data-replication.asciidoc[leveloffset=-1] +include::modules/shard-ops.asciidoc[] diff --git a/docs/reference/docs.asciidoc b/docs/reference/docs.asciidoc index ff2d823410a6d..1703f033ad953 100644 --- a/docs/reference/docs.asciidoc +++ b/docs/reference/docs.asciidoc @@ -1,9 +1,7 @@ [[docs]] == Document APIs -This section starts with a short introduction to {es}'s <>, followed by a detailed description of the following CRUD -APIs: +This section describes the following CRUD APIs: .Single document APIs * <> @@ -18,8 +16,6 @@ APIs: * <> * <> -include::docs/data-replication.asciidoc[] - include::docs/index_.asciidoc[] include::docs/get.asciidoc[] diff --git a/docs/reference/docs/data-replication.asciidoc b/docs/reference/docs/data-replication.asciidoc index 2c1a16c81d011..6ee266070e727 100644 --- a/docs/reference/docs/data-replication.asciidoc +++ b/docs/reference/docs/data-replication.asciidoc @@ -1,6 +1,6 @@ [[docs-replication]] -=== Reading and Writing documents +=== Reading and writing documents [discrete] ==== Introduction diff --git a/docs/reference/high-availability-overview.asciidoc b/docs/reference/high-availability-overview.asciidoc new file mode 100644 index 0000000000000..671948de8e951 --- /dev/null +++ b/docs/reference/high-availability-overview.asciidoc @@ -0,0 +1,20 @@ +Your data is important to you. 
Keeping it safe and available is important to Elastic. Sometimes your cluster may experience hardware failure or a power loss. To help you plan for this, {es} offers a number of features to achieve high availability despite failures. Depending on your deployment type, you might need to provision servers in different zones or configure external repositories to meet your organization's availability needs. + +* *<>* ++ +Distributed systems like {es} are designed to keep working even if some of their components have failed. An {es} cluster can continue operating normally if some of its nodes are unavailable or disconnected, as long as there are enough well-connected nodes to take over the unavailable nodes' responsibilities. ++ +If you're designing a smaller cluster, you might focus on making your cluster resilient to single-node failures. Designers of larger clusters must also consider cases where multiple nodes fail at the same time. +// need to improve connections to ECE, EC hosted, ECK pod/zone docs in the child topics +* *<>* ++ +To effectively distribute read and write operations across nodes, the nodes in a cluster need good, reliable connections to each other. To provide better connections, you typically co-locate the nodes in the same data center or nearby data centers. ++ +Co-locating nodes in a single location exposes you to the risk of a single outage taking your entire cluster offline. To maintain high availability, you can prepare a second cluster that can take over in case of disaster by implementing {ccr} (CCR). ++ +CCR provides a way to automatically synchronize indices from a leader cluster to a follower cluster. The follower cluster could be in a different data center or even on a different continent from the leader cluster. If the leader cluster fails, the follower cluster can take over. ++ +TIP: You can also use CCR to create secondary clusters to serve read requests in geo-proximity to your users. +* *<>* ++ +Take snapshots of your cluster that can be restored in case of failure. \ No newline at end of file diff --git a/docs/reference/high-availability.asciidoc b/docs/reference/high-availability.asciidoc index 2f34b6bc1bb21..1400fed18be81 100644 --- a/docs/reference/high-availability.asciidoc +++ b/docs/reference/high-availability.asciidoc @@ -3,28 +3,7 @@ [partintro] -- -Your data is important to you. Keeping it safe and available is important -to {es}. Sometimes your cluster may experience hardware failure or a power -loss. To help you plan for this, {es} offers a number of features -to achieve high availability despite failures. - -* With proper planning, a cluster can be - <> to many of the - things that commonly go wrong, from the loss of a single node or network - connection right up to a zone-wide outage such as power loss. - -* You can use <> to replicate data to a remote _follower_ - cluster which may be in a different data centre or even on a different - continent from the leader cluster. The follower cluster acts as a hot - standby, ready for you to fail over in the event of a disaster so severe that - the leader cluster fails. The follower cluster can also act as a geo-replica - to serve searches from nearby clients. - -* The last line of defence against data loss is to take - <> of your cluster so that you can - restore a completely fresh copy of it elsewhere if needed. 
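+
+A minimal sketch of how the {ccr} and snapshot features described above are exercised, assuming a remote cluster connection named `leader`, a snapshot repository named `my_repository`, and a license level that includes {ccr} are already in place; the index and snapshot names are placeholders:
+
+[source,console]
+----
+# Create a follower index that replicates an index from the remote "leader" cluster
+PUT /my-index-copy/_ccr/follow
+{
+  "remote_cluster": "leader",
+  "leader_index": "my-index"
+}
+
+# Take a snapshot that can be restored after a failure
+PUT /_snapshot/my_repository/my_snapshot?wait_for_completion=true
+----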
+include::{es-ref-dir}/high-availability-overview.asciidoc[] -- include::high-availability/cluster-design.asciidoc[] - -include::ccr/index.asciidoc[] diff --git a/docs/reference/index.asciidoc b/docs/reference/index.asciidoc index 18052cfb64e8f..64a271ab881ea 100644 --- a/docs/reference/index.asciidoc +++ b/docs/reference/index.asciidoc @@ -60,24 +60,30 @@ include::geospatial-analysis.asciidoc[] include::watcher/index.asciidoc[] -// cluster management - -include::monitoring/index.asciidoc[] +// production tasks -include::security/index.asciidoc[] +include::production.asciidoc[] -// production tasks +include::how-to.asciidoc[] include::high-availability.asciidoc[] -include::how-to.asciidoc[] +include::snapshot-restore/index.asciidoc[] + +include::ccr/index.asciidoc[leveloffset=-1] include::autoscaling/index.asciidoc[] -include::snapshot-restore/index.asciidoc[] +// cluster management + +include::security/index.asciidoc[] + +include::monitoring/index.asciidoc[] // reference +include::data-store-architecture.asciidoc[] + include::rest-api/index.asciidoc[] include::commands/index.asciidoc[] diff --git a/docs/reference/intro.asciidoc b/docs/reference/intro.asciidoc index e0100b1c5640b..39bc62d84b4cf 100644 --- a/docs/reference/intro.asciidoc +++ b/docs/reference/intro.asciidoc @@ -375,121 +375,3 @@ Does not yet support full-text search. | N/A |=== - -// New html page -[[scalability]] -=== Get ready for production - -Many teams rely on {es} to run their key services. To keep these services running, you can design your {es} deployment -to keep {es} available, even in case of large-scale outages. To keep it running fast, you also can design your -deployment to be responsive to production workloads. - -{es} is built to be always available and to scale with your needs. It does this using a distributed architecture. -By distributing your cluster, you can keep Elastic online and responsive to requests. - -In case of failure, {es} offers tools for cross-cluster replication and cluster snapshots that can -help you fall back or recover quickly. You can also use cross-cluster replication to serve requests based on the -geographic location of your users and your resources. - -{es} also offers security and monitoring tools to help you keep your cluster highly available. - -[discrete] -[[use-multiple-nodes-shards]] -==== Use multiple nodes and shards - -[NOTE] -==== -Nodes and shards are what make {es} distributed and scalable. - -These concepts aren’t essential if you’re just getting started. How you <> in production determines what you need to know: - -* *Self-managed {es}*: You are responsible for setting up and managing nodes, clusters, shards, and replicas. This includes -managing the underlying infrastructure, scaling, and ensuring high availability through failover and backup strategies. -* *Elastic Cloud*: Elastic can autoscale resources in response to workload changes. Choose from different deployment types -to apply sensible defaults for your use case. A basic understanding of nodes, shards, and replicas is still important. -* *Elastic Cloud Serverless*: You don’t need to worry about nodes, shards, or replicas. These resources are 100% automated -on the serverless platform, which is designed to scale with your workload. -==== - -You can add servers (_nodes_) to a cluster to increase capacity, and {es} automatically distributes your data and query load -across all of the available nodes. - -Elastic is able to distribute your data across nodes by subdividing an index into _shards_. 
Each index in {es} is a grouping -of one or more physical shards, where each shard is a self-contained Lucene index containing a subset of the documents in -the index. By distributing the documents in an index across multiple shards, and distributing those shards across multiple -nodes, {es} increases indexing and query capacity. - -There are two types of shards: _primaries_ and _replicas_. Each document in an index belongs to one primary shard. A replica -shard is a copy of a primary shard. Replicas maintain redundant copies of your data across the nodes in your cluster. -This protects against hardware failure and increases capacity to serve read requests like searching or retrieving a document. - -[TIP] -==== -The number of primary shards in an index is fixed at the time that an index is created, but the number of replica shards can -be changed at any time, without interrupting indexing or query operations. -==== - -Shard copies in your cluster are automatically balanced across nodes to provide scale and high availability. All nodes are -aware of all the other nodes in the cluster and can forward client requests to the appropriate node. This allows {es} -to distribute indexing and query load across the cluster. - -If you’re exploring {es} for the first time or working in a development environment, then you can use a cluster with a single node and create indices -with only one shard. However, in a production environment, you should build a cluster with multiple nodes and indices -with multiple shards to increase performance and resilience. - -// TODO - diagram - -To learn about optimizing the number and size of shards in your cluster, refer to <>. -To learn about how read and write operations are replicated across shards and shard copies, refer to <>. -To adjust how shards are allocated and balanced across nodes, refer to <>. - -[discrete] -[[ccr-disaster-recovery-geo-proximity]] -==== CCR for disaster recovery and geo-proximity - -To effectively distribute read and write operations across nodes, the nodes in a cluster need good, reliable connections -to each other. To provide better connections, you typically co-locate the nodes in the same data center or nearby data centers. - -Co-locating nodes in a single location exposes you to the risk of a single outage taking your entire cluster offline. To -maintain high availability, you can prepare a second cluster that can take over in case of disaster by implementing -cross-cluster replication (CCR). - -CCR provides a way to automatically synchronize indices from your primary cluster to a secondary remote cluster that -can serve as a hot backup. If the primary cluster fails, the secondary cluster can take over. - -You can also use CCR to create secondary clusters to serve read requests in geo-proximity to your users. - -Learn more about <> and about <>. - -[TIP] -==== -You can also take <> of your cluster that can be restored in case of failure. -==== - -[discrete] -[[security-and-monitoring]] -==== Security and monitoring - -As with any enterprise system, you need tools to secure, manage, and monitor your {es} clusters. Security, -monitoring, and administrative features that are integrated into {es} enable you to use {kibana-ref}/introduction.html[Kibana] as a -control center for managing a cluster. - -<>. - -<>. - -[discrete] -[[cluster-design]] -==== Cluster design - -{es} offers many options that allow you to configure your cluster to meet your organization’s goals, requirements, -and restrictions. 
You can review the following guides to learn how to tune your cluster to meet your needs: - -* <> -* <> -* <> -* <> -* <> - -Many {es} options come with different performance considerations and trade-offs. The best way to determine the -optimal configuration for your use case is through https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[testing with your own data and queries]. diff --git a/docs/reference/lifecycle-options.asciidoc b/docs/reference/lifecycle-options.asciidoc new file mode 100644 index 0000000000000..710400d646026 --- /dev/null +++ b/docs/reference/lifecycle-options.asciidoc @@ -0,0 +1,21 @@ +The data you store in {es} generally falls into one of two categories: + +* *Content*: a collection of items you want to search, such as a catalog of products +* *Time series data*: a stream of continuously-generated timestamped data, such as log entries + +*Content* might be frequently updated, +but the value of the content remains relatively constant over time. +You want to be able to retrieve items quickly regardless of how old they are. + +*Time series data* keeps accumulating over time, so you need strategies for +balancing the value of the data against the cost of storing it. +As it ages, it tends to become less important and less-frequently accessed, +so you can move it to less expensive, less performant hardware. +For your oldest data, what matters is that you have access to the data. +It's ok if queries take longer to complete. + +To help you manage your data, {es} offers you the following options: + +* <> +* <> +* {curator-ref-current}/about.html[Elastic Curator] \ No newline at end of file diff --git a/docs/reference/modules/shard-ops.asciidoc b/docs/reference/modules/shard-ops.asciidoc index c0e5ee6a220f0..66ceebcfa0319 100644 --- a/docs/reference/modules/shard-ops.asciidoc +++ b/docs/reference/modules/shard-ops.asciidoc @@ -1,5 +1,5 @@ [[shard-allocation-relocation-recovery]] -=== Shard allocation, relocation, and recovery +== Shard allocation, relocation, and recovery Each <> in Elasticsearch is divided into one or more <>. Each document in an index belongs to a single shard. @@ -12,14 +12,16 @@ Over the course of normal operation, Elasticsearch allocates shard copies to nod TIP: To learn about optimizing the number and size of shards in your cluster, refer to <>. To learn about how read and write operations are replicated across shards and shard copies, refer to <>. +[discrete] [[shard-allocation]] -==== Shard allocation +=== Shard allocation include::{es-ref-dir}/modules/shard-allocation-desc.asciidoc[] By default, the primary and replica shard copies for an index can be allocated to any node in the cluster, and may be relocated to rebalance the cluster. -===== Adjust shard allocation settings +[discrete] +==== Adjust shard allocation settings You can control how shard copies are allocated using the following settings: @@ -27,7 +29,8 @@ You can control how shard copies are allocated using the following settings: - <>: Use these settings to control how the shard copies for a specific index are allocated. For example, you might want to allocate an index to a node in a specific data tier, or to an node with specific attributes. -===== Monitor shard allocation +[discrete] +==== Monitor shard allocation If a shard copy is unassigned, it means that the shard copy is not allocated to any node in the cluster. 
This can happen if there are not enough nodes in the cluster to allocate the shard copy, or if the shard copy can't be allocated to any node that satisfies the shard allocation filtering rules. When a shard copy is unassigned, your cluster is considered unhealthy and returns a yellow or red cluster health status. @@ -39,12 +42,14 @@ You can use the following APIs to monitor shard allocation: <>. +[discrete] [[shard-recovery]] -==== Shard recovery +=== Shard recovery include::{es-ref-dir}/modules/shard-recovery-desc.asciidoc[] -===== Adjust shard recovery settings +[discrete] +==== Adjust shard recovery settings To control how shards are recovered, for example the resources that can be used by recovery operations, and which indices should be prioritized for recovery, you can adjust the following settings: @@ -54,21 +59,24 @@ To control how shards are recovered, for example the resources that can be used Shard recovery operations also respect general shard allocation settings. -===== Monitor shard recovery +[discrete] +==== Monitor shard recovery You can use the following APIs to monitor shard allocation: - View a list of in-progress and completed recoveries using the <> - View detailed information about a specific recovery using the <> +[discrete] [[shard-relocation]] -==== Shard relocation +=== Shard relocation Shard relocation is the process of moving shard copies from one node to another. This can happen when a node joins or leaves the cluster, or when the cluster is rebalancing. When a shard copy is relocated, it is created as a new shard copy on the target node. When the shard copy is fully allocated and recovered, the old shard copy is deleted. If the shard copy being relocated is a primary, then the new shard copy is marked as primary before the old shard copy is deleted. -===== Adjust shard relocation settings +[discrete] +==== Adjust shard relocation settings You can control how and when shard copies are relocated. For example, you can adjust the rebalancing settings that control when shard copies are relocated to balance the cluster, or the high watermark for disk-based shard allocation that can trigger relocation. These settings are part of the <>. diff --git a/docs/reference/nodes-shards.asciidoc b/docs/reference/nodes-shards.asciidoc new file mode 100644 index 0000000000000..3624ff448f06b --- /dev/null +++ b/docs/reference/nodes-shards.asciidoc @@ -0,0 +1,48 @@ +[[nodes-shards]] +== Nodes and shards + +[NOTE] +==== +Nodes and shards are what make {es} distributed and scalable. + +These concepts aren't essential if you're just getting started. How you <> in production determines what you need to know: + +* *Self-managed {es}*: You are responsible for setting up and managing nodes, clusters, shards, and replicas. This includes +managing the underlying infrastructure, scaling, and ensuring high availability through failover and backup strategies. +* *Elastic Cloud*: Elastic can autoscale resources in response to workload changes. Choose from different deployment types +to apply sensible defaults for your use case. A basic understanding of nodes, shards, and replicas is still important. +* *Elastic Cloud Serverless*: You don't need to worry about nodes, shards, or replicas. These resources are 100% automated +on the serverless platform, which is designed to scale with your workload. +==== + +You can add servers (_nodes_) to a cluster to increase capacity, and {es} automatically distributes your data and query load +across all of the available nodes. 
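+
+For a quick check of how many nodes are in a running cluster and what each one is doing, you can use the cat nodes API and the cluster health API. A minimal example, with no setup assumed:
+
+[source,console]
+----
+# List the nodes in the cluster, including their roles, load, and resource usage
+GET /_cat/nodes?v=true
+
+# Summarize cluster health, including the total number of nodes and data nodes
+GET /_cluster/health
+----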
+ +Elastic is able to distribute your data across nodes by subdividing an index into _shards_. Each index in {es} is a grouping +of one or more physical shards, where each shard is a self-contained Lucene index containing a subset of the documents in +the index. By distributing the documents in an index across multiple shards, and distributing those shards across multiple +nodes, {es} increases indexing and query capacity. + +There are two types of shards: _primaries_ and _replicas_. Each document in an index belongs to one primary shard. A replica +shard is a copy of a primary shard. Replicas maintain redundant copies of your data across the nodes in your cluster. +This protects against hardware failure and increases capacity to serve read requests like searching or retrieving a document. + +[TIP] +==== +The number of primary shards in an index is fixed at the time that an index is created, but the number of replica shards can +be changed at any time, without interrupting indexing or query operations. +==== + +Shard copies in your cluster are automatically balanced across nodes to provide scale and high availability. All nodes are +aware of all the other nodes in the cluster and can forward client requests to the appropriate node. This allows {es} +to distribute indexing and query load across the cluster. + +If you're exploring {es} for the first time or working in a development environment, then you can use a cluster with a single node and create indices +with only one shard. However, in a production environment, you should build a cluster with multiple nodes and indices +with multiple shards to increase performance and resilience. + +// TODO - diagram + +* To learn about optimizing the number and size of shards in your cluster, refer to <>. +* To learn about how read and write operations are replicated across shards and shard copies, refer to <>. +* To adjust how shards are allocated and balanced across nodes, refer to <>. diff --git a/docs/reference/production.asciidoc b/docs/reference/production.asciidoc new file mode 100644 index 0000000000000..51b0c69559589 --- /dev/null +++ b/docs/reference/production.asciidoc @@ -0,0 +1,215 @@ +[chapter] +[[scalability]] += Get ready for production + +`statement of when this matters goes here` + +To make Elasticsearch production-ready, you need to consider the following: + +[discrete] +== Your deployment method + +Elastic offers several methods of deploying Elasticsearch. Each method offers different levels of control over your deployment. Some methods allow you to centrally manage multiple deployments or clusters. + +Refer to the documentation for each deployment method for detailed information about the available features. + +[discrete] +=== Hosted options + +[cols="1,1,1,1",options="header"] +|=== +| Deployment method | Hosted by | Key features | Use case + +| {serverless-docs}/intro.html[*Elastic Cloud Serverless*] +| Elastic +| ?? +| ?? + +| {cloud}/ec-getting-started-trial.html[*Elastic Cloud Hosted*] +| Elastic +| ?? +| ?? + +| *Elasticsearch Add-On for Heroku* +| Elastic +| ?? +| ?? +|=== + +[discrete] +=== Advanced options + +[cols="1,1,1,1",options="header"] +|=== +| Deployment method | Hosted by | Key features | Use case + +| {eck-ref}/k8s-overview.html[*Elastic Cloud on Kubernetes*] +| Self-hosted +| ?? +| ?? + +| {ece-ref}/Elastic-Cloud-Enterprise-overview.html[*Elastic Cloud Enterprise*] +| Self-hosted +| ?? +| ?? + +| *<>* +| Self-hosted +| ?? +| ?? 
+|=== + + +[discrete] +== Cluster or deployment design + +{es} is built to be always available and to scale with your needs. It does this using a distributed architecture. By distributing your cluster, you can keep Elastic online and responsive to requests. Consider the following elements when you design your cluster or deployment. + +[discrete] +=== Where to start + +Many {es} options come with different performance considerations and trade-offs. The best way to determine the +optimal configuration for your use case is through https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[testing with your own data and queries]. When you understand the shape and size of your data, as well as your use case, you can make informed decisions about how to configure your cluster. + +After you understand your data, use case, and organizational priorities, you can review the guidelines in our <> topics to learn how to tune your cluster to meet your needs. + +[discrete] +=== Data retention strategy + +include::{es-ref-dir}/lifecycle-options.asciidoc[] + +You should determine how long you need to retain your data and how you will manage it. + +something about when to use which one? + +[discrete] +=== Node and shard configuration + +When you move to production, you need to introduce multiple nodes and shards to your cluster. Nodes and shards are what make Elasticsearch distributed and scalable. The size and number of these nodes and shards depends on your data, your use case, and your budget. + +The way that you manage your nodes and shards depends on your deployment method: + +* If you're using a *manual on-premise deployment*, then you need to size and manage your nodes and shards manually. + +* If you're using *Elastic Cloud Hosted* or *Elastic Cloud Enterprise*, then you can choose from different deployment types to apply sensible defaults for your use case, or set the size of your data on a per-zone, per-tier basis. These products can also autoscale resources in response to workload changes. +** *Elastic Cloud Hosted resources*: +*** {cloud}/ec-create-deployment.html[Create a hosted deployment] +*** {cloud}/ec-autoscaling.html[Deployment autoscaling] +** *Elastic Cloud Enterprise resources*: +*** {ece-ref}/ece-stack-getting-started.html[Working with deployments] +*** {ece-ref}/ece-autoscaling.html[Deployment autoscaling] + +* If you're using *Elastic Cloud on Kubernetes*, then you can define {eck-ref}/k8s-autoscaling.html[autoscaling policies] and use the {eck-ref}/k8s-stateless-autoscaling.html[Kubernetes horizontal pod autoscaler] to scale different elements in your cluster based on your workload. + +Learn more about <>. + +[discrete] +=== High availability and disaster recovery + +include::{es-ref-dir}/high-availability-overview.asciidoc[] + +// each of these topics needs to be reviewed to mark elements related/unrelated to each deployment type + +[discrete] +=== Tuning your cluster + +{es} offers many options that allow you to configure your cluster to meet your organization's goals, requirements, and restrictions. Review these guidelines to learn how to tune your cluster to meet your needs. These guidelines cover elements from hardware provision to query optimization. + +* <> +* <> +* <> +* <> +// do we need this last topic anymore? Is this the best version we have? It's not referenced anywhere. 
it also isn't updated to use data stream lifecycle + +// each of these topics needs to be reviewed to mark elements related/unrelated to each deployment type + +[discrete] +== Security + +The {stack} is composed of many moving parts. There are the {es} nodes that form the cluster, plus {ls} instances, {kib} instances, {beats} agents, Elastic Agents, and clients all communicating with the cluster. + +In the case of *Elastic Cloud Hosted*, *Elastic Cloud Enterprise*, or *Elastic Cloud Serverless* deployments, you also need to consider the security of the Elastic Cloud installation or organization. Optionally, you can also manage deployment-level user roles from the Cloud UI. + +Security comprises the following concerns: + +* *Preventing unauthorized access* with password protection, role-based access control, and IP filtering. +* *Preserving the integrity of your data* with SSL/TLS encryption. +* *Maintaining an audit trail* so you know who's doing what to your cluster and the data it stores. + +The technologies and methods that you can use to address these concerns are different depending on your deployment method. + +Review the following topics to design your security strategy: + +// these could be rearranged by "type" of security concern, but there are a LOT of topics to list if we split them that way + +[discrete] +=== Cluster and deployment security + +[cols="1,1",options="header"] +|=== +| Deployment method | Documentation + +| {serverless-docs}/intro.html[*Elastic Cloud Serverless*] +| {serverless-docs}/general-manage-organization.html[Manage users and roles]
{cloud}/ec-saml-sso.html[Configure Elastic Cloud SAML single sign-on]
{serverless-docs}/custom-roles.html[Custom roles] +// need to figure out if anything in https://www.elastic.co/guide/en/elasticsearch/reference/current/secure-cluster.html applies +// suspect they have access to anything they can configure through kibana + +| {cloud}/ec-getting-started-trial.html[*Elastic Cloud Hosted*] +| {cloud}/ec-security.html[Securing your deployment]
{cloud}/ec-organizations.html[Managing your organization] + +| *Elasticsearch Add-On for Heroku* +| https://www.elastic.co/guide/en/cloud-heroku/current/ech-security.html[Securing your deployment] + +| {eck-ref}/k8s-overview.html[*Elastic Cloud on Kubernetes*] +| https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-securing-stack.html + +| {ece-ref}/Elastic-Cloud-Enterprise-overview.html[*Elastic Cloud Enterprise*] +| {ece-ref}/ece-securing-ece.html[Secure your clusters] + +| *<>* +| +|=== + +[discrete] +=== Cloud layer security + +These pages provide information about securing your Elastic Cloud Enterprise installation or Elastic Cloud organization, as well as managing access to deployments from the Cloud UI. + +[cols="1,1",options="header"] +|=== +| Deployment method | Documentation + +| {cloud}/ec-getting-started-trial.html[*Elastic Cloud Hosted*] +| {cloud}/ec-organizations.html[Managing your organization] + +| {serverless-docs}/intro.html[*Elastic Cloud Serverless*] +| {serverless-docs}/general-manage-organization.html[Manage users and roles]
{cloud}/ec-saml-sso.html[Configure Elastic Cloud SAML single sign-on] + +| {ece-ref}/Elastic-Cloud-Enterprise-overview.html[*Elastic Cloud Enterprise*] +| {ece-ref}/ece-securing-ece.html[Securing your installation] + +|=== + +[discrete] +=== Security for additional components + +<> + + +[discrete] +== Monitoring + +As with any enterprise system, you need tools to secure, manage, and monitor your {es} clusters. Security, monitoring, and administrative features that are integrated into {es} enable you to use {kibana-ref}/introduction.html[Kibana] as a control center for managing a cluster. + +<>. + +* https://www.elastic.co/guide/en/cloud-enterprise/current/ece-monitoring-ece.html[Monitor Elastic Cloud Enterprise] +* https://www.elastic.co/guide/en/cloud-enterprise/current/ece-enable-logging-and-monitoring.html[Enable logging and monitoring in Elastic Cloud Enterprise] +* https://www.elastic.co/guide/en/cloud/current/ec-monitoring.html[Monitor Elastic Cloud Hosted] +* https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-stack-monitoring.html[Monitor Elastic Cloud on Kubernetes] \ No newline at end of file diff --git a/docs/reference/security/index.asciidoc b/docs/reference/security/index.asciidoc index c3f3a9295c3dc..0178c14168fbc 100644 --- a/docs/reference/security/index.asciidoc +++ b/docs/reference/security/index.asciidoc @@ -4,9 +4,9 @@ [partintro] -- -The {stack} is comprised of many moving parts. There are the {es} +The {stack} is composed of many moving parts. There are the {es} nodes that form the cluster, plus {ls} instances, {kib} instances, {beats} -agents, and clients all communicating with the cluster. To keep your cluster +agents, Elastic Agents, and clients all communicating with the cluster. To keep your cluster safe, adhere to the <>. The first principle is to run {es} with security enabled. Configuring security diff --git a/docs/reference/setup.asciidoc b/docs/reference/setup.asciidoc index a284e563917c3..80828fdbfbb02 100644 --- a/docs/reference/setup.asciidoc +++ b/docs/reference/setup.asciidoc @@ -83,8 +83,6 @@ include::modules/indices/search-settings.asciidoc[] include::settings/security-settings.asciidoc[] -include::modules/shard-ops.asciidoc[] - include::modules/indices/request_cache.asciidoc[] include::settings/snapshot-settings.asciidoc[]