diff --git a/_install-and-configure/upgrade-opensearch/index.md b/_install-and-configure/upgrade-opensearch/index.md new file mode 100644 index 0000000000..f41aaf12dd --- /dev/null +++ b/_install-and-configure/upgrade-opensearch/index.md @@ -0,0 +1,229 @@ +--- +layout: default +title: Upgrading OpenSearch +nav_order: 4 +has_children: true +redirect_from: + - /upgrade-opensearch/index/ +--- + +# Upgrading OpenSearch + +The OpenSearch Project releases regular updates that include new features, enhancements, and bug fixes. OpenSearch uses [Semantic Versioning](https://semver.org/), which means that breaking changes are only introduced between major version releases. To learn about upcoming features and fixes, review the [OpenSearch Project Roadmap](https://github.com/orgs/opensearch-project/projects/1) on GitHub. To view a list of previous releases or to learn more about how OpenSearch uses versioning, see [Release Schedule and Maintenance Policy]({{site.url}}/releases.html). + +We recognize that users are excited about upgrading OpenSearch in order to enjoy the latest features, and we will continue to expand on these upgrade and migration documents to cover additional topics, such as upgrading OpenSearch Dashboards and preserving custom configurations, such as for plugins. To see what's coming next or to make a request for future content, leave a comment on the [upgrade and migration documentation meta issue](https://github.com/opensearch-project/documentation-website/issues/2830) in the [OpenSearch Project](https://github.com/opensearch-project) on GitHub. + +If you would like a specific process to be added or would like to contribute, [create an issue](https://github.com/opensearch-project/documentation-website/issues) on GitHub. See the [Contributor Guidelines](https://github.com/opensearch-project/documentation-website/blob/main/CONTRIBUTING.md) to learn how you can help. +{: .tip} + +## Workflow considerations + +Take time to plan the process before making any changes to your cluster. For example, consider the following questions: + +- How long will the upgrade process take? +- If your cluster is being used in production, how impactful is downtime? +- Do you have infrastructure in place to stand up the new cluster in a testing or development environment before you move it into production, or do you need to upgrade the production hosts directly? + +The answers to questions like these will help you determine which upgrade path will work best in your environment. + +At a minimum, you should: + +- [Review breaking changes](#review-breaking-changes). +- [Review the OpenSearch tools compatibility matrices](#review-the-opensearch-tools-compatibility-matrices). +- [Check plugin compatibility](#review-plugin-compatibility). +- [Back up configuration files](#back-up-configuration-files). +- [Take a snapshot](#take-a-snapshot). + +Stop any nonessential indexing before you begin the upgrade procedure to eliminate unnecessary resource demands on the cluster while you perform the upgrade. +{: .tip} + +### Review breaking changes + +It's important to determine how the new version of OpenSearch will fit into your environment. Review [Breaking changes]({{site.url}}{{site.baseurl}}/breaking-changes/) before beginning any upgrade procedures to determine whether you will need to make adjustments to your workflow. For example, upstream or downstream components might need to be modified to be compatible with an API change. 
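For example, it can help to record the version and plugins that each node is currently running before you map out the changes that affect your workflow. The following commands are a minimal sketch that assumes a cluster reachable at `localhost:9200` without TLS; adjust the endpoint and any authentication options for your environment:

```bash
# Print the version number that the cluster currently reports.
curl -s "http://localhost:9200/" | jq -r '.version.number'

# List the plugins installed on each node so they can be checked against the target release.
curl -s "http://localhost:9200/_cat/plugins?v"
```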
+ +### Review the OpenSearch tools compatibility matrices + +If your OpenSearch cluster interacts with other services in your environment, like Logstash or Beats, then you should check the [OpenSearch tools compatibility matrices]({{site.url}}{{site.baseurl}}/tools/index/#compatibility-matrices) to determine whether other components will need to be upgraded. + +### Review plugin compatibility + +Review the plugins you use to determine compatibility with the target version of OpenSearch. Official OpenSearch Project plugins can be found in the [OpenSearch Project](https://github.com/opensearch-project) repository on GitHub. If you use any third-party plugins, then you should check the documentation for those plugins to determine whether they are compatible. + +### Back up configuration files + +Mitigate the risk of data loss by backing up any important files before you start an upgrade. Generally speaking, these files will be located in either of two directories: + +- `opensearch/config` +- `opensearch-dashboards/config` + +Some examples include `opensearch.yml`, `opensearch_dashboards.yml`, plugin configuration files, and TLS certificates. Once you identify which files you want to back up, copy them to remote storage for safety. + +### Take a snapshot + +We recommend that you back up your cluster state and indexes using [snapshots]({{site.url}}{{site.baseurl}}/opensearch/snapshots/index/). If you use security features, make sure to read [A word of caution]({{site.url}}{{site.baseurl}}/security-plugin/configuration/security-admin/#a-word-of-caution) for information about backing up and restoring your security settings. + +## Upgrade methods + +Choose an appropriate method for upgrading your cluster to a new version of OpenSearch based on your requirements: + +- A [rolling upgrade](#rolling-upgrade) upgrades nodes one at a time without stopping the cluster. +- A [cluster restart upgrade](#cluster-restart-upgrade) upgrades services while the cluster is stopped. + +Upgrades spanning more than a single major version of OpenSearch will require additional effort due to the need for reindexing. For more information, refer to the [Reindex]({{site.url}}{{site.baseurl}}/api-reference/document-apis/reindex/) API. See the [Lucene version reference](#lucene-version-reference) table included later in this guide for help planning your data migration. + +### Rolling upgrade + +A rolling upgrade is a great option if you want to keep your cluster operational throughout the process. Data may continue to be ingested, analyzed, and queried as nodes are individually stopped, upgraded, and restarted. A variation of the rolling upgrade referred to as "node replacement" follows exactly the same process except that hosts and containers are not reused for the new node. You might perform node replacement if you are upgrading the underlying host(s) as well. + +OpenSearch nodes cannot join a cluster if the cluster manager is running a newer version of OpenSearch than the node requesting membership. To avoid this issue, upgrade the cluster-manager-eligible nodes last. + +See [Rolling Upgrade]({{site.url}}{{site.baseurl}}/install-and-configure/upgrade-opensearch/rolling-upgrade/) for more information about the process. + +### Cluster restart upgrade + +OpenSearch administrators might choose to perform a cluster restart upgrade for several reasons, such as if the administrator doesn't want to perform maintenance on a running cluster or if the cluster is being migrated to a different environment. 
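In practice, the first step of this method is taking every node offline. On hosts that run OpenSearch from the RPM or Debian distributions, that might look like the following sketch; the systemd unit names are assumptions based on those packages, so adjust them for your installation:

```bash
# Stop OpenSearch Dashboards and OpenSearch on this host before installing the
# new version. Repeat on every node in the cluster.
sudo systemctl stop opensearch-dashboards.service
sudo systemctl stop opensearch.service
```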
+ +Unlike a rolling upgrade, where only one node is offline at a time, a cluster restart upgrade requires you to stop OpenSearch and OpenSearch Dashboards on all nodes in the cluster before proceeding. After the nodes are stopped, a new version of OpenSearch is installed. Then OpenSearch is started and the cluster bootstraps to the new version. + +## Compatibility + +OpenSearch nodes are compatible with other OpenSearch nodes running any other *minor* version within the same *major* version release. For example, 1.1.0 is compatible with 1.3.7 because they are part of the same *major* version (1.x). Additionally, OpenSearch nodes and indexes are backward compatible with the previous major version. That means, for example, that an index created by an OpenSearch node running any 1.x version can be restored from a snapshot to an OpenSearch cluster running any 2.x version. + +OpenSearch 1.x nodes are compatible with nodes running Elasticsearch 7.x, but the longevity of a mixed-version environment should not extend beyond cluster upgrade activities. +{: .tip} + +Index compatibility is determined by the version of [Apache Lucene](https://lucene.apache.org/) that created the index. If an index was created by an OpenSearch cluster running version 1.0.0, then the index can be used by any other OpenSearch cluster running up to the latest 1.x or 2.x release. See the [Index compatibility reference](#index-compatibility-reference) table for Lucene versions running in OpenSearch 1.0.0 and later and [Elasticsearch](https://www.elastic.co/) 6.8 and later. + +If your upgrade path spans more than a single major version and you want to retain any existing indexes, then you can use the [Reindex]({{site.url}}{{site.baseurl}}/api-reference/document-apis/reindex/) API to make your indexes compatible with the target version of OpenSearch before upgrading. For example, if your cluster is currently running Elasticsearch 6.8 and you want to upgrade to OpenSearch 2.x, then you must first upgrade to OpenSearch 1.x, recreate your indexes using the [Reindex]({{site.url}}{{site.baseurl}}/api-reference/document-apis/reindex/) API, and finally upgrade to 2.x. One alternative to reindexing is to reingest data from the origin, such as by replaying a data stream or ingesting data from a database. + +### Index compatibility reference + +If you plan to retain old indexes after the OpenSearch version upgrade, then you might need to reindex or reingest the data. Refer to the following table for Lucene versions across recent OpenSearch and Elasticsearch releases. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Lucene Version | OpenSearch Version | Elasticsearch Version |
| :--- | :--- | :--- |
| 9.4.2 | 2.5.0<br>2.4.1 | 8.6 |
| 9.4.1 | 2.4.0 | — |
| 9.4.0 | — | 8.5 |
| 9.3.0 | 2.3.0<br>2.2.x | 8.4 |
| 9.2.0 | 2.1.0 | 8.3 |
| 9.1.0 | 2.0.x | 8.2 |
| 9.0.0 | — | 8.1<br>8.0 |
| 8.11.1 | — | 7.17 |
| 8.10.1 | 1.3.x<br>1.2.x | 7.16 |
| 8.9.0 | 1.1.0 | 7.15<br>7.14 |
| 8.8.2 | 1.0.0 | 7.13 |
| 8.8.0 | — | 7.12 |
| 8.7.0 | — | 7.11<br>7.10 |
| 8.6.2 | — | 7.9 |
| 8.5.1 | — | 7.8<br>7.7 |
| 8.4.0 | — | 7.6 |
| 8.3.0 | — | 7.5 |
| 8.2.0 | — | 7.4 |
| 8.1.0 | — | 7.3 |
| 8.0.0 | — | 7.2<br>7.1 |
| 7.7.3 | — | 6.8 |

A dash (—) indicates that there is no product version containing the specified version of Apache Lucene.
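If reindexing is required, for example when carrying 6.x-era indexes through OpenSearch 1.x, each index can be recreated and its documents copied with the [Reindex]({{site.url}}{{site.baseurl}}/api-reference/document-apis/reindex/) API before the next upgrade step. The following request is a minimal sketch; the index names are placeholders, and the call assumes a cluster reachable at `localhost:9200` without TLS:

```bash
# Copy documents from the old index into a new index so the data is rewritten
# with the Lucene version used by the running cluster. Create the destination
# index first if it needs specific mappings or settings.
curl -X POST "http://localhost:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
{
  "source": { "index": "old-index" },
  "dest": { "index": "new-index" }
}'
```

After the copy completes and is verified, the old index can be deleted or pointed to by an alias, depending on how your applications reference it.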

\ No newline at end of file diff --git a/_install-and-configure/upgrade-opensearch/rolling-upgrade.md b/_install-and-configure/upgrade-opensearch/rolling-upgrade.md new file mode 100644 index 0000000000..76dd248c51 --- /dev/null +++ b/_install-and-configure/upgrade-opensearch/rolling-upgrade.md @@ -0,0 +1,196 @@ +--- +layout: default +title: Rolling Upgrade +parent: Upgrading OpenSearch +nav_order: 10 +--- + +# Rolling Upgrade + +Rolling upgrades, sometimes referred to as "node replacement upgrades," can be performed on running clusters with virtually no downtime. Nodes are individually stopped and upgraded in place. Alternatively, nodes can be stopped and replaced, one at a time, by hosts running the new version. During this process you can continue to index and query data in your cluster. + +The example outputs and API responses included in this document were generated in a development environment using Docker containers. Validation was performed by upgrading an Elasticsearch 7.10.2 cluster to OpenSearch 1.3.7; however, this process can be applied to any **N→N+1** version upgrade of OpenSearch on any platform. Certain commands, such as listing running containers in Docker, are included as an aid to the reader, but the specific commands used on your host(s) will be different depending on your distribution and host operating system. + +This guide assumes that you are comfortable working from the Linux command line interface (CLI). You should understand how to input commands, navigate between directories, and edit text files. For help with [Docker](https://www.docker.com/) or [Docker Compose](https://github.com/docker/compose), refer to the official documentation on their websites. +{:.note} + +## Preparing to upgrade + +Review [Upgrading OpenSearch]({{site.url}}{{site.baseurl}}/upgrade-opensearch/index/) for recommendations about backing up your configuration files and creating a snapshot of the cluster state and indexes before you make any changes to your OpenSearch cluster. + +**Important:** OpenSearch nodes cannot be downgraded. If you need to revert the upgrade, then you will need to perform a fresh installation of OpenSearch and restore the cluster from a snapshot. Take a snapshot and store it in a remote repository before beginning the upgrade procedure. +{: .important} + +## Upgrade steps + +1. Verify the health of your OpenSearch cluster before you begin. You should resolve any index or shard allocation issues prior to upgrading to ensure that your data is preserved. A status of **green** indicates that all primary and replica shards are allocated. See [Cluster health]({{site.url}}{{site.baseurl}}/api-reference/cluster-api/cluster-health/) for more information. The following command queries the `_cluster/health` API endpoint: + ```bash + curl "http://localhost:9201/_cluster/health?pretty" + ``` + The response should look similar to the following example: + ```json + { + "cluster_name":"opensearch-dev-cluster", + "status":"green", + "timed_out":false, + "number_of_nodes":4, + "number_of_data_nodes":4, + "active_primary_shards":1, + "active_shards":4, + "relocating_shards":0, + "initializing_shards":0, + "unassigned_shards":0, + "delayed_unassigned_shards":0, + "number_of_pending_tasks":0, + "number_of_in_flight_fetch":0, + "task_max_waiting_in_queue_millis":0, + "active_shards_percent_as_number":100.0 + } + ``` +1. Disable shard replication to prevent shard replicas from being created while nodes are being taken offline. 
This stops the movement of Lucene index segments on nodes in your cluster. You can disable shard replication by querying the `_cluster/settings` API endpoint: + ```bash + curl -X PUT "http://localhost:9201/_cluster/settings?pretty" -H 'Content-type: application/json' -d'{"persistent":{"cluster.routing.allocation.enable":"primaries"}}' + ``` + The response should look similar to the following example: + ```json + { + "acknowledged" : true, + "persistent" : { + "cluster" : { + "routing" : { + "allocation" : { + "enable" : "primaries" + } + } + } + }, + "transient" : { } + } + ``` +1. Perform a flush operation on the cluster to commit transaction log entries to the Lucene index: + ```bash + curl -X POST "http://localhost:9201/_flush?pretty" + ``` + The response should look similar to the following example: + ```json + { + "_shards" : { + "total" : 4, + "successful" : 4, + "failed" : 0 + } + } + ``` +1. Review your cluster and identify the first node to upgrade. Eligible cluster manager nodes should be upgraded last because OpenSearch nodes can join a cluster with manager nodes running an older version, but they cannot join a cluster with all manager nodes running a newer version. +1. Query the `_cat/nodes` endpoint to identify which node was promoted to cluster manager. The following command queries `_cat/nodes` and requests only the name, version, node.role, and master headers. Note that OpenSearch 1.x versions use the term "master," which has been deprecated and replaced by "cluster_manager" in OpenSearch 2.x and later. + ```bash + curl -s "http://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" | column -t + ``` + The response should look similar to the following example: + ```bash + name version node.role master + os-node-01 7.10.2 dimr - + os-node-04 7.10.2 dimr - + os-node-03 7.10.2 dimr - + os-node-02 7.10.2 dimr * + ``` +1. Stop the node you are upgrading. Do not delete the volume associated with the container when you delete the container. The new OpenSearch container will use the existing volume. **Deleting the volume will result in data loss**. +1. Confirm that the associated node has been dismissed from the cluster by querying the `_cat/nodes` API endpoint: + ```bash + curl -s "http://localhost:9202/_cat/nodes?v&h=name,version,node.role,master" | column -t + ``` + The response should look similar to the following example: + ```bash + name version node.role master + os-node-02 7.10.2 dimr * + os-node-04 7.10.2 dimr - + os-node-03 7.10.2 dimr - + ``` + `os-node-01` is no longer listed because the container has been stopped and deleted. +1. Deploy a new container running the desired version of OpenSearch and mapped to the same volume as the container you deleted. +1. Query the `_cat/nodes` endpoint after OpenSearch is running on the new node to confirm that it has joined the cluster: + ```bash + curl -s "http://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" | column -t + ``` + The response should look similar to the following example: + ```bash + name version node.role master + os-node-02 7.10.2 dimr * + os-node-04 7.10.2 dimr - + os-node-01 7.10.2 dimr - + os-node-03 7.10.2 dimr - + ``` + In the example output, the new OpenSearch node reports a running version of `7.10.2` to the cluster. This is the result of `compatibility.override_main_response_version`, which is used when connecting to a cluster with legacy clients that check for a version. 
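    If your environment relies on this behavior, the setting was presumably enabled when the new node was deployed; it can also be applied dynamically through the cluster settings API. The following request is a minimal sketch for illustration and is not part of the validated walkthrough:
    ```bash
    # Ask the cluster to keep reporting a legacy-compatible version string to clients
    # that validate the version in the root API response. The setting can also be
    # placed in opensearch.yml as compatibility.override_main_response_version: true.
    curl -X PUT "http://localhost:9201/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{"persistent":{"compatibility":{"override_main_response_version":true}}}'
    ```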
    You can manually confirm the version of the node by calling the `/_nodes` API endpoint, as in the following command. Replace `<nodeName>` with the name of your node. See [Nodes API]({{site.url}}{{site.baseurl}}/api-reference/nodes-apis/index/) to learn more.
    ```bash
    curl -s -X GET 'localhost:9201/_nodes/<nodeName>?pretty=true' | jq -r '.nodes | .[] | "\(.name) v\(.version)"'
    ```
    The response should look similar to the following example:
    ```bash
    $ curl -s -X GET 'localhost:9201/_nodes/os-node-01?pretty=true' | jq -r '.nodes | .[] | "\(.name) v\(.version)"'
    os-node-01 v1.3.7
    ```
1. Repeat steps 5 through 9 for each node in your cluster. Remember to upgrade an eligible cluster manager node last. After replacing the last node, query the `_cat/nodes` endpoint to confirm that all nodes have joined the cluster. The cluster is now bootstrapped to the new version of OpenSearch. You can verify the cluster version by querying the `_cat/nodes` API endpoint:
    ```bash
    curl -s "http://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" | column -t
    ```
    The response should look similar to the following example:
    ```bash
    name        version  node.role  master
    os-node-04  1.3.7    dimr       -
    os-node-02  1.3.7    dimr       *
    os-node-01  1.3.7    dimr       -
    os-node-03  1.3.7    dimr       -
    ```
1. Reenable shard replication:
    ```bash
    curl -X PUT "http://localhost:9201/_cluster/settings?pretty" -H 'Content-type: application/json' -d'{"persistent":{"cluster.routing.allocation.enable":"all"}}'
    ```
    The response should look similar to the following example:
    ```json
    {
      "acknowledged" : true,
      "persistent" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "all"
            }
          }
        }
      },
      "transient" : { }
    }
    ```
1. Confirm that the cluster is healthy:
    ```bash
    curl "http://localhost:9201/_cluster/health?pretty"
    ```
    The response should look similar to the following example:
    ```json
    {
      "cluster_name" : "opensearch-dev-cluster",
      "status" : "green",
      "timed_out" : false,
      "number_of_nodes" : 4,
      "number_of_data_nodes" : 4,
      "discovered_master" : true,
      "active_primary_shards" : 1,
      "active_shards" : 4,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 100.0
    }
    ```
1. The upgrade is now complete, and you can begin enjoying the latest features and fixes!

### Related articles

- [OpenSearch configuration]({{site.url}}{{site.baseurl}}/install-and-configure/configuration/)
- [Performance analyzer]({{site.url}}{{site.baseurl}}/monitoring-plugins/pa/index/)
- [Install and configure OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/install-and-configure/install-dashboards/index/)
- [About Security in OpenSearch]({{site.url}}{{site.baseurl}}/security/index/)