From 15fe7e71a1c598eb27c8daac97f3ccc436ccb2a8 Mon Sep 17 00:00:00 2001 From: Bharathwaj G <58062316+bharath-techie@users.noreply.github.com> Date: Mon, 27 Jun 2022 14:41:39 +0530 Subject: [PATCH] Merge to pit feature branch (#3708) * Bump reactor-netty-core from 1.0.16 to 1.0.19 in /plugins/repository-azure (#3360) * Bump reactor-netty-core in /plugins/repository-azure Bumps [reactor-netty-core](https://github.com/reactor/reactor-netty) from 1.0.16 to 1.0.19. - [Release notes](https://github.com/reactor/reactor-netty/releases) - [Commits](https://github.com/reactor/reactor-netty/compare/v1.0.16...v1.0.19) --- updated-dependencies: - dependency-name: io.projectreactor.netty:reactor-netty-core dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] * [Type removal] _type removal from mocked responses of scroll hit tests (#3377) Signed-off-by: Suraj Singh * [Type removal] Remove _type deprecation from script and conditional processor (#3239) * [Type removal] Remove _type deprecation from script and conditional processor Signed-off-by: Suraj Singh * Spotless check apply Signed-off-by: Suraj Singh * [Type removal] Remove _type from _bulk yaml test, scripts, unused constants (#3372) * [Type removal] Remove redundant _type deprecation checks in bulk request Signed-off-by: Suraj Singh * [Type removal] bulk yaml tests validating deprecation on _type and removal from scripts Signed-off-by: Suraj Singh * Fix Lucene-snapshots repo for jdk 17. (#3396) Signed-off-by: Marc Handalian * Replace internal usages of 'master' term in 'server/src/internalClusterTest' directory (#2521) Signed-off-by: Tianli Feng * [REMOVE] Cleanup deprecated thread pool types (FIXED_AUTO_QUEUE_SIZE) (#3369) Signed-off-by: Andriy Redko * [Type removal] _type removal from tests of yaml tests (#3406) * [Type removal] _type removal from tests of yaml tests Signed-off-by: Suraj Singh * Fix spotless failures Signed-off-by: Suraj Singh * Fix assertion failures Signed-off-by: Suraj Singh * Fix assertion failures in DoSectionTests Signed-off-by: Suraj Singh * Add release notes for version 2.0.0 (#3410) Signed-off-by: Rabi Panda * [Upgrade] Lucene-9.2.0-snapshot-ba8c3a8 (#3416) Upgrades to latest snapshot of lucene 9.2.0 in preparation for GA release. Signed-off-by: Nicholas Walter Knize * Fix release notes for 2.0.0-rc1 version (#3418) This change removes some old commits from the 2.0.0-rc1 release notes. These commits were already released as part of 1.x releases. Add back some missing type removal commits to the 2.0.0 release notes Signed-off-by: Rabi Panda * Bump version 2.1 to Lucene 9.2 after upgrade (#3424) Bumps Version.V_2_1_0 lucene version to 9.2 after backporting upgrage. Signed-off-by: Nicholas Walter Knize * Bump com.gradle.enterprise from 3.10 to 3.10.1 (#3425) Bumps com.gradle.enterprise from 3.10 to 3.10.1. --- updated-dependencies: - dependency-name: com.gradle.enterprise dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Bump reactor-core from 3.4.17 to 3.4.18 in /plugins/repository-azure (#3427) Bumps [reactor-core](https://github.com/reactor/reactor-core) from 3.4.17 to 3.4.18. 
- [Release notes](https://github.com/reactor/reactor-core/releases) - [Commits](https://github.com/reactor/reactor-core/compare/v3.4.17...v3.4.18) --- updated-dependencies: - dependency-name: io.projectreactor:reactor-core dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] * Bump gax-httpjson from 0.101.0 to 0.103.1 in /plugins/repository-gcs (#3426) Bumps [gax-httpjson](https://github.com/googleapis/gax-java) from 0.101.0 to 0.103.1. - [Release notes](https://github.com/googleapis/gax-java/releases) - [Changelog](https://github.com/googleapis/gax-java/blob/main/CHANGELOG.md) - [Commits](https://github.com/googleapis/gax-java/commits) --- updated-dependencies: - dependency-name: com.google.api:gax-httpjson dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] * [segment replication]Introducing common Replication interfaces for segment replication and recovery code paths (#3234) * RecoveryState inherits from ReplicationState + RecoveryTarget inherits from ReplicationTarget Signed-off-by: Poojita Raj * Refactoring: mixedClusterVersion error fix + move Stage to ReplicationState Signed-off-by: Poojita Raj * pull ReplicationListener into a top level class + add javadocs + address review comments Signed-off-by: Poojita Raj * fix javadoc Signed-off-by: Poojita Raj * review changes Signed-off-by: Poojita Raj * Refactoring the hierarchy relationship between repl and recovery Signed-off-by: Poojita Raj * style fix Signed-off-by: Poojita Raj * move package common under replication Signed-off-by: Poojita Raj * rename to replication Signed-off-by: Poojita Raj * rename and doc changes Signed-off-by: Poojita Raj * [Type removal] Remove type from BulkRequestParser (#3423) * [Type removal] Remove type handling in bulk request parser Signed-off-by: Suraj Singh * [Type removal] Remove testTypesStillParsedForBulkMonitoring as it is no longer present in codebase Signed-off-by: Suraj Singh * Adding CheckpointRefreshListener to trigger when Segment replication is turned on and Primary shard refreshes (#3108) * Intial PR adding classes and tests related to checkpoint publishing Signed-off-by: Rishikesh1159 * Putting a Draft PR with all changes in classes. Testing is still not included in this commit. Signed-off-by: Rishikesh1159 * Wiring up index shard to new engine, spotless apply and removing unnecessary tests and logs Signed-off-by: Rishikesh1159 * Adding Unit test for checkpointRefreshListener Signed-off-by: Rishikesh1159 * Applying spotless check Signed-off-by: Rishikesh1159 * Fixing import statements * Signed-off-by: Rishikesh1159 * removing unused constructor in index shard Signed-off-by: Rishikesh1159 * Addressing comments from last commit Signed-off-by: Rishikesh1159 * Adding package-info.java files for two new packages Signed-off-by: Rishikesh1159 * Adding test for null checkpoint publisher and addreesing PR comments Signed-off-by: Rishikesh1159 * Add docs for indexshardtests and remove shard.refresh Signed-off-by: Rishikesh1159 * Add a new Engine implementation for replicas with segment replication enabled. (#3240) * Change fastForwardProcessedSeqNo method in LocalCheckpointTracker to persisted checkpoint. 
This change inverts fastForwardProcessedSeqNo to fastForwardPersistedSeqNo for use in Segment Replication. This is so that a Segrep Engine can match the logic of InternalEngine where the seqNo is incremented with each operation, but only persisted in the tracker on a flush. With Segment Replication we bump the processed number with each operation received index/delete/noOp, and invoke this method when we receive a new set of segments to bump the persisted seqNo. Signed-off-by: Marc Handalian * Extract Translog specific engine methods into an abstract class. This change extracts translog specific methods to an abstract engine class so that other engine implementations can reuse translog logic. Signed-off-by: Marc Handalian * Add a separate Engine implementation for replicas with segment replication enabled. This change adds a new engine intended to be used on replicas with segment replication enabled. This engine does not wire up an IndexWriter, but still writes all operations to a translog. The engine uses a new ReaderManager that refreshes from an externally provided SegmentInfos. Signed-off-by: Marc Handalian * Fix spotless checks. Signed-off-by: Marc Handalian * Fix :server:compileInternalClusterTestJava compilation. Signed-off-by: Marc Handalian * Fix failing test naming convention check. Signed-off-by: Marc Handalian * PR feedback. - Removed isReadOnlyReplica from overloaded constructor and added feature flag checks. - Updated log msg in NRTReplicationReaderManager - cleaned up store ref counting in NRTReplicationEngine. Signed-off-by: Marc Handalian * Fix spotless check. Signed-off-by: Marc Handalian * Remove TranslogAwareEngine and build translog in NRTReplicationEngine. Signed-off-by: Marc Handalian * Fix formatting Signed-off-by: Marc Handalian * Add missing translog methods to NRTEngine. Signed-off-by: Marc Handalian * Remove persistent seqNo check from fastForwardProcessedSeqNo. Signed-off-by: Marc Handalian * PR feedback. Signed-off-by: Marc Handalian * Add test specific to translog trimming. Signed-off-by: Marc Handalian * Javadoc check. Signed-off-by: Marc Handalian * Add failEngine calls to translog methods in NRTReplicationEngine. Roll xlog generation on replica when a new commit point is received. Signed-off-by: Marc Handalian * Rename master to cluster_manager in the XContent Parser of ClusterHealthResponse (#3432) Signed-off-by: Tianli Feng * Bump hadoop-minicluster in /test/fixtures/hdfs-fixture (#3359) Bumps hadoop-minicluster from 3.3.2 to 3.3.3. --- updated-dependencies: - dependency-name: org.apache.hadoop:hadoop-minicluster dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Bump avro from 1.10.2 to 1.11.0 in /plugins/repository-hdfs (#3358) * Bump avro from 1.10.2 to 1.11.0 in /plugins/repository-hdfs Bumps avro from 1.10.2 to 1.11.0. --- updated-dependencies: - dependency-name: org.apache.avro:avro dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] * Fix testSetAdditionalRolesCanAddDeprecatedMasterRole() by removing the initial assertion (#3441) Signed-off-by: Tianli Feng * Replace internal usages of 'master' term in 'server/src/test' directory (#2520) * Replace the non-inclusive terminology "master" with "cluster manager" in code comments, internal variable/method/class names, in `server/src/test` directory. * Backwards compatibility is not impacted. * Add a new unit test `testDeprecatedMasterNodeFilter()` to validate using `master:true` or `master:false` can filter the node in [Cluster Stats](https://opensearch.org/docs/latest/opensearch/rest-api/cluster-stats/) API, after the `master` role is deprecated in PR https://github.com/opensearch-project/OpenSearch/pull/2424 Signed-off-by: Tianli Feng * Removing unused method from TransportSearchAction (#3437) * Removing unused method from TransportSearchAction Signed-off-by: Ankit Jain * Set term vector flags to false for ._index_prefix field (#1901). (#3119) * Set term vector flags to false for ._index_prefix field (#1901). Signed-off-by: Vesa Pehkonen * Replaced the FieldType copy ctor with ctor for the prefix field and replaced setting the field type parameters with setIndexOptions(). (#1901) Signed-off-by: Vesa Pehkonen * Added tests for term vectors. (#1901) Signed-off-by: Vesa Pehkonen * Fixed code formatting error. Signed-off-by: Vesa Pehkonen Co-authored-by: sdp * [BUG] Fixing org.opensearch.monitor.os.OsProbeTests > testLogWarnCpuMessageOnlyOnes when cgroups are available but cgroup stats is not (#3448) Signed-off-by: Andriy Redko * [Segment Replication] Add SegmentReplicationTargetService to orchestrate replication events. (#3439) * Add SegmentReplicationTargetService to orchestrate replication events. This change introduces boilerplate classes for Segment Replication and a target service to orchestrate replication events. It also includes two refactors of peer recovery components for reuse. 1. Rename RecoveryFileChunkRequest to FileChunkRequest and extract code to handle throttling into ReplicationTarget. 2. Extracts a component to execute retryable requests over the transport layer. Signed-off-by: Marc Handalian * Code cleanup. Signed-off-by: Marc Handalian * Make SegmentReplicationTargetService component final so that it can not be extended by plugins. Signed-off-by: Marc Handalian * Bump azure-core-http-netty from 1.11.9 to 1.12.0 in /plugins/repository-azure (#3474) Bumps [azure-core-http-netty](https://github.com/Azure/azure-sdk-for-java) from 1.11.9 to 1.12.0. - [Release notes](https://github.com/Azure/azure-sdk-for-java/releases) - [Commits](https://github.com/Azure/azure-sdk-for-java/compare/azure-core-http-netty_1.11.9...azure-core_1.12.0) --- updated-dependencies: - dependency-name: com.azure:azure-core-http-netty dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] * Update to Apache Lucene 9.2 (#3477) Signed-off-by: Andriy Redko * Bump protobuf-java from 3.20.1 to 3.21.1 in /plugins/repository-hdfs (#3472) Signed-off-by: dependabot[bot] * [Upgrade] Lucene-9.3.0-snapshot-823df23 (#3478) Upgrades to latest snapshot of lucene 9.3.0. 
Signed-off-by: Nicholas Walter Knize * Filter out invalid URI and HTTP method in the error message of no handler found for a REST request (#3459) Filter out the invalid URI and HTTP method from the error message, which is shown when there is no handler found for a REST request sent by a user, so that HTML special characters <>&"' will not be shown in the error message. The error message is returned with the MIME type `application/json`, which can't contain active (script) content, so it's not a vulnerability. Besides, no browser is going to render it as HTML when the MIME type is `application/json`. However, common security scanners will raise a false-positive alarm for having HTML tags in the response without the HTML special characters escaped, so the solution only aims to satisfy the code security scanners. Signed-off-by: Tianli Feng * Support use of IRSA for repository-s3 plugin credentials (#3475) * Support use of IRSA for repository-s3 plugin credentials Signed-off-by: Andriy Redko * Address code review comments Signed-off-by: Andriy Redko * Address code review comments Signed-off-by: Andriy Redko * Bump google-auth-library-oauth2-http from 0.20.0 to 1.7.0 in /plugins/repository-gcs (#3473) * Bump google-auth-library-oauth2-http in /plugins/repository-gcs Bumps google-auth-library-oauth2-http from 0.20.0 to 1.7.0. --- updated-dependencies: - dependency-name: com.google.auth:google-auth-library-oauth2-http dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Use variable to define the version of dependency google-auth-library-java Signed-off-by: Tianli Feng Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Co-authored-by: Tianli Feng * [Segment Replication] Added source-side classes for orchestrating replication events (#3470) This change expands on the existing SegmentReplicationSource interface and its corresponding Factory class by introducing an implementation where the replication source is a primary shard (PrimaryShardReplicationSource). These code paths execute on the target. The primary shard implementation creates the requests to be sent to the source/primary shard. Correspondingly, this change also defines two request classes for the GET_CHECKPOINT_INFO and GET_SEGMENT_FILES requests as well as an abstract superclass. A CopyState class has been introduced that captures point-in-time, file-level details from an IndexShard. This implementation mirrors Lucene's NRT CopyState implementation. Finally, a service class has been introduced for segment replication that runs on the source side (SegmentReplicationSourceService) which handles these two types of incoming requests. This includes private handler classes that house the logic to respond to these requests, with some functionality stubbed for now. The service class also uses a simple map to cache CopyState objects that would be needed by replication targets. Unit tests have been added/updated for all new functionality. Signed-off-by: Kartik Ganesh * [Dependency upgrade] google-oauth-client to 1.33.3 (#3500) Signed-off-by: Suraj Singh * move bash flag to set statement (#3494) Passing bash with flags as the first argument of /usr/bin/env requires env's own flag to interpret it correctly. Rather than use `env -S` to split the argument, have the script run `set -e` to enable the same behavior explicitly in the preinst and postinst scripts. Also set `-o pipefail` for consistency.
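For the "no handler found" change above (#3459), here is a minimal, self-contained sketch of the idea; it is not the actual OpenSearch implementation, and the class and method names are hypothetical. The point is that the request's method and URI are only echoed back into the error message when they look legitimate, so HTML special characters can never reach the response body:

```java
import java.util.Set;
import java.util.regex.Pattern;

// Hypothetical helper, for illustration only: echo the method and URI back only
// when they look like a well-formed REST request, so characters such as <>&"'
// never appear in the "no handler found" error message.
public final class NoHandlerMessage {
    private static final Set<String> KNOWN_METHODS =
        Set.of("GET", "POST", "PUT", "DELETE", "HEAD", "OPTIONS", "PATCH");
    // Conservative allow-list of URI characters; anything else is treated as invalid.
    private static final Pattern SAFE_URI = Pattern.compile("[A-Za-z0-9/_\\-.,*?=&%]*");

    public static String build(String method, String rawPath) {
        boolean validMethod = KNOWN_METHODS.contains(method);
        boolean validUri = SAFE_URI.matcher(rawPath).matches();
        if (validMethod && validUri) {
            return "no handler found for uri [" + rawPath + "] and method [" + method + "]";
        }
        // Invalid input is filtered out rather than escaped, which is what keeps
        // security scanners from flagging HTML tags in the response.
        return "no handler found for the given uri and method";
    }
}
```

Filtering rather than escaping matches the stated goal: the message stays `application/json`, and scanners no longer see raw HTML tags in it.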
Closes: #3492 Signed-off-by: Cole White * Support use of IRSA for repository-s3 plugin credentials: added YAML Rest test case (#3499) Signed-off-by: Andriy Redko * Bump azure-storage-common from 12.15.0 to 12.16.0 in /plugins/repository-azure (#3517) * Bump azure-storage-common in /plugins/repository-azure Bumps [azure-storage-common](https://github.com/Azure/azure-sdk-for-java) from 12.15.0 to 12.16.0. - [Release notes](https://github.com/Azure/azure-sdk-for-java/releases) - [Commits](https://github.com/Azure/azure-sdk-for-java/compare/azure-storage-blob_12.15.0...azure-storage-blob_12.16.0) --- updated-dependencies: - dependency-name: com.azure:azure-storage-common dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] * Bump google-oauth-client from 1.33.3 to 1.34.0 in /plugins/discovery-gce (#3516) * Bump google-oauth-client from 1.33.3 to 1.34.0 in /plugins/discovery-gce Bumps [google-oauth-client](https://github.com/googleapis/google-oauth-java-client) from 1.33.3 to 1.34.0. - [Release notes](https://github.com/googleapis/google-oauth-java-client/releases) - [Changelog](https://github.com/googleapis/google-oauth-java-client/blob/main/CHANGELOG.md) - [Commits](https://github.com/googleapis/google-oauth-java-client/compare/v1.33.3...v1.34.0) --- updated-dependencies: - dependency-name: com.google.oauth-client:google-oauth-client dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] * Fix the support of RestClient Node Sniffer for version 2.x and update tests (#3487) Fix the support of RestClient Node Sniffer for OpenSearch 2.x, and update the unit tests for OpenSearch. The current code contains logic to be compatible with the Elasticsearch 2.x version, which conflicts with OpenSearch 2.x, so that part of the legacy code was removed. * Update the script create_test_nodes_info.bash to dump the response of the Nodes Info API GET _nodes/http for OpenSearch 1.0 and 2.0 versions, which is used for unit tests. * Remove the support of Elasticsearch version 2.x for the Sniffer * Update unit tests to validate that the Sniffer is compatible with OpenSearch 1.x and 2.x * Update the API response parser to match the array notation (in ES 6.1 and above) for the node attributes setting. As a result, the value of the `node.attr` setting will not be parsed as an array in the Sniffer when using the Sniffer on a cluster running Elasticsearch 6.0 and above. * Replace "master" node role with "cluster_manager" in unit tests Signed-off-by: Tianli Feng * Bump com.diffplug.spotless from 6.6.1 to 6.7.0 (#3513) Bumps com.diffplug.spotless from 6.6.1 to 6.7.0. --- updated-dependencies: - dependency-name: com.diffplug.spotless dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Bump guava from 18.0 to 23.0 in /plugins/ingest-attachment (#3357) * Bump guava from 18.0 to 23.0 in /plugins/ingest-attachment Bumps [guava](https://github.com/google/guava) from 18.0 to 23.0.
- [Release notes](https://github.com/google/guava/releases) - [Commits](https://github.com/google/guava/compare/v18.0...v23.0) --- updated-dependencies: - dependency-name: com.google.guava:guava dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Add more ingorance of using internal java API sun.misc.Unsafe Signed-off-by: Tianli Feng Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Co-authored-by: Tianli Feng * Added bwc version 2.0.1 (#3452) Signed-off-by: Kunal Kotwani Co-authored-by: opensearch-ci-bot * Add release notes for 1.3.3 (#3549) Signed-off-by: Xue Zhou * [Upgrade] Lucene-9.3.0-snapshot-b7231bb (#3537) Upgrades to latest snapshot of lucene 9.3; including reducing maxFullFlushMergeWaitMillis in LuceneTest.testWrapLiveDocsNotExposeAbortedDocuments to 0 ms to ensure aborted docs are not merged away in the test with the new mergeOnRefresh default policy. Signed-off-by: Nicholas Walter Knize * [Remote Store] Upload segments to remote store post refresh (#3460) * Add RemoteDirectory interface to copy segment files to/from remote store Signed-off-by: Sachin Kale Co-authored-by: Sachin Kale * Add index level setting for remote store Signed-off-by: Sachin Kale Co-authored-by: Sachin Kale * Add RemoteDirectoryFactory and use RemoteDirectory instance in RefreshListener Co-authored-by: Sachin Kale Signed-off-by: Sachin Kale * Upload segment to remote store post refresh Signed-off-by: Sachin Kale Co-authored-by: Sachin Kale * Fixing VerifyVersionConstantsIT test failure (#3574) Signed-off-by: Andriy Redko * Bump jettison from 1.4.1 to 1.5.0 in /plugins/discovery-azure-classic (#3571) * Bump jettison from 1.4.1 to 1.5.0 in /plugins/discovery-azure-classic Bumps [jettison](https://github.com/jettison-json/jettison) from 1.4.1 to 1.5.0. - [Release notes](https://github.com/jettison-json/jettison/releases) - [Commits](https://github.com/jettison-json/jettison/compare/jettison-1.4.1...jettison-1.5.0) --- updated-dependencies: - dependency-name: org.codehaus.jettison:jettison dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] * Bump google-api-services-storage from v1-rev20200814-1.30.10 to v1-rev20220608-1.32.1 in /plugins/repository-gcs (#3573) * Bump google-api-services-storage in /plugins/repository-gcs Bumps google-api-services-storage from v1-rev20200814-1.30.10 to v1-rev20220608-1.32.1. --- updated-dependencies: - dependency-name: com.google.apis:google-api-services-storage dependency-type: direct:production ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Upgrade Google HTTP Client to 1.42.0 Signed-off-by: Xue Zhou Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Co-authored-by: Xue Zhou * Add flat_skew setting to node overload decider (#3563) * Add flat_skew setting to node overload decider Signed-off-by: Rishab Nahata * Bump xmlbeans from 5.0.3 to 5.1.0 in /plugins/ingest-attachment (#3572) * Bump xmlbeans from 5.0.3 to 5.1.0 in /plugins/ingest-attachment Bumps xmlbeans from 5.0.3 to 5.1.0. 
--- updated-dependencies: - dependency-name: org.apache.xmlbeans:xmlbeans dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] * Bump google-oauth-client from 1.34.0 to 1.34.1 in /plugins/discovery-gce (#3570) * Bump google-oauth-client from 1.34.0 to 1.34.1 in /plugins/discovery-gce Bumps [google-oauth-client](https://github.com/googleapis/google-oauth-java-client) from 1.34.0 to 1.34.1. - [Release notes](https://github.com/googleapis/google-oauth-java-client/releases) - [Changelog](https://github.com/googleapis/google-oauth-java-client/blob/main/CHANGELOG.md) - [Commits](https://github.com/googleapis/google-oauth-java-client/compare/v1.34.0...v1.34.1) --- updated-dependencies: - dependency-name: com.google.oauth-client:google-oauth-client dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] * Fix for bug showing incorrect awareness attributes count in AwarenessAllocationDecider (#3428) * Fix for bug showing incorrect awareness attributes count in AwarenessAllocationDecider Signed-off-by: Anshu Agarwal * Added bwc version 1.3.4 (#3552) Signed-off-by: GitHub Co-authored-by: opensearch-ci-bot * Support dynamic node role (#3436) * Support unknown node role Currently OpenSearch only supports several built-in node roles, like the data node role. If an unknown node role is specified, the OpenSearch node will fail to start. This limits how OpenSearch can be extended to support extension functions. For example, a user may prefer to run ML tasks on a dedicated node which doesn't serve any of the built-in node roles, so the ML tasks won't impact the OpenSearch core functions. This PR removes that limitation: a user can specify any node role, and OpenSearch will start the node correctly with that unknown role. This opens the door for plugin developers to run specific tasks on dedicated nodes.
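To make the dynamic node role behaviour described above easier to picture, here is a small, self-contained sketch; it is not the actual DiscoveryNodeRole/DynamicRole code, and the class, record, and role set below are hypothetical. Known role names resolve to built-in roles, any other name is accepted as a dynamically created role instead of failing startup, and names are lower-cased to avoid confusion between e.g. "ML" and "ml":

```java
import java.util.Locale;
import java.util.Map;

// Illustration only: a resolver that never rejects a role name.
public final class NodeRoleResolver {

    // A role is just a name plus an abbreviation; built-in roles are flagged as such.
    public record Role(String roleName, String abbreviation, boolean builtIn) {}

    private static final Map<String, Role> BUILT_IN = Map.of(
        "data", new Role("data", "d", true),
        "ingest", new Role("ingest", "i", true),
        "cluster_manager", new Role("cluster_manager", "m", true)
    );

    public static Role resolve(String configuredName) {
        String name = configuredName.toLowerCase(Locale.ROOT);
        Role builtIn = BUILT_IN.get(name);
        if (builtIn != null) {
            return builtIn;
        }
        // Previously an unknown name was a startup failure; now it simply becomes
        // a dynamic, non-built-in role with the lower-cased name.
        return new Role(name, name, false);
    }
}
```

Under this model an arbitrary entry in `node.roles` (say, a dedicated ML role) resolves to a non-built-in role rather than aborting node startup, which is the behaviour the commit describes.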
Issue: https://github.com/opensearch-project/OpenSearch/issues/2877 Signed-off-by: Yaliang Wu * fix cat nodes rest API spec Signed-off-by: Yaliang Wu * fix mixed cluster IT failure Signed-off-by: Yaliang Wu * add DynamicRole Signed-off-by: Yaliang Wu * change generator method name Signed-off-by: Yaliang Wu * fix failed docker test Signed-off-by: Yaliang Wu * transform role name to lower case to avoid confusion Signed-off-by: Yaliang Wu * transform the node role abbreviation to lower case Signed-off-by: Yaliang Wu * fix checkstyle Signed-off-by: Yaliang Wu * add test for case-insensitive role name change Signed-off-by: Yaliang Wu * Rename package 'o.o.action.support.master' to 'o.o.action.support.clustermanager' (#3556) * Rename package org.opensearch.action.support.master to org.opensearch.action.support.clustermanager Signed-off-by: Tianli Feng * Rename classes with master term in the package org.opensearch.action.support.master Signed-off-by: Tianli Feng * Deprecate classes in org.opensearch.action.support.master Signed-off-by: Tianli Feng * Remove pakcage o.o.action.support.master Signed-off-by: Tianli Feng * Move package-info back Signed-off-by: Tianli Feng * Move package-info to new folder Signed-off-by: Tianli Feng * Correct the package-info Signed-off-by: Tianli Feng * Fixing flakiness of ShuffleForcedMergePolicyTests (#3591) Signed-off-by: Andriy Redko * Deprecate classes in org.opensearch.action.support.master (#3593) Signed-off-by: Tianli Feng * Add release notes for version 2.0.1 (#3595) Signed-off-by: Kunal Kotwani * Fix NPE when minBound/maxBound is not set before being called. (#3605) Signed-off-by: George Apaaboah * Added bwc version 2.0.2 (#3613) Co-authored-by: opensearch-ci-bot * Fix false positive query timeouts due to using cached time (#3454) * Fix false positive query timeouts due to using cached time Signed-off-by: Ahmad AbuKhalil * delegate nanoTime call to SearchContext Signed-off-by: Ahmad AbuKhalil * add override to SearchContext getRelativeTimeInMillis to force non cached time Signed-off-by: Ahmad AbuKhalil * Fix random gradle check failure issue 3584. (#3627) * [Segment Replication] Add components for segment replication to perform file copy. (#3525) * Add components for segment replication to perform file copy. This change adds the required components to SegmentReplicationSourceService to initiate copy and react to lifecycle events. Along with new components it refactors common file copy code from RecoverySourceHandler into reusable pieces. 
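As a rough mental model of the file-copy building blocks described above, the sketch below assumes that copying a segment file boils down to streaming fixed-size chunks to a pluggable writer; `ChunkedFileSender` and `ChunkWriter` are illustrative names, not the OpenSearch classes:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustration only: read a file in fixed-size chunks, hand each chunk to a
// pluggable writer, and flag the last chunk so the receiver knows the file
// transfer is complete.
public final class ChunkedFileSender {

    public interface ChunkWriter {
        void writeChunk(String fileName, long offset, byte[] data, int length, boolean lastChunk) throws IOException;
    }

    public static void send(Path file, int chunkSizeBytes, ChunkWriter writer) throws IOException {
        final long fileSize = Files.size(file);
        try (InputStream in = Files.newInputStream(file)) {
            final byte[] buffer = new byte[chunkSizeBytes];
            long offset = 0;
            int read;
            while ((read = in.read(buffer)) != -1) {
                final boolean last = offset + read >= fileSize;
                writer.writeChunk(file.getFileName().toString(), offset, buffer, read, last);
                offset += read;
            }
            if (fileSize == 0) {
                // An empty file still needs a terminating (empty) chunk.
                writer.writeChunk(file.getFileName().toString(), 0, buffer, 0, true);
            }
        }
    }
}
```

Throttling and retry policy would wrap the `writeChunk` call, which is roughly where the reusable pieces extracted from RecoverySourceHandler (per the description above) would slot in.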
Signed-off-by: Marc Handalian * Deprecate public methods and variables with master term in package 'org.opensearch.action.support.master' (#3617) Signed-off-by: Tianli Feng * Add replication orchestration for a single shard (#3533) * implement segment replication target Signed-off-by: Poojita Raj * test added Signed-off-by: Poojita Raj * changes to tests + finalizeReplication Signed-off-by: Poojita Raj * fix style check Signed-off-by: Poojita Raj * addressing comments + fix gradle check Signed-off-by: Poojita Raj * added test + addressed review comments Signed-off-by: Poojita Raj * [BUG] opensearch crashes on closed client connection before search reply (#3626) * [BUG] opensearch crashes on closed client connection before search reply Signed-off-by: Andriy Redko * Addressing code review comments Signed-off-by: Andriy Redko * Add all deprecated methods in the package with the new name 'org.opensearch.action.support.clustermanager' (#3644) Signed-off-by: Tianli Feng * Introduce TranslogManager implementations decoupled from the Engine (#3638) * Introduce decoupled translog manager interfaces Signed-off-by: Bukhtawar Khan * Adding onNewCheckpoint to Start Replication on Replica Shard when Segment Replication is turned on (#3540) * Adding onNewCheckpoint and its test to start replication. The check for the latest checkpoint and the replaying logic are removed from this commit and will be added in a different PR Signed-off-by: Rishikesh1159 * Changing binding/inject logic and addressing comments from PR Signed-off-by: Rishikesh1159 * Applying spotless check Signed-off-by: Rishikesh1159 * Moving shouldProcessCheckpoint() to IndexShard, and removing some trace logs Signed-off-by: Rishikesh1159 * applying spotlessApply Signed-off-by: Rishikesh1159 * Adding more info to log statement in targetservice class Signed-off-by: Rishikesh1159 * applying spotlessApply Signed-off-by: Rishikesh1159 * Addressing comments on PR Signed-off-by: Rishikesh1159 * Adding teardown() in SegmentReplicationTargetServiceTests. Signed-off-by: Rishikesh1159 * fixing testShouldProcessCheckpoint() in SegmentReplicationTargetServiceTests Signed-off-by: Rishikesh1159 * Removing CheckpointPublisherProvider in IndicesModule Signed-off-by: Rishikesh1159 * spotless check apply Signed-off-by: Rishikesh1159 * Remove class org.opensearch.action.support.master.AcknowledgedResponse (#3662) * Remove class org.opensearch.action.support.master.AcknowledgedResponse Signed-off-by: Tianli Feng * Remove class org.opensearch.action.support.master.AcknowledgedRequest RequestBuilder ShardsAcknowledgedResponse Signed-off-by: Tianli Feng * Restore AcknowledgedResponse and AcknowledgedRequest to package org.opensearch.action.support.master (#3669) Signed-off-by: Tianli Feng * [BUG] Custom POM configuration for ZIP publication produces duplicate tags (url, scm) (#3656) * [BUG] Custom POM configuration for ZIP publication produces duplicate tags (url, scm) Signed-off-by: Andriy Redko * Added test case for pluginZip with POM Signed-off-by: Andriy Redko * Support both Gradle 6.8.x and Gradle 7.4.x Signed-off-by: Andriy Redko * Adding 2.2.0 Bwc version to main (#3673) * Upgraded to t-digest 3.3.
(#3634) * Revert renaming method onMaster() and offMaster() in interface LocalNodeMasterListener (#3686) Signed-off-by: Tianli Feng * Upgrading AWS SDK dependency for native plugins (#3694) * Merge branch 'feature/point_in_time' of https://github.com/opensearch-project/OpenSearch into fb Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Co-authored-by: Suraj Singh Co-authored-by: Marc Handalian Co-authored-by: Tianli Feng Co-authored-by: Andriy Redko Co-authored-by: Rabi Panda Co-authored-by: Nick Knize Co-authored-by: Poojita Raj Co-authored-by: Rishikesh Pasham <62345295+Rishikesh1159@users.noreply.github.com> Co-authored-by: Ankit Jain Co-authored-by: vpehkone <101240162+vpehkone@users.noreply.github.com> Co-authored-by: sdp Co-authored-by: Kartik Ganesh Co-authored-by: Cole White <42356806+shdubsh@users.noreply.github.com> Co-authored-by: opensearch-trigger-bot[bot] <98922864+opensearch-trigger-bot[bot]@users.noreply.github.com> Co-authored-by: opensearch-ci-bot Co-authored-by: Xue Zhou <85715413+xuezhou25@users.noreply.github.com> Co-authored-by: Sachin Kale Co-authored-by: Sachin Kale Co-authored-by: Xue Zhou Co-authored-by: Rishab Nahata Co-authored-by: Anshu Agarwal Co-authored-by: Yaliang Wu Co-authored-by: Kunal Kotwani Co-authored-by: George Apaaboah <35894485+GeorgeAp@users.noreply.github.com> Co-authored-by: Ahmad AbuKhalil <105249973+aabukhalil@users.noreply.github.com> Co-authored-by: Bukhtawar Khan Co-authored-by: Sarat Vemulapalli Co-authored-by: Daniel (dB.) Doubrovkine --- .ci/bwcVersions | 1 + .../org/opensearch/gradle/PublishPlugin.java | 50 ++- .../gradle/pluginzip/PublishTests.java | 118 +++++- buildSrc/version.properties | 3 + .../org/opensearch/client/ClusterClient.java | 2 +- .../client/ClusterRequestConverters.java | 6 +- .../org/opensearch/client/IndicesClient.java | 2 +- .../client/IndicesRequestConverters.java | 16 +- .../org/opensearch/client/IngestClient.java | 2 +- .../client/IngestRequestConverters.java | 6 +- .../opensearch/client/RequestConverters.java | 6 +- .../client/RestHighLevelClient.java | 2 +- .../org/opensearch/client/SnapshotClient.java | 2 +- .../client/SnapshotRequestConverters.java | 22 +- .../client/indices/CloseIndexResponse.java | 2 +- .../client/indices/CreateIndexResponse.java | 2 +- .../indices/rollover/RolloverResponse.java | 2 +- .../opensearch/client/ClusterClientIT.java | 2 +- .../client/ClusterRequestConvertersTests.java | 6 +- .../opensearch/client/IndicesClientIT.java | 2 +- .../client/IndicesRequestConvertersTests.java | 2 +- .../org/opensearch/client/IngestClientIT.java | 2 +- .../client/IngestRequestConvertersTests.java | 2 +- .../client/RequestConvertersTests.java | 6 +- .../org/opensearch/client/SnapshotIT.java | 2 +- .../SnapshotRequestConvertersTests.java | 4 +- .../opensearch/client/StoredScriptsIT.java | 4 +- .../core/AcknowledgedResponseTests.java | 8 +- .../ClusterClientDocumentationIT.java | 14 +- .../IndicesClientDocumentationIT.java | 38 +- .../IngestClientDocumentationIT.java | 14 +- .../SnapshotClientDocumentationIT.java | 38 +- .../StoredScriptsDocumentationIT.java | 14 +- .../indices/CloseIndexResponseTests.java | 4 +- .../url/URLSnapshotRestoreIT.java | 2 +- .../azure/classic/AzureSimpleTests.java | 8 +- .../classic/AzureTwoStartedNodesTests.java | 8 +- plugins/discovery-ec2/build.gradle | 4 - .../aws-java-sdk-core-1.11.749.jar.sha1 | 1 - .../aws-java-sdk-core-1.12.247.jar.sha1 | 1 + .../aws-java-sdk-ec2-1.11.749.jar.sha1 | 1 - 
.../aws-java-sdk-ec2-1.12.247.jar.sha1 | 1 + .../discovery/gce/GceDiscoverTests.java | 4 +- .../index/mapper/size/SizeMappingIT.java | 2 +- .../AzureStorageCleanupThirdPartyTests.java | 2 +- .../GoogleCloudStorageThirdPartyTests.java | 2 +- .../hdfs/HdfsRepositoryTests.java | 2 +- .../repositories/hdfs/HdfsTests.java | 2 +- plugins/repository-s3/build.gradle | 9 +- .../aws-java-sdk-core-1.11.749.jar.sha1 | 1 - .../aws-java-sdk-core-1.12.247.jar.sha1 | 1 + .../aws-java-sdk-s3-1.11.749.jar.sha1 | 1 - .../aws-java-sdk-s3-1.12.247.jar.sha1 | 1 + .../aws-java-sdk-sts-1.11.749.jar.sha1 | 1 - .../aws-java-sdk-sts-1.12.247.jar.sha1 | 1 + .../licenses/jmespath-java-1.11.749.jar.sha1 | 1 - .../licenses/jmespath-java-1.12.247.jar.sha1 | 1 + .../s3/S3RepositoryThirdPartyTests.java | 2 +- .../180_percentiles_tdigest_metric.yml | 149 +++++-- server/build.gradle | 2 +- server/licenses/t-digest-3.2.jar.sha1 | 1 - server/licenses/t-digest-3.3.jar.sha1 | 1 + ...ansportClusterStateActionDisruptionIT.java | 8 +- .../admin/indices/create/CreateIndexIT.java | 2 +- .../datastream/DataStreamTestCase.java | 2 +- .../admin/indices/exists/IndicesExistsIT.java | 2 +- .../action/bulk/BulkIntegrationIT.java | 2 +- .../opensearch/aliases/IndexAliasesIT.java | 2 +- .../org/opensearch/blocks/SimpleBlocksIT.java | 2 +- .../opensearch/cluster/ClusterHealthIT.java | 2 +- .../SpecificClusterManagerNodesIT.java | 6 +- .../coordination/RareClusterStateIT.java | 2 +- .../cluster/shards/ClusterShardLimitIT.java | 2 +- .../index/seqno/RetentionLeaseIT.java | 2 +- .../indices/IndicesOptionsIntegrationIT.java | 2 +- .../mapping/UpdateMappingIntegrationIT.java | 4 +- .../state/CloseWhileRelocatingShardsIT.java | 2 +- .../indices/state/OpenCloseIndexIT.java | 2 +- .../org/opensearch/ingest/IngestClientIT.java | 2 +- ...gestProcessorNotInstalledOnAllNodesIT.java | 2 +- .../suggest/CompletionSuggestSearchIT.java | 2 +- .../opensearch/snapshots/CloneSnapshotIT.java | 2 +- .../snapshots/ConcurrentSnapshotsIT.java | 2 +- .../DedicatedClusterSnapshotRestoreIT.java | 2 +- .../opensearch/snapshots/RepositoriesIT.java | 2 +- .../SharedClusterSnapshotRestoreIT.java | 2 +- .../org/opensearch/OpenSearchException.java | 7 + .../src/main/java/org/opensearch/Version.java | 3 +- ...ansportClusterAllocationExplainAction.java | 2 +- ...nsportAddVotingConfigExclusionsAction.java | 2 +- ...portClearVotingConfigExclusionsAction.java | 2 +- .../cluster/health/ClusterHealthRequest.java | 4 +- .../health/TransportClusterHealthAction.java | 9 +- .../cleanup/CleanupRepositoryRequest.java | 2 +- .../TransportCleanupRepositoryAction.java | 2 +- .../delete/DeleteRepositoryAction.java | 2 +- .../delete/DeleteRepositoryRequest.java | 2 +- .../DeleteRepositoryRequestBuilder.java | 4 +- .../TransportDeleteRepositoryAction.java | 4 +- .../get/TransportGetRepositoriesAction.java | 2 +- .../repositories/put/PutRepositoryAction.java | 2 +- .../put/PutRepositoryRequest.java | 2 +- .../put/PutRepositoryRequestBuilder.java | 4 +- .../put/TransportPutRepositoryAction.java | 4 +- .../TransportVerifyRepositoryAction.java | 2 +- .../verify/VerifyRepositoryRequest.java | 2 +- .../reroute/ClusterRerouteRequest.java | 6 +- .../reroute/ClusterRerouteRequestBuilder.java | 2 +- .../reroute/ClusterRerouteResponse.java | 2 +- .../TransportClusterRerouteAction.java | 2 +- .../ClusterUpdateSettingsRequest.java | 2 +- .../ClusterUpdateSettingsRequestBuilder.java | 2 +- .../ClusterUpdateSettingsResponse.java | 2 +- .../TransportClusterUpdateSettingsAction.java | 2 +- 
.../TransportClusterSearchShardsAction.java | 2 +- .../snapshots/clone/CloneSnapshotAction.java | 2 +- .../clone/CloneSnapshotRequestBuilder.java | 2 +- .../clone/TransportCloneSnapshotAction.java | 8 +- .../create/CreateSnapshotRequest.java | 6 +- .../create/TransportCreateSnapshotAction.java | 2 +- .../delete/DeleteSnapshotAction.java | 2 +- .../delete/DeleteSnapshotRequestBuilder.java | 2 +- .../delete/TransportDeleteSnapshotAction.java | 4 +- .../get/TransportGetSnapshotsAction.java | 2 +- .../TransportRestoreSnapshotAction.java | 2 +- .../TransportSnapshotsStatusAction.java | 4 +- .../state/TransportClusterStateAction.java | 2 +- .../DeleteStoredScriptAction.java | 2 +- .../DeleteStoredScriptRequest.java | 2 +- .../DeleteStoredScriptRequestBuilder.java | 4 +- .../storedscripts/PutStoredScriptAction.java | 2 +- .../storedscripts/PutStoredScriptRequest.java | 2 +- .../PutStoredScriptRequestBuilder.java | 4 +- .../TransportDeleteStoredScriptAction.java | 9 +- .../TransportGetStoredScriptAction.java | 7 +- .../TransportPutStoredScriptAction.java | 9 +- .../TransportPendingClusterTasksAction.java | 2 +- .../indices/alias/IndicesAliasesAction.java | 2 +- .../indices/alias/IndicesAliasesRequest.java | 2 +- .../alias/IndicesAliasesRequestBuilder.java | 4 +- .../alias/TransportIndicesAliasesAction.java | 6 +- .../alias/get/TransportGetAliasesAction.java | 2 +- .../indices/close/CloseIndexRequest.java | 2 +- .../close/CloseIndexRequestBuilder.java | 2 +- .../indices/close/CloseIndexResponse.java | 2 +- .../close/TransportCloseIndexAction.java | 9 +- .../indices/create/AutoCreateAction.java | 10 +- .../indices/create/CreateIndexRequest.java | 2 +- .../create/CreateIndexRequestBuilder.java | 2 +- .../indices/create/CreateIndexResponse.java | 2 +- .../create/TransportCreateIndexAction.java | 4 +- .../delete/DeleteDanglingIndexAction.java | 2 +- .../delete/DeleteDanglingIndexRequest.java | 2 +- .../TransportDeleteDanglingIndexAction.java | 4 +- .../ImportDanglingIndexAction.java | 2 +- .../ImportDanglingIndexRequest.java | 2 +- .../TransportImportDanglingIndexAction.java | 2 +- .../datastream/CreateDataStreamAction.java | 8 +- .../datastream/DeleteDataStreamAction.java | 6 +- .../datastream/GetDataStreamAction.java | 2 +- .../indices/delete/DeleteIndexAction.java | 2 +- .../indices/delete/DeleteIndexRequest.java | 2 +- .../delete/DeleteIndexRequestBuilder.java | 4 +- .../delete/TransportDeleteIndexAction.java | 6 +- .../indices/TransportIndicesExistsAction.java | 2 +- .../indices/get/TransportGetIndexAction.java | 2 +- .../get/TransportGetMappingsAction.java | 2 +- .../mapping/put/AutoPutMappingAction.java | 2 +- .../indices/mapping/put/PutMappingAction.java | 2 +- .../mapping/put/PutMappingRequest.java | 4 +- .../mapping/put/PutMappingRequestBuilder.java | 4 +- .../put/TransportAutoPutMappingAction.java | 4 +- .../put/TransportPutMappingAction.java | 6 +- .../admin/indices/open/OpenIndexRequest.java | 2 +- .../indices/open/OpenIndexRequestBuilder.java | 2 +- .../admin/indices/open/OpenIndexResponse.java | 2 +- .../open/TransportOpenIndexAction.java | 4 +- .../readonly/AddIndexBlockRequest.java | 2 +- .../readonly/AddIndexBlockRequestBuilder.java | 2 +- .../readonly/AddIndexBlockResponse.java | 2 +- .../TransportAddIndexBlockAction.java | 6 +- .../rollover/MetadataRolloverService.java | 2 +- .../indices/rollover/RolloverRequest.java | 2 +- .../indices/rollover/RolloverResponse.java | 2 +- .../rollover/TransportRolloverAction.java | 6 +- .../get/TransportGetSettingsAction.java | 2 +- 
.../put/TransportUpdateSettingsAction.java | 6 +- .../settings/put/UpdateSettingsAction.java | 2 +- .../settings/put/UpdateSettingsRequest.java | 6 +- .../put/UpdateSettingsRequestBuilder.java | 4 +- .../TransportIndicesShardStoresAction.java | 2 +- .../admin/indices/shrink/ResizeRequest.java | 2 +- .../indices/shrink/ResizeRequestBuilder.java | 2 +- .../indices/shrink/TransportResizeAction.java | 8 +- .../delete/DeleteComponentTemplateAction.java | 2 +- .../DeleteComposableIndexTemplateAction.java | 2 +- .../delete/DeleteIndexTemplateAction.java | 2 +- .../DeleteIndexTemplateRequestBuilder.java | 2 +- ...ransportDeleteComponentTemplateAction.java | 6 +- ...rtDeleteComposableIndexTemplateAction.java | 6 +- .../TransportDeleteIndexTemplateAction.java | 6 +- .../TransportGetComponentTemplateAction.java | 2 +- ...sportGetComposableIndexTemplateAction.java | 2 +- .../get/TransportGetIndexTemplatesAction.java | 2 +- .../TransportSimulateIndexTemplateAction.java | 2 +- .../post/TransportSimulateTemplateAction.java | 2 +- .../put/PutComponentTemplateAction.java | 2 +- .../put/PutComposableIndexTemplateAction.java | 2 +- .../template/put/PutIndexTemplateAction.java | 2 +- .../put/PutIndexTemplateRequestBuilder.java | 2 +- .../TransportPutComponentTemplateAction.java | 6 +- ...sportPutComposableIndexTemplateAction.java | 6 +- .../put/TransportPutIndexTemplateAction.java | 6 +- .../post/TransportUpgradeSettingsAction.java | 6 +- .../upgrade/post/UpgradeSettingsAction.java | 2 +- .../upgrade/post/UpgradeSettingsRequest.java | 2 +- .../action/bulk/TransportBulkAction.java | 2 +- .../action/ingest/DeletePipelineAction.java | 2 +- .../action/ingest/DeletePipelineRequest.java | 2 +- .../ingest/DeletePipelineRequestBuilder.java | 2 +- .../ingest/DeletePipelineTransportAction.java | 4 +- .../ingest/GetPipelineTransportAction.java | 2 +- .../action/ingest/PutPipelineAction.java | 2 +- .../action/ingest/PutPipelineRequest.java | 2 +- .../ingest/PutPipelineRequestBuilder.java | 2 +- .../ingest/PutPipelineTransportAction.java | 4 +- .../search/AbstractSearchAsyncAction.java | 12 +- .../action/search/TransportSearchAction.java | 76 ++++ .../clustermanager/AcknowledgedRequest.java | 105 ----- .../AcknowledgedRequestBuilder.java | 73 ---- .../clustermanager/AcknowledgedResponse.java | 149 ------- ...terManagerNodeOperationRequestBuilder.java | 29 +- .../ClusterManagerNodeRequest.java | 53 ++- .../ShardsAcknowledgedResponse.java | 117 ------ .../TransportClusterManagerNodeAction.java | 43 +- .../info/TransportClusterInfoAction.java | 20 +- .../support/master/AcknowledgedRequest.java | 62 ++- .../master/AcknowledgedRequestBuilder.java | 30 +- .../support/master/AcknowledgedResponse.java | 99 ++++- .../MasterNodeOperationRequestBuilder.java | 3 +- ...MasterNodeReadOperationRequestBuilder.java | 8 +- .../support/master/MasterNodeRequest.java | 2 + .../master/ShardsAcknowledgedResponse.java | 71 +++- .../master/TransportMasterNodeAction.java | 8 +- .../master/TransportMasterNodeReadAction.java | 3 +- .../info/ClusterInfoRequestBuilder.java | 1 - .../info/TransportClusterInfoAction.java | 9 + .../action/update/TransportUpdateAction.java | 2 +- .../opensearch/client/ClusterAdminClient.java | 2 +- .../opensearch/client/IndicesAdminClient.java | 2 +- .../client/support/AbstractClient.java | 2 +- .../cluster/LocalNodeMasterListener.java | 8 +- .../action/index/MappingUpdatedAction.java | 4 +- .../MetadataCreateDataStreamService.java | 2 +- .../MetadataIndexTemplateService.java | 6 +- 
.../metadata/TemplateUpgradeService.java | 6 +- .../settings/ConsistentSettingsService.java | 4 +- .../org/opensearch/index/engine/Engine.java | 5 +- .../index/engine/InternalEngine.java | 2 +- .../index/engine/LifecycleAware.java | 20 + .../opensearch/index/shard/IndexShard.java | 74 +++- .../org/opensearch/index/store/Store.java | 46 +++ .../translog/InternalTranslogManager.java | 322 +++++++++++++++ .../index/translog/NoOpTranslogManager.java | 110 ++++++ .../index/translog/TranslogManager.java | 108 +++++ .../translog/TranslogRecoveryRunner.java | 28 ++ .../translog/WriteOnlyTranslogManager.java | 69 ++++ .../CompositeTranslogEventListener.java | 110 ++++++ .../listener/TranslogEventListener.java | 50 +++ .../index/translog/listener/package-info.java | 11 + .../org/opensearch/indices/IndicesModule.java | 2 + .../indices/RunUnderPrimaryPermit.java | 72 ++++ .../indices/recovery/FileChunkWriter.java | 31 ++ .../recovery/RecoverySourceHandler.java | 259 ++---------- .../indices/recovery/RecoveryTarget.java | 18 - .../recovery/RecoveryTargetHandler.java | 14 +- .../recovery/RemoteRecoveryTargetHandler.java | 81 +--- .../recovery/RetryableTransportClient.java | 2 +- .../replication/GetSegmentFilesRequest.java | 4 + .../OngoingSegmentReplications.java | 230 +++++++++++ .../RemoteSegmentFileChunkWriter.java | 125 ++++++ .../SegmentFileTransferHandler.java | 239 +++++++++++ .../SegmentReplicationSourceHandler.java | 170 ++++++++ .../SegmentReplicationSourceService.java | 140 ++++--- .../replication/SegmentReplicationState.java | 51 ++- .../replication/SegmentReplicationTarget.java | 187 ++++++++- .../SegmentReplicationTargetService.java | 35 +- .../checkpoint/PublishCheckpointAction.java | 9 +- .../checkpoint/ReplicationCheckpoint.java | 2 +- ...SegmentReplicationCheckpointPublisher.java | 1 + .../indices/replication/common/CopyState.java | 16 +- .../common/ReplicationCollection.java | 10 + .../common/ReplicationFailedException.java | 41 ++ .../replication/common/ReplicationTarget.java | 17 +- .../SegmentReplicationTransportRequest.java | 25 ++ .../org/opensearch/ingest/IngestService.java | 2 +- .../main/java/org/opensearch/node/Node.java | 18 + .../CompletionPersistentTaskAction.java | 2 +- .../RemovePersistentTaskAction.java | 2 +- .../persistent/StartPersistentTaskAction.java | 2 +- .../UpdatePersistentTaskStatusAction.java | 2 +- .../org/opensearch/rest/BaseRestHandler.java | 2 +- .../cluster/RestCleanupRepositoryAction.java | 4 +- .../cluster/RestCloneSnapshotAction.java | 4 +- .../cluster/RestClusterGetSettingsAction.java | 4 +- .../cluster/RestClusterHealthAction.java | 4 +- .../cluster/RestClusterRerouteAction.java | 4 +- .../admin/cluster/RestClusterStateAction.java | 4 +- .../RestClusterUpdateSettingsAction.java | 4 +- .../cluster/RestCreateSnapshotAction.java | 4 +- .../cluster/RestDeleteRepositoryAction.java | 4 +- .../cluster/RestDeleteSnapshotAction.java | 4 +- .../cluster/RestDeleteStoredScriptAction.java | 4 +- .../cluster/RestGetRepositoriesAction.java | 4 +- .../admin/cluster/RestGetSnapshotsAction.java | 4 +- .../cluster/RestGetStoredScriptAction.java | 2 +- .../RestPendingClusterTasksAction.java | 4 +- .../cluster/RestPutRepositoryAction.java | 4 +- .../cluster/RestPutStoredScriptAction.java | 2 +- .../cluster/RestRestoreSnapshotAction.java | 4 +- .../cluster/RestSnapshotsStatusAction.java | 4 +- .../cluster/RestVerifyRepositoryAction.java | 4 +- .../RestDeleteDanglingIndexAction.java | 4 +- .../RestImportDanglingIndexAction.java | 4 +- 
.../indices/RestAddIndexBlockAction.java | 4 +- .../admin/indices/RestCloseIndexAction.java | 4 +- .../admin/indices/RestCreateIndexAction.java | 4 +- .../RestDeleteComponentTemplateAction.java | 2 +- ...stDeleteComposableIndexTemplateAction.java | 2 +- .../admin/indices/RestDeleteIndexAction.java | 4 +- .../RestDeleteIndexTemplateAction.java | 4 +- .../RestGetComponentTemplateAction.java | 2 +- .../RestGetComposableIndexTemplateAction.java | 2 +- .../indices/RestGetIndexTemplateAction.java | 4 +- .../admin/indices/RestGetIndicesAction.java | 4 +- .../admin/indices/RestGetMappingAction.java | 6 +- .../admin/indices/RestGetSettingsAction.java | 4 +- .../indices/RestIndexDeleteAliasesAction.java | 4 +- .../indices/RestIndexPutAliasAction.java | 4 +- .../indices/RestIndicesAliasesAction.java | 4 +- .../admin/indices/RestOpenIndexAction.java | 4 +- .../RestPutComponentTemplateAction.java | 2 +- .../RestPutComposableIndexTemplateAction.java | 2 +- .../indices/RestPutIndexTemplateAction.java | 2 +- .../admin/indices/RestPutMappingAction.java | 4 +- .../admin/indices/RestResizeHandler.java | 2 +- .../indices/RestRolloverIndexAction.java | 4 +- .../RestSimulateIndexTemplateAction.java | 4 +- .../indices/RestSimulateTemplateAction.java | 4 +- .../indices/RestUpdateSettingsAction.java | 4 +- .../rest/action/cat/RestAllocationAction.java | 4 +- .../rest/action/cat/RestIndicesAction.java | 12 +- .../rest/action/cat/RestMasterAction.java | 4 +- .../rest/action/cat/RestNodeAttrsAction.java | 4 +- .../rest/action/cat/RestNodesAction.java | 4 +- .../cat/RestPendingClusterTasksAction.java | 4 +- .../rest/action/cat/RestPluginsAction.java | 4 +- .../action/cat/RestRepositoriesAction.java | 4 +- .../rest/action/cat/RestSegmentsAction.java | 4 +- .../rest/action/cat/RestShardsAction.java | 4 +- .../rest/action/cat/RestSnapshotAction.java | 4 +- .../rest/action/cat/RestTemplatesAction.java | 4 +- .../rest/action/cat/RestThreadPoolAction.java | 4 +- .../ingest/RestDeletePipelineAction.java | 2 +- .../action/ingest/RestGetPipelineAction.java | 2 +- .../action/ingest/RestPutPipelineAction.java | 2 +- .../org/opensearch/script/ScriptService.java | 2 +- .../opensearch/snapshots/RestoreService.java | 2 +- .../snapshots/SnapshotsService.java | 10 +- ...UpdateIndexShardSnapshotStatusRequest.java | 2 +- .../ExceptionSerializationTests.java | 2 + .../reroute/ClusterRerouteRequestTests.java | 13 +- .../create/CreateSnapshotRequestTests.java | 4 +- .../restore/RestoreSnapshotRequestTests.java | 4 +- .../alias/IndicesAliasesRequestTests.java | 2 +- .../indices/close/CloseIndexRequestTests.java | 10 +- .../indices/get/GetIndexActionTests.java | 4 +- .../TransportRolloverActionTests.java | 4 +- .../settings/get/GetSettingsActionTests.java | 8 +- ...dateSettingsRequestSerializationTests.java | 8 +- .../AbstractSearchAsyncActionTests.java | 155 +++++++- .../ShardsAcknowledgedResponseTests.java | 1 + ...ransportClusterManagerNodeActionTests.java | 17 +- .../TransportMasterNodeActionUtils.java | 4 +- .../MetadataIndexTemplateServiceTests.java | 2 +- .../metadata/TemplateUpgradeServiceTests.java | 2 +- .../service/ClusterApplierServiceTests.java | 4 +- .../ConsistentSettingsServiceTests.java | 16 +- .../opensearch/index/store/StoreTests.java | 51 ++- .../InternalTranslogManagerTests.java | 279 +++++++++++++ .../translog/TranslogManagerTestCase.java | 217 ++++++++++ .../listener/TranslogListenerTests.java | 126 ++++++ .../recovery/RecoverySourceHandlerTests.java | 38 +- .../OngoingSegmentReplicationsTests.java | 231 +++++++++++ 
.../SegmentFileTransferHandlerTests.java | 251 ++++++++++++ .../SegmentReplicationSourceHandlerTests.java | 193 +++++++++ .../SegmentReplicationSourceServiceTests.java | 104 +++-- .../SegmentReplicationTargetServiceTests.java | 83 +++- .../SegmentReplicationTargetTests.java | 370 ++++++++++++++++++ .../PublishCheckpointActionTests.java | 17 +- .../replication/common/CopyStateTests.java | 10 +- .../InternalOrPrivateSettingsPlugin.java | 2 +- .../blobstore/BlobStoreRepositoryTests.java | 2 +- .../cluster/RestClusterHealthActionTests.java | 2 +- .../InternalTDigestPercentilesRanksTests.java | 3 +- .../InternalTDigestPercentilesTests.java | 3 +- .../TDigestPercentilesAggregatorTests.java | 14 +- .../snapshots/SnapshotResiliencyTests.java | 2 +- .../AbstractSnapshotIntegTestCase.java | 2 +- .../java/org/opensearch/test/TestCluster.java | 2 +- .../test/hamcrest/OpenSearchAssertions.java | 4 +- 411 files changed, 5792 insertions(+), 1638 deletions(-) delete mode 100644 plugins/discovery-ec2/licenses/aws-java-sdk-core-1.11.749.jar.sha1 create mode 100644 plugins/discovery-ec2/licenses/aws-java-sdk-core-1.12.247.jar.sha1 delete mode 100644 plugins/discovery-ec2/licenses/aws-java-sdk-ec2-1.11.749.jar.sha1 create mode 100644 plugins/discovery-ec2/licenses/aws-java-sdk-ec2-1.12.247.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/aws-java-sdk-core-1.11.749.jar.sha1 create mode 100644 plugins/repository-s3/licenses/aws-java-sdk-core-1.12.247.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/aws-java-sdk-s3-1.11.749.jar.sha1 create mode 100644 plugins/repository-s3/licenses/aws-java-sdk-s3-1.12.247.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/aws-java-sdk-sts-1.11.749.jar.sha1 create mode 100644 plugins/repository-s3/licenses/aws-java-sdk-sts-1.12.247.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/jmespath-java-1.11.749.jar.sha1 create mode 100644 plugins/repository-s3/licenses/jmespath-java-1.12.247.jar.sha1 delete mode 100644 server/licenses/t-digest-3.2.jar.sha1 create mode 100644 server/licenses/t-digest-3.3.jar.sha1 delete mode 100644 server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedRequest.java delete mode 100644 server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedRequestBuilder.java delete mode 100644 server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedResponse.java delete mode 100644 server/src/main/java/org/opensearch/action/support/clustermanager/ShardsAcknowledgedResponse.java create mode 100644 server/src/main/java/org/opensearch/index/engine/LifecycleAware.java create mode 100644 server/src/main/java/org/opensearch/index/translog/InternalTranslogManager.java create mode 100644 server/src/main/java/org/opensearch/index/translog/NoOpTranslogManager.java create mode 100644 server/src/main/java/org/opensearch/index/translog/TranslogManager.java create mode 100644 server/src/main/java/org/opensearch/index/translog/TranslogRecoveryRunner.java create mode 100644 server/src/main/java/org/opensearch/index/translog/WriteOnlyTranslogManager.java create mode 100644 server/src/main/java/org/opensearch/index/translog/listener/CompositeTranslogEventListener.java create mode 100644 server/src/main/java/org/opensearch/index/translog/listener/TranslogEventListener.java create mode 100644 server/src/main/java/org/opensearch/index/translog/listener/package-info.java create mode 100644 server/src/main/java/org/opensearch/indices/RunUnderPrimaryPermit.java create mode 100644 
server/src/main/java/org/opensearch/indices/recovery/FileChunkWriter.java create mode 100644 server/src/main/java/org/opensearch/indices/replication/OngoingSegmentReplications.java create mode 100644 server/src/main/java/org/opensearch/indices/replication/RemoteSegmentFileChunkWriter.java create mode 100644 server/src/main/java/org/opensearch/indices/replication/SegmentFileTransferHandler.java create mode 100644 server/src/main/java/org/opensearch/indices/replication/SegmentReplicationSourceHandler.java create mode 100644 server/src/main/java/org/opensearch/indices/replication/common/ReplicationFailedException.java create mode 100644 server/src/test/java/org/opensearch/index/translog/InternalTranslogManagerTests.java create mode 100644 server/src/test/java/org/opensearch/index/translog/TranslogManagerTestCase.java create mode 100644 server/src/test/java/org/opensearch/index/translog/listener/TranslogListenerTests.java create mode 100644 server/src/test/java/org/opensearch/indices/replication/OngoingSegmentReplicationsTests.java create mode 100644 server/src/test/java/org/opensearch/indices/replication/SegmentFileTransferHandlerTests.java create mode 100644 server/src/test/java/org/opensearch/indices/replication/SegmentReplicationSourceHandlerTests.java create mode 100644 server/src/test/java/org/opensearch/indices/replication/SegmentReplicationTargetTests.java diff --git a/.ci/bwcVersions b/.ci/bwcVersions index f2d4aef40b4d9..72816119c62c3 100644 --- a/.ci/bwcVersions +++ b/.ci/bwcVersions @@ -45,3 +45,4 @@ BWC_VERSION: - "2.0.1" - "2.0.2" - "2.1.0" + - "2.2.0" diff --git a/buildSrc/src/main/java/org/opensearch/gradle/PublishPlugin.java b/buildSrc/src/main/java/org/opensearch/gradle/PublishPlugin.java index a28015784c4be..2bdef8e4cd244 100644 --- a/buildSrc/src/main/java/org/opensearch/gradle/PublishPlugin.java +++ b/buildSrc/src/main/java/org/opensearch/gradle/PublishPlugin.java @@ -36,6 +36,7 @@ import com.github.jengelman.gradle.plugins.shadow.ShadowExtension; import groovy.util.Node; import groovy.util.NodeList; + import org.opensearch.gradle.info.BuildParams; import org.opensearch.gradle.precommit.PomValidationPrecommitPlugin; import org.opensearch.gradle.util.Util; @@ -55,6 +56,9 @@ import org.gradle.api.tasks.bundling.Jar; import org.gradle.language.base.plugins.LifecycleBasePlugin; +import java.lang.invoke.MethodHandle; +import java.lang.invoke.MethodHandles; +import java.lang.invoke.MethodType; import java.util.concurrent.Callable; import static org.opensearch.gradle.util.GradleUtils.maybeConfigure; @@ -146,9 +150,49 @@ public String call() throws Exception { private static void addScmInfo(XmlProvider xml) { Node root = xml.asNode(); - root.appendNode("url", Util.urlFromOrigin(BuildParams.getGitOrigin())); - Node scmNode = root.appendNode("scm"); - scmNode.appendNode("url", BuildParams.getGitOrigin()); + Node url = null, scm = null; + + for (final Object child : root.children()) { + if (child instanceof Node) { + final Node node = (Node) child; + final Object name = node.name(); + + try { + // For Gradle 6.8 and below, the class is groovy.xml.QName + // For Gradle 7.4 and above, the class is groovy.namespace.QName + if (name != null && name.getClass().getSimpleName().equals("QName")) { + final MethodHandle handle = MethodHandles.publicLookup() + .findVirtual(name.getClass(), "matches", MethodType.methodType(boolean.class, Object.class)) + .bindTo(name); + + if ((boolean) handle.invoke("url")) { + url = node; + } else if ((boolean) handle.invoke("scm")) { + scm = node; + } + } + } 
catch (final Throwable ex) { + // Not a suitable QName type we could use ... + } + + if ("url".equals(name)) { + url = node; + } else if ("scm".equals(name)) { + scm = node; + } + } + } + + // Only include URL section if it is not provided in the POM already + if (url == null) { + root.appendNode("url", Util.urlFromOrigin(BuildParams.getGitOrigin())); + } + + // Only include SCM section if it is not provided in the POM already + if (scm == null) { + Node scmNode = root.appendNode("scm"); + scmNode.appendNode("url", BuildParams.getGitOrigin()); + } } /** Adds a javadocJar task to generate a jar containing javadocs. */ diff --git a/buildSrc/src/test/java/org/opensearch/gradle/pluginzip/PublishTests.java b/buildSrc/src/test/java/org/opensearch/gradle/pluginzip/PublishTests.java index 851c450699bd7..8c1314c4b4394 100644 --- a/buildSrc/src/test/java/org/opensearch/gradle/pluginzip/PublishTests.java +++ b/buildSrc/src/test/java/org/opensearch/gradle/pluginzip/PublishTests.java @@ -52,26 +52,62 @@ public void tearDown() { @Test public void testZipPublish() throws IOException, XmlPullParserException { - Project project = ProjectBuilder.builder().build(); String zipPublishTask = "publishPluginZipPublicationToZipStagingRepository"; - // Apply the opensearch.pluginzip plugin - project.getPluginManager().apply("opensearch.pluginzip"); - // Check if the plugin has been applied to the project - assertTrue(project.getPluginManager().hasPlugin("opensearch.pluginzip")); - // Check if the project has the task from class PublishToMavenRepository after plugin apply - assertNotNull(project.getTasks().withType(PublishToMavenRepository.class)); - // Create a mock bundlePlugin task - Zip task = project.getTasks().create("bundlePlugin", Zip.class); - Publish.configMaven(project); - // Check if the main task publishPluginZipPublicationToZipStagingRepository exists after plugin apply - assertTrue(project.getTasks().getNames().contains(zipPublishTask)); - assertNotNull("Task to generate: ", project.getTasks().getByName(zipPublishTask)); - // Run Gradle functional tests, but calling a build.gradle file, that resembles the plugin publish behavior + prepareProjectForPublishTask(zipPublishTask); + + // Generate the build.gradle file + String buildFileContent = "apply plugin: 'maven-publish' \n" + + "apply plugin: 'java' \n" + + "publishing {\n" + + " repositories {\n" + + " maven {\n" + + " url = 'local-staging-repo/'\n" + + " name = 'zipStaging'\n" + + " }\n" + + " }\n" + + " publications {\n" + + " pluginZip(MavenPublication) {\n" + + " groupId = 'org.opensearch.plugin' \n" + + " artifactId = 'sample-plugin' \n" + + " version = '2.0.0.0' \n" + + " artifact('sample-plugin.zip') \n" + + " }\n" + + " }\n" + + "}"; + writeString(projectDir.newFile("build.gradle"), buildFileContent); + // Execute the task publishPluginZipPublicationToZipStagingRepository + List allArguments = new ArrayList(); + allArguments.add("build"); + allArguments.add(zipPublishTask); + GradleRunner runner = GradleRunner.create(); + runner.forwardOutput(); + runner.withPluginClasspath(); + runner.withArguments(allArguments); + runner.withProjectDir(projectDir.getRoot()); + BuildResult result = runner.build(); + // Check if task publishMavenzipPublicationToZipstagingRepository has ran well + assertEquals(SUCCESS, result.task(":" + zipPublishTask).getOutcome()); + // check if the zip has been published to local staging repo + assertTrue( + new File(projectDir.getRoot(), 
"local-staging-repo/org/opensearch/plugin/sample-plugin/2.0.0.0/sample-plugin-2.0.0.0.zip") + .exists() + ); + assertEquals(SUCCESS, result.task(":" + "build").getOutcome()); + // Parse the maven file and validate the groupID to org.opensearch.plugin + MavenXpp3Reader reader = new MavenXpp3Reader(); + Model model = reader.read( + new FileReader( + new File(projectDir.getRoot(), "local-staging-repo/org/opensearch/plugin/sample-plugin/2.0.0.0/sample-plugin-2.0.0.0.pom") + ) + ); + assertEquals(model.getGroupId(), "org.opensearch.plugin"); + } + + @Test + public void testZipPublishWithPom() throws IOException, XmlPullParserException { + String zipPublishTask = "publishPluginZipPublicationToZipStagingRepository"; + Project project = prepareProjectForPublishTask(zipPublishTask); - // Create a sample plugin zip file - File sampleZip = new File(projectDir.getRoot(), "sample-plugin.zip"); - Files.createFile(sampleZip.toPath()); - writeString(projectDir.newFile("settings.gradle"), ""); // Generate the build.gradle file String buildFileContent = "apply plugin: 'maven-publish' \n" + "apply plugin: 'java' \n" @@ -88,6 +124,26 @@ public void testZipPublish() throws IOException, XmlPullParserException { + " artifactId = 'sample-plugin' \n" + " version = '2.0.0.0' \n" + " artifact('sample-plugin.zip') \n" + + " pom {\n" + + " name = 'sample-plugin'\n" + + " description = 'sample-description'\n" + + " licenses {\n" + + " license {\n" + + " name = \"The Apache License, Version 2.0\"\n" + + " url = \"http://www.apache.org/licenses/LICENSE-2.0.txt\"\n" + + " }\n" + + " }\n" + + " developers {\n" + + " developer {\n" + + " name = 'opensearch'\n" + + " url = 'https://github.com/opensearch-project/OpenSearch'\n" + + " }\n" + + " }\n" + + " url = 'https://github.com/opensearch-project/OpenSearch'\n" + + " scm {\n" + + " url = 'https://github.com/opensearch-project/OpenSearch'\n" + + " }\n" + + " }" + " }\n" + " }\n" + "}"; @@ -118,6 +174,32 @@ public void testZipPublish() throws IOException, XmlPullParserException { ) ); assertEquals(model.getGroupId(), "org.opensearch.plugin"); + assertEquals(model.getUrl(), "https://github.com/opensearch-project/OpenSearch"); + } + + protected Project prepareProjectForPublishTask(String zipPublishTask) throws IOException { + Project project = ProjectBuilder.builder().build(); + + // Apply the opensearch.pluginzip plugin + project.getPluginManager().apply("opensearch.pluginzip"); + // Check if the plugin has been applied to the project + assertTrue(project.getPluginManager().hasPlugin("opensearch.pluginzip")); + // Check if the project has the task from class PublishToMavenRepository after plugin apply + assertNotNull(project.getTasks().withType(PublishToMavenRepository.class)); + // Create a mock bundlePlugin task + Zip task = project.getTasks().create("bundlePlugin", Zip.class); + Publish.configMaven(project); + // Check if the main task publishPluginZipPublicationToZipStagingRepository exists after plugin apply + assertTrue(project.getTasks().getNames().contains(zipPublishTask)); + assertNotNull("Task to generate: ", project.getTasks().getByName(zipPublishTask)); + // Run Gradle functional tests, but calling a build.gradle file, that resembles the plugin publish behavior + + // Create a sample plugin zip file + File sampleZip = new File(projectDir.getRoot(), "sample-plugin.zip"); + Files.createFile(sampleZip.toPath()); + writeString(projectDir.newFile("settings.gradle"), ""); + + return project; } private void writeString(File file, String string) throws IOException { diff 
--git a/buildSrc/version.properties b/buildSrc/version.properties index 87dbad73229b4..2a7f8aeaae705 100644 --- a/buildSrc/version.properties +++ b/buildSrc/version.properties @@ -31,6 +31,9 @@ httpasyncclient = 4.1.4 commonslogging = 1.2 commonscodec = 1.13 +# plugin dependencies +aws = 1.12.247 + # when updating this version, you need to ensure compatibility with: # - plugins/ingest-attachment (transitive dependency, check the upstream POM) # - distribution/tools/plugin-cli diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/ClusterClient.java b/client/rest-high-level/src/main/java/org/opensearch/client/ClusterClient.java index 1c943ec24411a..10cfec9497862 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/ClusterClient.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/ClusterClient.java @@ -39,7 +39,7 @@ import org.opensearch.action.admin.cluster.settings.ClusterGetSettingsResponse; import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest; import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.cluster.RemoteInfoRequest; import org.opensearch.client.cluster.RemoteInfoResponse; import org.opensearch.client.indices.ComponentTemplatesExistRequest; diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/ClusterRequestConverters.java b/client/rest-high-level/src/main/java/org/opensearch/client/ClusterRequestConverters.java index da90521512dea..5c3e403e4ce98 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/ClusterRequestConverters.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/ClusterRequestConverters.java @@ -58,7 +58,7 @@ static Request clusterPutSettings(ClusterUpdateSettingsRequest clusterUpdateSett RequestConverters.Params parameters = new RequestConverters.Params(); parameters.withTimeout(clusterUpdateSettingsRequest.timeout()); - parameters.withClusterManagerTimeout(clusterUpdateSettingsRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(clusterUpdateSettingsRequest.clusterManagerNodeTimeout()); request.addParameters(parameters.asMap()); request.setEntity(RequestConverters.createEntity(clusterUpdateSettingsRequest, RequestConverters.REQUEST_BODY_CONTENT_TYPE)); return request; @@ -69,7 +69,7 @@ static Request clusterGetSettings(ClusterGetSettingsRequest clusterGetSettingsRe RequestConverters.Params parameters = new RequestConverters.Params(); parameters.withLocal(clusterGetSettingsRequest.local()); parameters.withIncludeDefaults(clusterGetSettingsRequest.includeDefaults()); - parameters.withClusterManagerTimeout(clusterGetSettingsRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(clusterGetSettingsRequest.clusterManagerNodeTimeout()); request.addParameters(parameters.asMap()); return request; } @@ -88,7 +88,7 @@ static Request clusterHealth(ClusterHealthRequest healthRequest) { .withWaitForNodes(healthRequest.waitForNodes()) .withWaitForEvents(healthRequest.waitForEvents()) .withTimeout(healthRequest.timeout()) - .withClusterManagerTimeout(healthRequest.masterNodeTimeout()) + .withClusterManagerTimeout(healthRequest.clusterManagerNodeTimeout()) .withLocal(healthRequest.local()) .withLevel(healthRequest.level()); request.addParameters(params.asMap()); diff --git 
a/client/rest-high-level/src/main/java/org/opensearch/client/IndicesClient.java b/client/rest-high-level/src/main/java/org/opensearch/client/IndicesClient.java index 2a1d471e73eb5..9b4586ec6bf89 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/IndicesClient.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/IndicesClient.java @@ -52,7 +52,7 @@ import org.opensearch.action.admin.indices.template.delete.DeleteIndexTemplateRequest; import org.opensearch.action.admin.indices.validate.query.ValidateQueryRequest; import org.opensearch.action.admin.indices.validate.query.ValidateQueryResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.indices.AnalyzeRequest; import org.opensearch.client.indices.AnalyzeResponse; import org.opensearch.client.indices.CloseIndexRequest; diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/IndicesRequestConverters.java b/client/rest-high-level/src/main/java/org/opensearch/client/IndicesRequestConverters.java index 4bd2f57e6b998..9508faf14c898 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/IndicesRequestConverters.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/IndicesRequestConverters.java @@ -119,7 +119,7 @@ static Request deleteIndex(DeleteIndexRequest deleteIndexRequest) { RequestConverters.Params parameters = new RequestConverters.Params(); parameters.withTimeout(deleteIndexRequest.timeout()); - parameters.withClusterManagerTimeout(deleteIndexRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(deleteIndexRequest.clusterManagerNodeTimeout()); parameters.withIndicesOptions(deleteIndexRequest.indicesOptions()); request.addParameters(parameters.asMap()); return request; @@ -131,7 +131,7 @@ static Request openIndex(OpenIndexRequest openIndexRequest) { RequestConverters.Params parameters = new RequestConverters.Params(); parameters.withTimeout(openIndexRequest.timeout()); - parameters.withClusterManagerTimeout(openIndexRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(openIndexRequest.clusterManagerNodeTimeout()); parameters.withWaitForActiveShards(openIndexRequest.waitForActiveShards()); parameters.withIndicesOptions(openIndexRequest.indicesOptions()); request.addParameters(parameters.asMap()); @@ -168,7 +168,7 @@ static Request updateAliases(IndicesAliasesRequest indicesAliasesRequest) throws RequestConverters.Params parameters = new RequestConverters.Params(); parameters.withTimeout(indicesAliasesRequest.timeout()); - parameters.withClusterManagerTimeout(indicesAliasesRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(indicesAliasesRequest.clusterManagerNodeTimeout()); request.addParameters(parameters.asMap()); request.setEntity(RequestConverters.createEntity(indicesAliasesRequest, RequestConverters.REQUEST_BODY_CONTENT_TYPE)); return request; @@ -349,7 +349,7 @@ private static Request resize(org.opensearch.action.admin.indices.shrink.ResizeR RequestConverters.Params params = new RequestConverters.Params(); params.withTimeout(resizeRequest.timeout()); - params.withClusterManagerTimeout(resizeRequest.masterNodeTimeout()); + params.withClusterManagerTimeout(resizeRequest.clusterManagerNodeTimeout()); params.withWaitForActiveShards(resizeRequest.getTargetIndexRequest().waitForActiveShards()); request.addParameters(params.asMap()); 
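// Illustrative only: a minimal sketch of the converter pattern the surrounding hunks apply,
// assuming a hypothetical FooRequest type and "/_foo" endpoint; the accessors used here
// (timeout(), clusterManagerNodeTimeout(), withTimeout(), withClusterManagerTimeout(),
// addParameters(), asMap()) all appear verbatim in the hunks above, and the rename leaves
// the emitted cluster_manager_timeout query parameter unchanged.

    static Request convertFoo(FooRequest fooRequest) {
        Request request = new Request(HttpPut.METHOD_NAME, "/_foo");
        RequestConverters.Params parameters = new RequestConverters.Params();
        parameters.withTimeout(fooRequest.timeout());
        // clusterManagerNodeTimeout() replaces the deprecated masterNodeTimeout() accessor
        parameters.withClusterManagerTimeout(fooRequest.clusterManagerNodeTimeout());
        request.addParameters(parameters.asMap());
        return request;
    }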
request.setEntity(RequestConverters.createEntity(resizeRequest, RequestConverters.REQUEST_BODY_CONTENT_TYPE)); @@ -386,7 +386,7 @@ static Request getSettings(GetSettingsRequest getSettingsRequest) { params.withIndicesOptions(getSettingsRequest.indicesOptions()); params.withLocal(getSettingsRequest.local()); params.withIncludeDefaults(getSettingsRequest.includeDefaults()); - params.withClusterManagerTimeout(getSettingsRequest.masterNodeTimeout()); + params.withClusterManagerTimeout(getSettingsRequest.clusterManagerNodeTimeout()); request.addParameters(params.asMap()); return request; } @@ -429,7 +429,7 @@ static Request indexPutSettings(UpdateSettingsRequest updateSettingsRequest) thr RequestConverters.Params parameters = new RequestConverters.Params(); parameters.withTimeout(updateSettingsRequest.timeout()); - parameters.withClusterManagerTimeout(updateSettingsRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(updateSettingsRequest.clusterManagerNodeTimeout()); parameters.withIndicesOptions(updateSettingsRequest.indicesOptions()); parameters.withPreserveExisting(updateSettingsRequest.isPreserveExisting()); request.addParameters(parameters.asMap()); @@ -443,7 +443,7 @@ static Request putTemplate(PutIndexTemplateRequest putIndexTemplateRequest) thro .build(); Request request = new Request(HttpPut.METHOD_NAME, endpoint); RequestConverters.Params params = new RequestConverters.Params(); - params.withClusterManagerTimeout(putIndexTemplateRequest.masterNodeTimeout()); + params.withClusterManagerTimeout(putIndexTemplateRequest.clusterManagerNodeTimeout()); if (putIndexTemplateRequest.create()) { params.putParam("create", Boolean.TRUE.toString()); } @@ -587,7 +587,7 @@ static Request deleteTemplate(DeleteIndexTemplateRequest deleteIndexTemplateRequ String endpoint = new RequestConverters.EndpointBuilder().addPathPartAsIs("_template").addPathPart(name).build(); Request request = new Request(HttpDelete.METHOD_NAME, endpoint); RequestConverters.Params params = new RequestConverters.Params(); - params.withClusterManagerTimeout(deleteIndexTemplateRequest.masterNodeTimeout()); + params.withClusterManagerTimeout(deleteIndexTemplateRequest.clusterManagerNodeTimeout()); request.addParameters(params.asMap()); return request; } diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/IngestClient.java b/client/rest-high-level/src/main/java/org/opensearch/client/IngestClient.java index 512d0eb09ed84..cd304019e771c 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/IngestClient.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/IngestClient.java @@ -39,7 +39,7 @@ import org.opensearch.action.ingest.PutPipelineRequest; import org.opensearch.action.ingest.SimulatePipelineRequest; import org.opensearch.action.ingest.SimulatePipelineResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import java.io.IOException; import java.util.Collections; diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/IngestRequestConverters.java b/client/rest-high-level/src/main/java/org/opensearch/client/IngestRequestConverters.java index 829f6cf0bbba4..2504dec3af36e 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/IngestRequestConverters.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/IngestRequestConverters.java @@ -54,7 +54,7 @@ static Request getPipeline(GetPipelineRequest getPipelineRequest) { 
Request request = new Request(HttpGet.METHOD_NAME, endpoint); RequestConverters.Params parameters = new RequestConverters.Params(); - parameters.withClusterManagerTimeout(getPipelineRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(getPipelineRequest.clusterManagerNodeTimeout()); request.addParameters(parameters.asMap()); return request; } @@ -67,7 +67,7 @@ static Request putPipeline(PutPipelineRequest putPipelineRequest) throws IOExcep RequestConverters.Params parameters = new RequestConverters.Params(); parameters.withTimeout(putPipelineRequest.timeout()); - parameters.withClusterManagerTimeout(putPipelineRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(putPipelineRequest.clusterManagerNodeTimeout()); request.addParameters(parameters.asMap()); request.setEntity(RequestConverters.createEntity(putPipelineRequest, RequestConverters.REQUEST_BODY_CONTENT_TYPE)); return request; @@ -81,7 +81,7 @@ static Request deletePipeline(DeletePipelineRequest deletePipelineRequest) { RequestConverters.Params parameters = new RequestConverters.Params(); parameters.withTimeout(deletePipelineRequest.timeout()); - parameters.withClusterManagerTimeout(deletePipelineRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(deletePipelineRequest.clusterManagerNodeTimeout()); request.addParameters(parameters.asMap()); return request; } diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/RequestConverters.java b/client/rest-high-level/src/main/java/org/opensearch/client/RequestConverters.java index 277759c921fbf..fb04f18a5b864 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/RequestConverters.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/RequestConverters.java @@ -721,7 +721,7 @@ static Request putScript(PutStoredScriptRequest putStoredScriptRequest) throws I Request request = new Request(HttpPost.METHOD_NAME, endpoint); Params params = new Params(); params.withTimeout(putStoredScriptRequest.timeout()); - params.withClusterManagerTimeout(putStoredScriptRequest.masterNodeTimeout()); + params.withClusterManagerTimeout(putStoredScriptRequest.clusterManagerNodeTimeout()); if (Strings.hasText(putStoredScriptRequest.context())) { params.putParam("context", putStoredScriptRequest.context()); } @@ -776,7 +776,7 @@ static Request getScript(GetStoredScriptRequest getStoredScriptRequest) { String endpoint = new EndpointBuilder().addPathPartAsIs("_scripts").addPathPart(getStoredScriptRequest.id()).build(); Request request = new Request(HttpGet.METHOD_NAME, endpoint); Params params = new Params(); - params.withClusterManagerTimeout(getStoredScriptRequest.masterNodeTimeout()); + params.withClusterManagerTimeout(getStoredScriptRequest.clusterManagerNodeTimeout()); request.addParameters(params.asMap()); return request; } @@ -786,7 +786,7 @@ static Request deleteScript(DeleteStoredScriptRequest deleteStoredScriptRequest) Request request = new Request(HttpDelete.METHOD_NAME, endpoint); Params params = new Params(); params.withTimeout(deleteStoredScriptRequest.timeout()); - params.withClusterManagerTimeout(deleteStoredScriptRequest.masterNodeTimeout()); + params.withClusterManagerTimeout(deleteStoredScriptRequest.clusterManagerNodeTimeout()); request.addParameters(params.asMap()); return request; } diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/RestHighLevelClient.java b/client/rest-high-level/src/main/java/org/opensearch/client/RestHighLevelClient.java index 50864ed829944..f3360630a26b7 
100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/RestHighLevelClient.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/RestHighLevelClient.java @@ -66,7 +66,7 @@ import org.opensearch.action.search.SearchRequest; import org.opensearch.action.search.SearchResponse; import org.opensearch.action.search.SearchScrollRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.update.UpdateRequest; import org.opensearch.action.update.UpdateResponse; import org.opensearch.client.core.CountRequest; diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/SnapshotClient.java b/client/rest-high-level/src/main/java/org/opensearch/client/SnapshotClient.java index 78c140dc8f4d4..85a793dec24ce 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/SnapshotClient.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/SnapshotClient.java @@ -51,7 +51,7 @@ import org.opensearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse; import org.opensearch.action.admin.cluster.snapshots.status.SnapshotsStatusRequest; import org.opensearch.action.admin.cluster.snapshots.status.SnapshotsStatusResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import java.io.IOException; diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/SnapshotRequestConverters.java b/client/rest-high-level/src/main/java/org/opensearch/client/SnapshotRequestConverters.java index 3b2c72266a30b..3d44820966608 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/SnapshotRequestConverters.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/SnapshotRequestConverters.java @@ -63,7 +63,7 @@ static Request getRepositories(GetRepositoriesRequest getRepositoriesRequest) { Request request = new Request(HttpGet.METHOD_NAME, endpoint); RequestConverters.Params parameters = new RequestConverters.Params(); - parameters.withClusterManagerTimeout(getRepositoriesRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(getRepositoriesRequest.clusterManagerNodeTimeout()); parameters.withLocal(getRepositoriesRequest.local()); request.addParameters(parameters.asMap()); return request; @@ -74,7 +74,7 @@ static Request createRepository(PutRepositoryRequest putRepositoryRequest) throw Request request = new Request(HttpPut.METHOD_NAME, endpoint); RequestConverters.Params parameters = new RequestConverters.Params(); - parameters.withClusterManagerTimeout(putRepositoryRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(putRepositoryRequest.clusterManagerNodeTimeout()); parameters.withTimeout(putRepositoryRequest.timeout()); if (putRepositoryRequest.verify() == false) { parameters.putParam("verify", "false"); @@ -91,7 +91,7 @@ static Request deleteRepository(DeleteRepositoryRequest deleteRepositoryRequest) Request request = new Request(HttpDelete.METHOD_NAME, endpoint); RequestConverters.Params parameters = new RequestConverters.Params(); - parameters.withClusterManagerTimeout(deleteRepositoryRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(deleteRepositoryRequest.clusterManagerNodeTimeout()); parameters.withTimeout(deleteRepositoryRequest.timeout()); request.addParameters(parameters.asMap()); return request; @@ -105,7 +105,7 @@ static Request 
verifyRepository(VerifyRepositoryRequest verifyRepositoryRequest) Request request = new Request(HttpPost.METHOD_NAME, endpoint); RequestConverters.Params parameters = new RequestConverters.Params(); - parameters.withClusterManagerTimeout(verifyRepositoryRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(verifyRepositoryRequest.clusterManagerNodeTimeout()); parameters.withTimeout(verifyRepositoryRequest.timeout()); request.addParameters(parameters.asMap()); return request; @@ -119,7 +119,7 @@ static Request cleanupRepository(CleanupRepositoryRequest cleanupRepositoryReque Request request = new Request(HttpPost.METHOD_NAME, endpoint); RequestConverters.Params parameters = new RequestConverters.Params(); - parameters.withClusterManagerTimeout(cleanupRepositoryRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(cleanupRepositoryRequest.clusterManagerNodeTimeout()); parameters.withTimeout(cleanupRepositoryRequest.timeout()); request.addParameters(parameters.asMap()); return request; @@ -132,7 +132,7 @@ static Request createSnapshot(CreateSnapshotRequest createSnapshotRequest) throw .build(); Request request = new Request(HttpPut.METHOD_NAME, endpoint); RequestConverters.Params params = new RequestConverters.Params(); - params.withClusterManagerTimeout(createSnapshotRequest.masterNodeTimeout()); + params.withClusterManagerTimeout(createSnapshotRequest.clusterManagerNodeTimeout()); params.withWaitForCompletion(createSnapshotRequest.waitForCompletion()); request.addParameters(params.asMap()); request.setEntity(RequestConverters.createEntity(createSnapshotRequest, RequestConverters.REQUEST_BODY_CONTENT_TYPE)); @@ -148,7 +148,7 @@ static Request cloneSnapshot(CloneSnapshotRequest cloneSnapshotRequest) throws I .build(); Request request = new Request(HttpPut.METHOD_NAME, endpoint); RequestConverters.Params params = new RequestConverters.Params(); - params.withClusterManagerTimeout(cloneSnapshotRequest.masterNodeTimeout()); + params.withClusterManagerTimeout(cloneSnapshotRequest.clusterManagerNodeTimeout()); request.addParameters(params.asMap()); request.setEntity(RequestConverters.createEntity(cloneSnapshotRequest, RequestConverters.REQUEST_BODY_CONTENT_TYPE)); return request; @@ -167,7 +167,7 @@ static Request getSnapshots(GetSnapshotsRequest getSnapshotsRequest) { Request request = new Request(HttpGet.METHOD_NAME, endpoint); RequestConverters.Params parameters = new RequestConverters.Params(); - parameters.withClusterManagerTimeout(getSnapshotsRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(getSnapshotsRequest.clusterManagerNodeTimeout()); parameters.putParam("ignore_unavailable", Boolean.toString(getSnapshotsRequest.ignoreUnavailable())); parameters.putParam("verbose", Boolean.toString(getSnapshotsRequest.verbose())); request.addParameters(parameters.asMap()); @@ -183,7 +183,7 @@ static Request snapshotsStatus(SnapshotsStatusRequest snapshotsStatusRequest) { Request request = new Request(HttpGet.METHOD_NAME, endpoint); RequestConverters.Params parameters = new RequestConverters.Params(); - parameters.withClusterManagerTimeout(snapshotsStatusRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(snapshotsStatusRequest.clusterManagerNodeTimeout()); parameters.withIgnoreUnavailable(snapshotsStatusRequest.ignoreUnavailable()); request.addParameters(parameters.asMap()); return request; @@ -197,7 +197,7 @@ static Request restoreSnapshot(RestoreSnapshotRequest restoreSnapshotRequest) th .build(); Request request = new 
Request(HttpPost.METHOD_NAME, endpoint); RequestConverters.Params parameters = new RequestConverters.Params(); - parameters.withClusterManagerTimeout(restoreSnapshotRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(restoreSnapshotRequest.clusterManagerNodeTimeout()); parameters.withWaitForCompletion(restoreSnapshotRequest.waitForCompletion()); request.addParameters(parameters.asMap()); request.setEntity(RequestConverters.createEntity(restoreSnapshotRequest, RequestConverters.REQUEST_BODY_CONTENT_TYPE)); @@ -212,7 +212,7 @@ static Request deleteSnapshot(DeleteSnapshotRequest deleteSnapshotRequest) { Request request = new Request(HttpDelete.METHOD_NAME, endpoint); RequestConverters.Params parameters = new RequestConverters.Params(); - parameters.withClusterManagerTimeout(deleteSnapshotRequest.masterNodeTimeout()); + parameters.withClusterManagerTimeout(deleteSnapshotRequest.clusterManagerNodeTimeout()); request.addParameters(parameters.asMap()); return request; } diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/indices/CloseIndexResponse.java b/client/rest-high-level/src/main/java/org/opensearch/client/indices/CloseIndexResponse.java index 3740f4f3fc5ab..817d1c08532c6 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/indices/CloseIndexResponse.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/indices/CloseIndexResponse.java @@ -33,7 +33,7 @@ import org.opensearch.OpenSearchException; import org.opensearch.action.support.DefaultShardOperationFailedException; -import org.opensearch.action.support.clustermanager.ShardsAcknowledgedResponse; +import org.opensearch.action.support.master.ShardsAcknowledgedResponse; import org.opensearch.common.Nullable; import org.opensearch.common.ParseField; import org.opensearch.common.xcontent.ConstructingObjectParser; diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/indices/CreateIndexResponse.java b/client/rest-high-level/src/main/java/org/opensearch/client/indices/CreateIndexResponse.java index b7a94eb5ea8b8..7e1ea2894961d 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/indices/CreateIndexResponse.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/indices/CreateIndexResponse.java @@ -32,7 +32,7 @@ package org.opensearch.client.indices; -import org.opensearch.action.support.clustermanager.ShardsAcknowledgedResponse; +import org.opensearch.action.support.master.ShardsAcknowledgedResponse; import org.opensearch.common.ParseField; import org.opensearch.common.xcontent.ConstructingObjectParser; import org.opensearch.common.xcontent.ObjectParser; diff --git a/client/rest-high-level/src/main/java/org/opensearch/client/indices/rollover/RolloverResponse.java b/client/rest-high-level/src/main/java/org/opensearch/client/indices/rollover/RolloverResponse.java index 415f3dbec249f..0303dba2535e7 100644 --- a/client/rest-high-level/src/main/java/org/opensearch/client/indices/rollover/RolloverResponse.java +++ b/client/rest-high-level/src/main/java/org/opensearch/client/indices/rollover/RolloverResponse.java @@ -32,7 +32,7 @@ package org.opensearch.client.indices.rollover; -import org.opensearch.action.support.clustermanager.ShardsAcknowledgedResponse; +import org.opensearch.action.support.master.ShardsAcknowledgedResponse; import org.opensearch.common.ParseField; import org.opensearch.common.xcontent.ConstructingObjectParser; import org.opensearch.common.xcontent.XContentParser; diff --git 
a/client/rest-high-level/src/test/java/org/opensearch/client/ClusterClientIT.java b/client/rest-high-level/src/test/java/org/opensearch/client/ClusterClientIT.java index 40059af46774f..71b869fb59e7b 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/ClusterClientIT.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/ClusterClientIT.java @@ -41,7 +41,7 @@ import org.opensearch.action.admin.cluster.settings.ClusterGetSettingsResponse; import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest; import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.cluster.RemoteConnectionInfo; import org.opensearch.client.cluster.RemoteInfoRequest; import org.opensearch.client.cluster.RemoteInfoResponse; diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/ClusterRequestConvertersTests.java b/client/rest-high-level/src/test/java/org/opensearch/client/ClusterRequestConvertersTests.java index e1c232103b207..27adc18fd37b8 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/ClusterRequestConvertersTests.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/ClusterRequestConvertersTests.java @@ -38,7 +38,7 @@ import org.opensearch.action.admin.cluster.settings.ClusterGetSettingsRequest; import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest; import org.opensearch.action.support.ActiveShardCount; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.client.cluster.RemoteInfoRequest; import org.opensearch.cluster.health.ClusterHealthStatus; import org.opensearch.common.Priority; @@ -101,13 +101,13 @@ public void testClusterHealth() { break; case "clusterManagerTimeout": expectedParams.put("timeout", "30s"); - healthRequest.masterNodeTimeout(clusterManagerTimeout); + healthRequest.clusterManagerNodeTimeout(clusterManagerTimeout); expectedParams.put("cluster_manager_timeout", clusterManagerTimeout); break; case "both": healthRequest.timeout(timeout); expectedParams.put("timeout", timeout); - healthRequest.masterNodeTimeout(timeout); + healthRequest.clusterManagerNodeTimeout(timeout); expectedParams.put("cluster_manager_timeout", timeout); break; case "none": diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/IndicesClientIT.java b/client/rest-high-level/src/test/java/org/opensearch/client/IndicesClientIT.java index aa7af5a9d1250..f9c8851f8839e 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/IndicesClientIT.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/IndicesClientIT.java @@ -65,7 +65,7 @@ import org.opensearch.action.support.IndicesOptions; import org.opensearch.action.support.WriteRequest; import org.opensearch.action.support.broadcast.BroadcastResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.indices.AnalyzeRequest; import org.opensearch.client.indices.AnalyzeResponse; import org.opensearch.client.indices.CloseIndexRequest; diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/IndicesRequestConvertersTests.java 
b/client/rest-high-level/src/test/java/org/opensearch/client/IndicesRequestConvertersTests.java index a277e65d2ac33..bf6d6c922fdd7 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/IndicesRequestConvertersTests.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/IndicesRequestConvertersTests.java @@ -53,7 +53,7 @@ import org.opensearch.action.admin.indices.shrink.ResizeType; import org.opensearch.action.admin.indices.template.delete.DeleteIndexTemplateRequest; import org.opensearch.action.admin.indices.validate.query.ValidateQueryRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.client.indices.AnalyzeRequest; import org.opensearch.client.indices.CloseIndexRequest; import org.opensearch.client.indices.CreateDataStreamRequest; diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/IngestClientIT.java b/client/rest-high-level/src/test/java/org/opensearch/client/IngestClientIT.java index e85ddc21b8fda..78a3202f35892 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/IngestClientIT.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/IngestClientIT.java @@ -41,7 +41,7 @@ import org.opensearch.action.ingest.SimulateDocumentVerboseResult; import org.opensearch.action.ingest.SimulatePipelineRequest; import org.opensearch.action.ingest.SimulatePipelineResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.bytes.BytesReference; import org.opensearch.common.xcontent.XContentBuilder; import org.opensearch.common.xcontent.XContentType; diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/IngestRequestConvertersTests.java b/client/rest-high-level/src/test/java/org/opensearch/client/IngestRequestConvertersTests.java index c65fa95c5e92a..200069ade1ea2 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/IngestRequestConvertersTests.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/IngestRequestConvertersTests.java @@ -40,7 +40,7 @@ import org.opensearch.action.ingest.GetPipelineRequest; import org.opensearch.action.ingest.PutPipelineRequest; import org.opensearch.action.ingest.SimulatePipelineRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.bytes.BytesArray; import org.opensearch.common.xcontent.XContentType; import org.opensearch.test.OpenSearchTestCase; diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/RequestConvertersTests.java b/client/rest-high-level/src/test/java/org/opensearch/client/RequestConvertersTests.java index 1f1b4543cf704..d70d72ff35b16 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/RequestConvertersTests.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/RequestConvertersTests.java @@ -61,7 +61,7 @@ import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.IndicesOptions; import org.opensearch.action.support.WriteRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import 
org.opensearch.action.support.replication.ReplicationRequest; import org.opensearch.action.update.UpdateRequest; @@ -2129,7 +2129,7 @@ static void setRandomTimeoutTimeValue(Consumer setter, TimeValue defa } static void setRandomClusterManagerTimeout(ClusterManagerNodeRequest request, Map expectedParams) { - setRandomClusterManagerTimeout(request::masterNodeTimeout, expectedParams); + setRandomClusterManagerTimeout(request::clusterManagerNodeTimeout, expectedParams); } static void setRandomClusterManagerTimeout(TimedRequest request, Map expectedParams) { @@ -2145,7 +2145,7 @@ static void setRandomClusterManagerTimeout(Consumer setter, Map { @Override - protected org.opensearch.action.support.clustermanager.AcknowledgedResponse createServerTestInstance(XContentType xContentType) { - return new org.opensearch.action.support.clustermanager.AcknowledgedResponse(randomBoolean()); + protected org.opensearch.action.support.master.AcknowledgedResponse createServerTestInstance(XContentType xContentType) { + return new org.opensearch.action.support.master.AcknowledgedResponse(randomBoolean()); } @Override @@ -55,7 +55,7 @@ protected AcknowledgedResponse doParseToClientInstance(XContentParser parser) th @Override protected void assertInstances( - org.opensearch.action.support.clustermanager.AcknowledgedResponse serverTestInstance, + org.opensearch.action.support.master.AcknowledgedResponse serverTestInstance, AcknowledgedResponse clientInstance ) { assertThat(clientInstance.isAcknowledged(), is(serverTestInstance.isAcknowledged())); diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/documentation/ClusterClientDocumentationIT.java b/client/rest-high-level/src/test/java/org/opensearch/client/documentation/ClusterClientDocumentationIT.java index f75c6a10a8afe..baebd12e22a99 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/documentation/ClusterClientDocumentationIT.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/documentation/ClusterClientDocumentationIT.java @@ -41,7 +41,7 @@ import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest; import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsResponse; import org.opensearch.action.support.ActiveShardCount; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchRestHighLevelClientTestCase; import org.opensearch.client.RequestOptions; import org.opensearch.client.RestHighLevelClient; @@ -147,8 +147,8 @@ public void testClusterPutSettings() throws IOException { request.timeout("2m"); // <2> // end::put-settings-request-timeout // tag::put-settings-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::put-settings-request-masterTimeout // tag::put-settings-execute @@ -222,8 +222,8 @@ public void testClusterGetSettings() throws IOException { // end::get-settings-request-local // tag::get-settings-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::get-settings-request-masterTimeout // tag::get-settings-execute @@ -299,8 
+299,8 @@ public void testClusterHealth() throws IOException { // end::health-request-timeout // tag::health-request-master-timeout - request.masterNodeTimeout(TimeValue.timeValueSeconds(20)); // <1> - request.masterNodeTimeout("20s"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueSeconds(20)); // <1> + request.clusterManagerNodeTimeout("20s"); // <2> // end::health-request-master-timeout // tag::health-request-wait-status diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/documentation/IndicesClientDocumentationIT.java b/client/rest-high-level/src/test/java/org/opensearch/client/documentation/IndicesClientDocumentationIT.java index 9e6bdd8d769a6..85c5d622f6f60 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/documentation/IndicesClientDocumentationIT.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/documentation/IndicesClientDocumentationIT.java @@ -63,7 +63,7 @@ import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.DefaultShardOperationFailedException; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchRestHighLevelClientTestCase; import org.opensearch.client.GetAliasesResponse; import org.opensearch.client.RequestOptions; @@ -235,8 +235,8 @@ public void testDeleteIndex() throws IOException { request.timeout("2m"); // <2> // end::delete-index-request-timeout // tag::delete-index-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::delete-index-request-masterTimeout // tag::delete-index-request-indicesOptions request.indicesOptions(IndicesOptions.lenientExpandOpen()); // <1> @@ -801,8 +801,8 @@ public void testOpenIndex() throws Exception { request.timeout("2m"); // <2> // end::open-index-request-timeout // tag::open-index-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::open-index-request-masterTimeout // tag::open-index-request-waitForActiveShards request.waitForActiveShards(2); // <1> @@ -1530,8 +1530,8 @@ public void testUpdateAliases() throws Exception { request.timeout("2m"); // <2> // end::update-aliases-request-timeout // tag::update-aliases-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::update-aliases-request-masterTimeout // tag::update-aliases-execute @@ -1600,8 +1600,8 @@ public void testShrinkIndex() throws Exception { request.timeout("2m"); // <2> // end::shrink-index-request-timeout // tag::shrink-index-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::shrink-index-request-masterTimeout // tag::shrink-index-request-waitForActiveShards 
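// Illustrative only: the tagged documentation snippets above all swap the user-facing call
// in the same way; a minimal usage sketch, with DeleteIndexRequest chosen merely as a
// representative AcknowledgedRequest and "my-index" as a placeholder index name. Both
// overloads shown here appear verbatim in the delete-index snippet above.

    DeleteIndexRequest request = new DeleteIndexRequest("my-index");
    request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // TimeValue overload
    request.clusterManagerNodeTimeout("1m");                          // String overload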
request.setWaitForActiveShards(2); // <1> @@ -1677,8 +1677,8 @@ public void testSplitIndex() throws Exception { request.timeout("2m"); // <2> // end::split-index-request-timeout // tag::split-index-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::split-index-request-masterTimeout // tag::split-index-request-waitForActiveShards request.setWaitForActiveShards(2); // <1> @@ -1746,8 +1746,8 @@ public void testCloneIndex() throws Exception { request.timeout("2m"); // <2> // end::clone-index-request-timeout // tag::clone-index-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::clone-index-request-masterTimeout // tag::clone-index-request-waitForActiveShards request.setWaitForActiveShards(2); // <1> @@ -2021,8 +2021,8 @@ public void testIndexPutSettings() throws Exception { request.timeout("2m"); // <2> // end::indices-put-settings-request-timeout // tag::indices-put-settings-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::indices-put-settings-request-masterTimeout // tag::indices-put-settings-request-indicesOptions request.indicesOptions(IndicesOptions.lenientExpandOpen()); // <1> @@ -2173,8 +2173,8 @@ public void testPutTemplate() throws Exception { // end::put-template-request-create // tag::put-template-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::put-template-request-masterTimeout request.create(false); // make test happy @@ -2897,8 +2897,8 @@ public void testDeleteTemplate() throws Exception { // end::delete-template-request // tag::delete-template-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::delete-template-request-masterTimeout // tag::delete-template-execute diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/documentation/IngestClientDocumentationIT.java b/client/rest-high-level/src/test/java/org/opensearch/client/documentation/IngestClientDocumentationIT.java index 46417659eddad..5654791347832 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/documentation/IngestClientDocumentationIT.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/documentation/IngestClientDocumentationIT.java @@ -44,7 +44,7 @@ import org.opensearch.action.ingest.SimulatePipelineRequest; import org.opensearch.action.ingest.SimulatePipelineResponse; import org.opensearch.action.ingest.SimulateProcessorResult; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import 
org.opensearch.client.OpenSearchRestHighLevelClientTestCase; import org.opensearch.client.RequestOptions; import org.opensearch.client.RestHighLevelClient; @@ -101,8 +101,8 @@ public void testPutPipeline() throws IOException { // end::put-pipeline-request-timeout // tag::put-pipeline-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::put-pipeline-request-masterTimeout // tag::put-pipeline-execute @@ -169,8 +169,8 @@ public void testGetPipeline() throws IOException { // end::get-pipeline-request // tag::get-pipeline-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::get-pipeline-request-masterTimeout // tag::get-pipeline-execute @@ -244,8 +244,8 @@ public void testDeletePipeline() throws IOException { // end::delete-pipeline-request-timeout // tag::delete-pipeline-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::delete-pipeline-request-masterTimeout // tag::delete-pipeline-execute diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/documentation/SnapshotClientDocumentationIT.java b/client/rest-high-level/src/test/java/org/opensearch/client/documentation/SnapshotClientDocumentationIT.java index 46473402ab69c..c70f5dbade5d3 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/documentation/SnapshotClientDocumentationIT.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/documentation/SnapshotClientDocumentationIT.java @@ -52,7 +52,7 @@ import org.opensearch.action.admin.cluster.snapshots.status.SnapshotsStatusRequest; import org.opensearch.action.admin.cluster.snapshots.status.SnapshotsStatusResponse; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchRestHighLevelClientTestCase; import org.opensearch.client.Request; import org.opensearch.client.RequestOptions; @@ -168,8 +168,8 @@ public void testSnapshotCreateRepository() throws IOException { // end::create-repository-request-type // tag::create-repository-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::create-repository-request-masterTimeout // tag::create-repository-request-timeout request.timeout(TimeValue.timeValueMinutes(1)); // <1> @@ -238,8 +238,8 @@ public void testSnapshotGetRepository() throws IOException { request.local(true); // <1> // end::get-repository-request-local // tag::get-repository-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // 
end::get-repository-request-masterTimeout // tag::get-repository-execute @@ -298,8 +298,8 @@ public void testRestoreSnapshot() throws IOException { // we need to restore as a different index name // tag::restore-snapshot-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::restore-snapshot-request-masterTimeout // tag::restore-snapshot-request-waitForCompletion @@ -395,8 +395,8 @@ public void testSnapshotDeleteRepository() throws IOException { // end::delete-repository-request // tag::delete-repository-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::delete-repository-request-masterTimeout // tag::delete-repository-request-timeout request.timeout(TimeValue.timeValueMinutes(1)); // <1> @@ -454,8 +454,8 @@ public void testSnapshotVerifyRepository() throws IOException { // end::verify-repository-request // tag::verify-repository-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::verify-repository-request-masterTimeout // tag::verify-repository-request-timeout request.timeout(TimeValue.timeValueMinutes(1)); // <1> @@ -544,8 +544,8 @@ public void testSnapshotCreate() throws IOException { // end::create-snapshot-request-includeGlobalState // tag::create-snapshot-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::create-snapshot-request-masterTimeout // tag::create-snapshot-request-waitForCompletion request.waitForCompletion(true); // <1> @@ -622,8 +622,8 @@ public void testSnapshotGetSnapshots() throws IOException { // end::get-snapshots-request-snapshots // tag::get-snapshots-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::get-snapshots-request-masterTimeout // tag::get-snapshots-request-verbose @@ -704,8 +704,8 @@ public void testSnapshotSnapshotsStatus() throws IOException { request.ignoreUnavailable(true); // <1> // end::snapshots-status-request-ignoreUnavailable // tag::snapshots-status-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::snapshots-status-request-masterTimeout // tag::snapshots-status-execute @@ -769,8 +769,8 @@ public void testSnapshotDeleteSnapshot() throws IOException { // end::delete-snapshot-request // tag::delete-snapshot-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + 
request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::delete-snapshot-request-masterTimeout // tag::delete-snapshot-execute diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/documentation/StoredScriptsDocumentationIT.java b/client/rest-high-level/src/test/java/org/opensearch/client/documentation/StoredScriptsDocumentationIT.java index 0d36348a6a96d..11978a5377e1e 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/documentation/StoredScriptsDocumentationIT.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/documentation/StoredScriptsDocumentationIT.java @@ -38,7 +38,7 @@ import org.opensearch.action.admin.cluster.storedscripts.GetStoredScriptRequest; import org.opensearch.action.admin.cluster.storedscripts.GetStoredScriptResponse; import org.opensearch.action.admin.cluster.storedscripts.PutStoredScriptRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchRestHighLevelClientTestCase; import org.opensearch.client.RequestOptions; import org.opensearch.client.RestHighLevelClient; @@ -99,8 +99,8 @@ public void testGetStoredScript() throws Exception { // end::get-stored-script-request // tag::get-stored-script-request-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueSeconds(50)); // <1> - request.masterNodeTimeout("50s"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueSeconds(50)); // <1> + request.clusterManagerNodeTimeout("50s"); // <2> // end::get-stored-script-request-masterTimeout // tag::get-stored-script-execute @@ -162,8 +162,8 @@ public void testDeleteStoredScript() throws Exception { // end::delete-stored-script-request // tag::delete-stored-script-request-masterTimeout - deleteRequest.masterNodeTimeout(TimeValue.timeValueSeconds(50)); // <1> - deleteRequest.masterNodeTimeout("50s"); // <2> + deleteRequest.clusterManagerNodeTimeout(TimeValue.timeValueSeconds(50)); // <1> + deleteRequest.clusterManagerNodeTimeout("50s"); // <2> // end::delete-stored-script-request-masterTimeout // tag::delete-stored-script-request-timeout @@ -234,8 +234,8 @@ public void testPutScript() throws Exception { // end::put-stored-script-timeout // tag::put-stored-script-masterTimeout - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> - request.masterNodeTimeout("1m"); // <2> + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.clusterManagerNodeTimeout("1m"); // <2> // end::put-stored-script-masterTimeout } diff --git a/client/rest-high-level/src/test/java/org/opensearch/client/indices/CloseIndexResponseTests.java b/client/rest-high-level/src/test/java/org/opensearch/client/indices/CloseIndexResponseTests.java index a5c8086118fcd..3fa35f6fffd22 100644 --- a/client/rest-high-level/src/test/java/org/opensearch/client/indices/CloseIndexResponseTests.java +++ b/client/rest-high-level/src/test/java/org/opensearch/client/indices/CloseIndexResponseTests.java @@ -32,8 +32,8 @@ package org.opensearch.client.indices; import org.opensearch.OpenSearchStatusException; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; -import org.opensearch.action.support.clustermanager.ShardsAcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; +import org.opensearch.action.support.master.ShardsAcknowledgedResponse; import 
org.opensearch.client.AbstractResponseTestCase; import org.opensearch.common.bytes.BytesReference; import org.opensearch.common.xcontent.LoggingDeprecationHandler; diff --git a/modules/repository-url/src/internalClusterTest/java/org/opensearch/repositories/url/URLSnapshotRestoreIT.java b/modules/repository-url/src/internalClusterTest/java/org/opensearch/repositories/url/URLSnapshotRestoreIT.java index b819722d59f13..aa274549f3a9b 100644 --- a/modules/repository-url/src/internalClusterTest/java/org/opensearch/repositories/url/URLSnapshotRestoreIT.java +++ b/modules/repository-url/src/internalClusterTest/java/org/opensearch/repositories/url/URLSnapshotRestoreIT.java @@ -35,7 +35,7 @@ import org.opensearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse; import org.opensearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse; import org.opensearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.common.settings.Settings; import org.opensearch.common.unit.ByteSizeUnit; diff --git a/plugins/discovery-azure-classic/src/internalClusterTest/java/org/opensearch/discovery/azure/classic/AzureSimpleTests.java b/plugins/discovery-azure-classic/src/internalClusterTest/java/org/opensearch/discovery/azure/classic/AzureSimpleTests.java index f4268443a707a..f78fb4617d198 100644 --- a/plugins/discovery-azure-classic/src/internalClusterTest/java/org/opensearch/discovery/azure/classic/AzureSimpleTests.java +++ b/plugins/discovery-azure-classic/src/internalClusterTest/java/org/opensearch/discovery/azure/classic/AzureSimpleTests.java @@ -50,7 +50,9 @@ public void testOneNodeShouldRunUsingPrivateIp() { final String node1 = internalCluster().startNode(settings); registerAzureNode(node1); - assertNotNull(client().admin().cluster().prepareState().setMasterNodeTimeout("1s").get().getState().nodes().getMasterNodeId()); + assertNotNull( + client().admin().cluster().prepareState().setClusterManagerNodeTimeout("1s").get().getState().nodes().getMasterNodeId() + ); // We expect having 1 node as part of the cluster, let's test that assertNumberOfNodes(1); @@ -63,7 +65,9 @@ public void testOneNodeShouldRunUsingPublicIp() { final String node1 = internalCluster().startNode(settings); registerAzureNode(node1); - assertNotNull(client().admin().cluster().prepareState().setMasterNodeTimeout("1s").get().getState().nodes().getMasterNodeId()); + assertNotNull( + client().admin().cluster().prepareState().setClusterManagerNodeTimeout("1s").get().getState().nodes().getMasterNodeId() + ); // We expect having 1 node as part of the cluster, let's test that assertNumberOfNodes(1); diff --git a/plugins/discovery-azure-classic/src/internalClusterTest/java/org/opensearch/discovery/azure/classic/AzureTwoStartedNodesTests.java b/plugins/discovery-azure-classic/src/internalClusterTest/java/org/opensearch/discovery/azure/classic/AzureTwoStartedNodesTests.java index d8ea8a91fd21d..f0af35092c8ca 100644 --- a/plugins/discovery-azure-classic/src/internalClusterTest/java/org/opensearch/discovery/azure/classic/AzureTwoStartedNodesTests.java +++ b/plugins/discovery-azure-classic/src/internalClusterTest/java/org/opensearch/discovery/azure/classic/AzureTwoStartedNodesTests.java @@ -53,12 +53,16 @@ public void testTwoNodesShouldRunUsingPrivateOrPublicIp() { logger.info("--> start first node"); final String node1 = 
internalCluster().startNode(settings); registerAzureNode(node1); - assertNotNull(client().admin().cluster().prepareState().setMasterNodeTimeout("1s").get().getState().nodes().getMasterNodeId()); + assertNotNull( + client().admin().cluster().prepareState().setClusterManagerNodeTimeout("1s").get().getState().nodes().getMasterNodeId() + ); logger.info("--> start another node"); final String node2 = internalCluster().startNode(settings); registerAzureNode(node2); - assertNotNull(client().admin().cluster().prepareState().setMasterNodeTimeout("1s").get().getState().nodes().getMasterNodeId()); + assertNotNull( + client().admin().cluster().prepareState().setClusterManagerNodeTimeout("1s").get().getState().nodes().getMasterNodeId() + ); // We expect having 2 nodes as part of the cluster, let's test that assertNumberOfNodes(2); diff --git a/plugins/discovery-ec2/build.gradle b/plugins/discovery-ec2/build.gradle index 0e096958538a4..1766aa14ea9e9 100644 --- a/plugins/discovery-ec2/build.gradle +++ b/plugins/discovery-ec2/build.gradle @@ -38,10 +38,6 @@ opensearchplugin { classname 'org.opensearch.discovery.ec2.Ec2DiscoveryPlugin' } -versions << [ - 'aws': '1.11.749' -] - dependencies { api "com.amazonaws:aws-java-sdk-ec2:${versions.aws}" api "com.amazonaws:aws-java-sdk-core:${versions.aws}" diff --git a/plugins/discovery-ec2/licenses/aws-java-sdk-core-1.11.749.jar.sha1 b/plugins/discovery-ec2/licenses/aws-java-sdk-core-1.11.749.jar.sha1 deleted file mode 100644 index 7bc18d6d4f681..0000000000000 --- a/plugins/discovery-ec2/licenses/aws-java-sdk-core-1.11.749.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -1da5c1549295cfeebc67fc1c7539785a9441755b \ No newline at end of file diff --git a/plugins/discovery-ec2/licenses/aws-java-sdk-core-1.12.247.jar.sha1 b/plugins/discovery-ec2/licenses/aws-java-sdk-core-1.12.247.jar.sha1 new file mode 100644 index 0000000000000..5b3f4a3511769 --- /dev/null +++ b/plugins/discovery-ec2/licenses/aws-java-sdk-core-1.12.247.jar.sha1 @@ -0,0 +1 @@ +70f59d940c965a899f69743ec36a8eb099f539ef \ No newline at end of file diff --git a/plugins/discovery-ec2/licenses/aws-java-sdk-ec2-1.11.749.jar.sha1 b/plugins/discovery-ec2/licenses/aws-java-sdk-ec2-1.11.749.jar.sha1 deleted file mode 100644 index c7c7220005fc3..0000000000000 --- a/plugins/discovery-ec2/licenses/aws-java-sdk-ec2-1.11.749.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -0865e0937c6500acf62ce9c8964eac76a8718f5f \ No newline at end of file diff --git a/plugins/discovery-ec2/licenses/aws-java-sdk-ec2-1.12.247.jar.sha1 b/plugins/discovery-ec2/licenses/aws-java-sdk-ec2-1.12.247.jar.sha1 new file mode 100644 index 0000000000000..03505417f3e26 --- /dev/null +++ b/plugins/discovery-ec2/licenses/aws-java-sdk-ec2-1.12.247.jar.sha1 @@ -0,0 +1 @@ +30120ff6617fb653d525856480d7ba99528d875d \ No newline at end of file diff --git a/plugins/discovery-gce/src/internalClusterTest/java/org/opensearch/discovery/gce/GceDiscoverTests.java b/plugins/discovery-gce/src/internalClusterTest/java/org/opensearch/discovery/gce/GceDiscoverTests.java index d5682940ccfee..de4f267547eb3 100644 --- a/plugins/discovery-gce/src/internalClusterTest/java/org/opensearch/discovery/gce/GceDiscoverTests.java +++ b/plugins/discovery-gce/src/internalClusterTest/java/org/opensearch/discovery/gce/GceDiscoverTests.java @@ -92,7 +92,7 @@ public void testJoin() { ClusterStateResponse clusterStateResponse = client(clusterManagerNode).admin() .cluster() .prepareState() - .setMasterNodeTimeout("1s") + .setClusterManagerNodeTimeout("1s") .clear() .setNodes(true) .get(); @@ -104,7 +104,7 @@ 
public void testJoin() { clusterStateResponse = client(secondNode).admin() .cluster() .prepareState() - .setMasterNodeTimeout("1s") + .setClusterManagerNodeTimeout("1s") .clear() .setNodes(true) .setLocal(true) diff --git a/plugins/mapper-size/src/internalClusterTest/java/org/opensearch/index/mapper/size/SizeMappingIT.java b/plugins/mapper-size/src/internalClusterTest/java/org/opensearch/index/mapper/size/SizeMappingIT.java index 24ec8f0eaf4c5..3a430331167f6 100644 --- a/plugins/mapper-size/src/internalClusterTest/java/org/opensearch/index/mapper/size/SizeMappingIT.java +++ b/plugins/mapper-size/src/internalClusterTest/java/org/opensearch/index/mapper/size/SizeMappingIT.java @@ -33,7 +33,7 @@ import org.opensearch.action.admin.indices.mapping.get.GetMappingsResponse; import org.opensearch.action.get.GetResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.xcontent.XContentBuilder; import org.opensearch.common.xcontent.XContentType; import org.opensearch.plugin.mapper.MapperSizePlugin; diff --git a/plugins/repository-azure/src/internalClusterTest/java/org/opensearch/repositories/azure/AzureStorageCleanupThirdPartyTests.java b/plugins/repository-azure/src/internalClusterTest/java/org/opensearch/repositories/azure/AzureStorageCleanupThirdPartyTests.java index fe4223a5aca87..6d71a65a35a4c 100644 --- a/plugins/repository-azure/src/internalClusterTest/java/org/opensearch/repositories/azure/AzureStorageCleanupThirdPartyTests.java +++ b/plugins/repository-azure/src/internalClusterTest/java/org/opensearch/repositories/azure/AzureStorageCleanupThirdPartyTests.java @@ -42,7 +42,7 @@ import org.junit.AfterClass; import org.opensearch.action.ActionRunnable; import org.opensearch.action.support.PlainActionFuture; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.Strings; import org.opensearch.common.collect.Tuple; import org.opensearch.common.settings.MockSecureSettings; diff --git a/plugins/repository-gcs/src/internalClusterTest/java/org/opensearch/repositories/gcs/GoogleCloudStorageThirdPartyTests.java b/plugins/repository-gcs/src/internalClusterTest/java/org/opensearch/repositories/gcs/GoogleCloudStorageThirdPartyTests.java index f4979c6caaddb..f1b2f78a37380 100644 --- a/plugins/repository-gcs/src/internalClusterTest/java/org/opensearch/repositories/gcs/GoogleCloudStorageThirdPartyTests.java +++ b/plugins/repository-gcs/src/internalClusterTest/java/org/opensearch/repositories/gcs/GoogleCloudStorageThirdPartyTests.java @@ -32,7 +32,7 @@ package org.opensearch.repositories.gcs; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.Strings; import org.opensearch.common.settings.MockSecureSettings; import org.opensearch.common.settings.SecureSettings; diff --git a/plugins/repository-hdfs/src/test/java/org/opensearch/repositories/hdfs/HdfsRepositoryTests.java b/plugins/repository-hdfs/src/test/java/org/opensearch/repositories/hdfs/HdfsRepositoryTests.java index d7209e47bff11..4e12de7cce212 100644 --- a/plugins/repository-hdfs/src/test/java/org/opensearch/repositories/hdfs/HdfsRepositoryTests.java +++ b/plugins/repository-hdfs/src/test/java/org/opensearch/repositories/hdfs/HdfsRepositoryTests.java @@ -34,7 +34,7 @@ import 
com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters; import org.opensearch.action.admin.cluster.repositories.cleanup.CleanupRepositoryResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.settings.MockSecureSettings; import org.opensearch.common.settings.SecureSettings; import org.opensearch.common.settings.Settings; diff --git a/plugins/repository-hdfs/src/test/java/org/opensearch/repositories/hdfs/HdfsTests.java b/plugins/repository-hdfs/src/test/java/org/opensearch/repositories/hdfs/HdfsTests.java index 61a990b4d5525..d46d0b2092d2a 100644 --- a/plugins/repository-hdfs/src/test/java/org/opensearch/repositories/hdfs/HdfsTests.java +++ b/plugins/repository-hdfs/src/test/java/org/opensearch/repositories/hdfs/HdfsTests.java @@ -35,7 +35,7 @@ import org.opensearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse; import org.opensearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.cluster.ClusterState; import org.opensearch.common.settings.Settings; diff --git a/plugins/repository-s3/build.gradle b/plugins/repository-s3/build.gradle index ff6e2148fab37..e207b472ee665 100644 --- a/plugins/repository-s3/build.gradle +++ b/plugins/repository-s3/build.gradle @@ -44,10 +44,6 @@ opensearchplugin { classname 'org.opensearch.repositories.s3.S3RepositoryPlugin' } -versions << [ - 'aws': '1.11.749' -] - dependencies { api "com.amazonaws:aws-java-sdk-s3:${versions.aws}" api "com.amazonaws:aws-java-sdk-core:${versions.aws}" @@ -240,7 +236,7 @@ testClusters.yamlRestTest { setting 's3.client.integration_test_permanent.endpoint', { "${-> fixtureAddress('s3-fixture', 's3-fixture', '80')}" }, IGNORE_VALUE setting 's3.client.integration_test_temporary.endpoint', { "${-> fixtureAddress('s3-fixture', 's3-fixture-with-session-token', '80')}" }, IGNORE_VALUE setting 's3.client.integration_test_ec2.endpoint', { "${-> fixtureAddress('s3-fixture', 's3-fixture-with-ec2', '80')}" }, IGNORE_VALUE - setting 's3.client.integration_test_eks.endpoint', { "${-> fixtureAddress('s3-fixture', 's3-fixture-with-eks', '80')}" }, IGNORE_VALUE + setting 's3.client.integration_test_eks.endpoint', { "${-> fixtureAddress('s3-fixture', 's3-fixture-with-eks', '80')}" }, IGNORE_VALUE setting 's3.client.integration_test_eks.region', { "us-east-2" }, IGNORE_VALUE // to redirect InstanceProfileCredentialsProvider to custom auth point @@ -386,7 +382,8 @@ thirdPartyAudit.ignoreMissingClasses( 'com.amazonaws.services.kms.model.EncryptRequest', 'com.amazonaws.services.kms.model.EncryptResult', 'com.amazonaws.services.kms.model.GenerateDataKeyRequest', - 'com.amazonaws.services.kms.model.GenerateDataKeyResult' + 'com.amazonaws.services.kms.model.GenerateDataKeyResult', + 'com.amazonaws.services.kms.AWSKMSClientBuilder' ) // jarhell with jdk (intentionally, because jaxb was removed from default modules in java 9) diff --git a/plugins/repository-s3/licenses/aws-java-sdk-core-1.11.749.jar.sha1 b/plugins/repository-s3/licenses/aws-java-sdk-core-1.11.749.jar.sha1 deleted file mode 100644 index 7bc18d6d4f681..0000000000000 --- a/plugins/repository-s3/licenses/aws-java-sdk-core-1.11.749.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -1da5c1549295cfeebc67fc1c7539785a9441755b \ No newline at end of 
file diff --git a/plugins/repository-s3/licenses/aws-java-sdk-core-1.12.247.jar.sha1 b/plugins/repository-s3/licenses/aws-java-sdk-core-1.12.247.jar.sha1 new file mode 100644 index 0000000000000..5b3f4a3511769 --- /dev/null +++ b/plugins/repository-s3/licenses/aws-java-sdk-core-1.12.247.jar.sha1 @@ -0,0 +1 @@ +70f59d940c965a899f69743ec36a8eb099f539ef \ No newline at end of file diff --git a/plugins/repository-s3/licenses/aws-java-sdk-s3-1.11.749.jar.sha1 b/plugins/repository-s3/licenses/aws-java-sdk-s3-1.11.749.jar.sha1 deleted file mode 100644 index af794dc59dd7f..0000000000000 --- a/plugins/repository-s3/licenses/aws-java-sdk-s3-1.11.749.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -7d069f82723907ccdbd0c91ef0ac76046f5c9652 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/aws-java-sdk-s3-1.12.247.jar.sha1 b/plugins/repository-s3/licenses/aws-java-sdk-s3-1.12.247.jar.sha1 new file mode 100644 index 0000000000000..2d32399f871b4 --- /dev/null +++ b/plugins/repository-s3/licenses/aws-java-sdk-s3-1.12.247.jar.sha1 @@ -0,0 +1 @@ +648c59d979e2792b4aa8f444a4748abd62a65783 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/aws-java-sdk-sts-1.11.749.jar.sha1 b/plugins/repository-s3/licenses/aws-java-sdk-sts-1.11.749.jar.sha1 deleted file mode 100644 index 29c9a93542058..0000000000000 --- a/plugins/repository-s3/licenses/aws-java-sdk-sts-1.11.749.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -724bd22c0ff41c496469e18f9bea12bdfb2f7540 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/aws-java-sdk-sts-1.12.247.jar.sha1 b/plugins/repository-s3/licenses/aws-java-sdk-sts-1.12.247.jar.sha1 new file mode 100644 index 0000000000000..31ce4a4e6a5cb --- /dev/null +++ b/plugins/repository-s3/licenses/aws-java-sdk-sts-1.12.247.jar.sha1 @@ -0,0 +1 @@ +3e77a7409ccf7ef3c3d342897dd75590147d2ffe \ No newline at end of file diff --git a/plugins/repository-s3/licenses/jmespath-java-1.11.749.jar.sha1 b/plugins/repository-s3/licenses/jmespath-java-1.11.749.jar.sha1 deleted file mode 100644 index 3467802d074c7..0000000000000 --- a/plugins/repository-s3/licenses/jmespath-java-1.11.749.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -778866bc557dba508ee0eab2a0c5bfde468e49e6 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/jmespath-java-1.12.247.jar.sha1 b/plugins/repository-s3/licenses/jmespath-java-1.12.247.jar.sha1 new file mode 100644 index 0000000000000..fd71f57f9a5fc --- /dev/null +++ b/plugins/repository-s3/licenses/jmespath-java-1.12.247.jar.sha1 @@ -0,0 +1 @@ +a1f7acde495f815af705490f2a37b3758299a8e4 \ No newline at end of file diff --git a/plugins/repository-s3/src/internalClusterTest/java/org/opensearch/repositories/s3/S3RepositoryThirdPartyTests.java b/plugins/repository-s3/src/internalClusterTest/java/org/opensearch/repositories/s3/S3RepositoryThirdPartyTests.java index 952d8214cb91f..bc2839d066092 100644 --- a/plugins/repository-s3/src/internalClusterTest/java/org/opensearch/repositories/s3/S3RepositoryThirdPartyTests.java +++ b/plugins/repository-s3/src/internalClusterTest/java/org/opensearch/repositories/s3/S3RepositoryThirdPartyTests.java @@ -31,7 +31,7 @@ package org.opensearch.repositories.s3; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.blobstore.BlobMetadata; import org.opensearch.common.blobstore.BlobPath; import org.opensearch.common.settings.MockSecureSettings; diff --git 
a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/180_percentiles_tdigest_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/180_percentiles_tdigest_metric.yml index 9ed414f6b8439..53d0ed1b2d05f 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/180_percentiles_tdigest_metric.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/180_percentiles_tdigest_metric.yml @@ -44,9 +44,16 @@ setup: string_field: foo --- -"Basic test": +"Basic 2.x test": + + - skip: + version: "3.0.0 -" + features: node_selector + reason: "t-digest 3.2 was interpolating leading to incorrect percentiles" - do: + node_selector: + version: "- 2.9.99" search: rest_total_hits_as_int: true body: @@ -77,7 +84,58 @@ setup: - match: { aggregations.percentiles_double.values.95\.0: 151.0 } - match: { aggregations.percentiles_double.values.99\.0: 151.0 } +--- +"Basic 3.x test": + + - skip: + version: "- 2.9.99" + features: node_selector + reason: "t-digest 3.2 was interpolating leading to incorrect percentiles" + - do: + node_selector: + version: "3.0.0 -" + search: + rest_total_hits_as_int: true + body: + aggs: + percentiles_int: + percentiles: + field: int_field + percentiles_double: + percentiles: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + + - match: { aggregations.percentiles_int.values.1\.0: 1.0 } + - match: { aggregations.percentiles_int.values.5\.0: 1.0 } + - match: { aggregations.percentiles_int.values.25\.0: 51.0 } + - match: { aggregations.percentiles_int.values.50\.0: 101.0 } + - match: { aggregations.percentiles_int.values.75\.0: 151.0 } + - match: { aggregations.percentiles_int.values.95\.0: 151.0 } + - match: { aggregations.percentiles_int.values.99\.0: 151.0 } + + - match: { aggregations.percentiles_double.values.1\.0: 1.0 } + - match: { aggregations.percentiles_double.values.5\.0: 1.0 } + - match: { aggregations.percentiles_double.values.25\.0: 51.0 } + - match: { aggregations.percentiles_double.values.50\.0: 101.0 } + - match: { aggregations.percentiles_double.values.75\.0: 151.0 } + - match: { aggregations.percentiles_double.values.95\.0: 151.0 } + - match: { aggregations.percentiles_double.values.99\.0: 151.0 } + +--- +"Compression test": + + - skip: + version: "- 2.9.99" + features: node_selector + reason: "t-digest 3.2 was interpolating leading to incorrect percentiles" + + - do: + node_selector: + version: "3.0.0 -" search: rest_total_hits_as_int: true body: @@ -93,31 +151,36 @@ setup: tdigest: compression: 200 - - match: { hits.total: 4 } - length: { hits.hits: 4 } - match: { aggregations.percentiles_int.values.1\.0: 1.0 } - match: { aggregations.percentiles_int.values.5\.0: 1.0 } - - match: { aggregations.percentiles_int.values.25\.0: 26.0 } - - match: { aggregations.percentiles_int.values.50\.0: 76.0 } - - match: { aggregations.percentiles_int.values.75\.0: 126.0 } + - match: { aggregations.percentiles_int.values.25\.0: 51.0 } + - match: { aggregations.percentiles_int.values.50\.0: 101.0 } + - match: { aggregations.percentiles_int.values.75\.0: 151.0 } - match: { aggregations.percentiles_int.values.95\.0: 151.0 } - match: { aggregations.percentiles_int.values.99\.0: 151.0 } - match: { aggregations.percentiles_double.values.1\.0: 1.0 } - match: { aggregations.percentiles_double.values.5\.0: 1.0 } - - match: { aggregations.percentiles_double.values.25\.0: 26.0 } - - match: { aggregations.percentiles_double.values.50\.0: 76.0 } - - match: { 
aggregations.percentiles_double.values.75\.0: 126.0 } + - match: { aggregations.percentiles_double.values.25\.0: 51.0 } + - match: { aggregations.percentiles_double.values.50\.0: 101.0 } + - match: { aggregations.percentiles_double.values.75\.0: 151.0 } - match: { aggregations.percentiles_double.values.95\.0: 151.0 } - match: { aggregations.percentiles_double.values.99\.0: 151.0 } - --- "Only aggs test": + - skip: + version: "- 2.9.99" + features: node_selector + reason: "t-digest 3.2 was interpolating leading to incorrect percentiles" + - do: + node_selector: + version: "3.0.0 -" search: rest_total_hits_as_int: true body: @@ -135,26 +198,31 @@ setup: - match: { aggregations.percentiles_int.values.1\.0: 1.0 } - match: { aggregations.percentiles_int.values.5\.0: 1.0 } - - match: { aggregations.percentiles_int.values.25\.0: 26.0 } - - match: { aggregations.percentiles_int.values.50\.0: 76.0 } - - match: { aggregations.percentiles_int.values.75\.0: 126.0 } + - match: { aggregations.percentiles_int.values.25\.0: 51.0 } + - match: { aggregations.percentiles_int.values.50\.0: 101.0 } + - match: { aggregations.percentiles_int.values.75\.0: 151.0 } - match: { aggregations.percentiles_int.values.95\.0: 151.0 } - match: { aggregations.percentiles_int.values.99\.0: 151.0 } - match: { aggregations.percentiles_double.values.1\.0: 1.0 } - match: { aggregations.percentiles_double.values.5\.0: 1.0 } - - match: { aggregations.percentiles_double.values.25\.0: 26.0 } - - match: { aggregations.percentiles_double.values.50\.0: 76.0 } - - match: { aggregations.percentiles_double.values.75\.0: 126.0 } + - match: { aggregations.percentiles_double.values.25\.0: 51.0 } + - match: { aggregations.percentiles_double.values.50\.0: 101.0 } + - match: { aggregations.percentiles_double.values.75\.0: 151.0 } - match: { aggregations.percentiles_double.values.95\.0: 151.0 } - match: { aggregations.percentiles_double.values.99\.0: 151.0 } - - --- "Filtered test": + - skip: + version: "- 2.9.99" + features: node_selector + reason: "t-digest 3.2 was interpolating leading to incorrect percentiles" + - do: + node_selector: + version: "3.0.0 -" search: rest_total_hits_as_int: true body: @@ -177,17 +245,17 @@ setup: - match: { aggregations.percentiles_int.values.1\.0: 51.0 } - match: { aggregations.percentiles_int.values.5\.0: 51.0 } - - match: { aggregations.percentiles_int.values.25\.0: 63.5 } + - match: { aggregations.percentiles_int.values.25\.0: 51.0 } - match: { aggregations.percentiles_int.values.50\.0: 101.0 } - - match: { aggregations.percentiles_int.values.75\.0: 138.5 } + - match: { aggregations.percentiles_int.values.75\.0: 151.0 } - match: { aggregations.percentiles_int.values.95\.0: 151.0 } - match: { aggregations.percentiles_int.values.99\.0: 151.0 } - match: { aggregations.percentiles_double.values.1\.0: 51.0 } - match: { aggregations.percentiles_double.values.5\.0: 51.0 } - - match: { aggregations.percentiles_double.values.25\.0: 63.5 } + - match: { aggregations.percentiles_double.values.25\.0: 51.0 } - match: { aggregations.percentiles_double.values.50\.0: 101.0 } - - match: { aggregations.percentiles_double.values.75\.0: 138.5 } + - match: { aggregations.percentiles_double.values.75\.0: 151.0 } - match: { aggregations.percentiles_double.values.95\.0: 151.0 } - match: { aggregations.percentiles_double.values.99\.0: 151.0 } @@ -234,7 +302,14 @@ setup: --- "Metadata test": + - skip: + version: "- 2.9.99" + features: node_selector + reason: "t-digest 3.2 was interpolating leading to incorrect percentiles" + - do: + 
node_selector: + version: "3.0.0 -" search: rest_total_hits_as_int: true body: @@ -252,9 +327,9 @@ setup: - match: { aggregations.percentiles_int.values.1\.0: 1.0 } - match: { aggregations.percentiles_int.values.5\.0: 1.0 } - - match: { aggregations.percentiles_int.values.25\.0: 26.0 } - - match: { aggregations.percentiles_int.values.50\.0: 76.0 } - - match: { aggregations.percentiles_int.values.75\.0: 126.0 } + - match: { aggregations.percentiles_int.values.25\.0: 51.0 } + - match: { aggregations.percentiles_int.values.50\.0: 101.0 } + - match: { aggregations.percentiles_int.values.75\.0: 151.0 } - match: { aggregations.percentiles_int.values.95\.0: 151.0 } - match: { aggregations.percentiles_int.values.99\.0: 151.0 } @@ -319,7 +394,14 @@ setup: --- "Explicit Percents test": + - skip: + version: "- 2.9.99" + features: node_selector + reason: "t-digest 3.2 was interpolating leading to incorrect percentiles" + - do: + node_selector: + version: "3.0.0 -" search: rest_total_hits_as_int: true body: @@ -338,17 +420,24 @@ setup: - length: { hits.hits: 4 } - match: { aggregations.percentiles_int.values.5\.0: 1.0 } - - match: { aggregations.percentiles_int.values.25\.0: 26.0 } - - match: { aggregations.percentiles_int.values.50\.0: 76.0 } + - match: { aggregations.percentiles_int.values.25\.0: 51.0 } + - match: { aggregations.percentiles_int.values.50\.0: 101.0 } - match: { aggregations.percentiles_double.values.5\.0: 1.0 } - - match: { aggregations.percentiles_double.values.25\.0: 26.0 } - - match: { aggregations.percentiles_double.values.50\.0: 76.0 } + - match: { aggregations.percentiles_double.values.25\.0: 51.0 } + - match: { aggregations.percentiles_double.values.50\.0: 101.0 } --- "Non-keyed test": + - skip: + version: "- 2.9.99" + features: node_selector + reason: "t-digest 3.2 was interpolating leading to incorrect percentiles" + - do: + node_selector: + version: "3.0.0 -" search: rest_total_hits_as_int: true body: @@ -366,6 +455,6 @@ setup: - match: { aggregations.percentiles_int.values.0.key: 5.0 } - match: { aggregations.percentiles_int.values.0.value: 1.0 } - match: { aggregations.percentiles_int.values.1.key: 25.0 } - - match: { aggregations.percentiles_int.values.1.value: 26.0 } + - match: { aggregations.percentiles_int.values.1.value: 51.0 } - match: { aggregations.percentiles_int.values.2.key: 50.0 } - - match: { aggregations.percentiles_int.values.2.value: 76.0 } + - match: { aggregations.percentiles_int.values.2.value: 101.0 } diff --git a/server/build.gradle b/server/build.gradle index 4490b2ea170cf..9d9d12e798eab 100644 --- a/server/build.gradle +++ b/server/build.gradle @@ -119,7 +119,7 @@ dependencies { api "joda-time:joda-time:${versions.joda}" // percentiles aggregation - api 'com.tdunning:t-digest:3.2' + api 'com.tdunning:t-digest:3.3' // precentil ranks aggregation api 'org.hdrhistogram:HdrHistogram:2.1.12' diff --git a/server/licenses/t-digest-3.2.jar.sha1 b/server/licenses/t-digest-3.2.jar.sha1 deleted file mode 100644 index de6e848545f38..0000000000000 --- a/server/licenses/t-digest-3.2.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -2ab94758b0276a8a26102adf8d528cf6d0567b9a \ No newline at end of file diff --git a/server/licenses/t-digest-3.3.jar.sha1 b/server/licenses/t-digest-3.3.jar.sha1 new file mode 100644 index 0000000000000..79319da60ead6 --- /dev/null +++ b/server/licenses/t-digest-3.3.jar.sha1 @@ -0,0 +1 @@ +5e96c4fd7d63b05828cf5ef41da20649195b1b78 \ No newline at end of file diff --git 
a/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/state/TransportClusterStateActionDisruptionIT.java b/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/state/TransportClusterStateActionDisruptionIT.java index b7538f6752ec4..8720de848bf14 100644 --- a/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/state/TransportClusterStateActionDisruptionIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/state/TransportClusterStateActionDisruptionIT.java @@ -75,7 +75,7 @@ public void testNonLocalRequestAlwaysFindsClusterManager() throws Exception { .prepareState() .clear() .setNodes(true) - .setMasterNodeTimeout("100ms"); + .setClusterManagerNodeTimeout("100ms"); final ClusterStateResponse clusterStateResponse; try { clusterStateResponse = clusterStateRequestBuilder.get(); @@ -95,7 +95,7 @@ public void testLocalRequestAlwaysSucceeds() throws Exception { .clear() .setLocal(true) .setNodes(true) - .setMasterNodeTimeout("100ms") + .setClusterManagerNodeTimeout("100ms") .get() .getState() .nodes(); @@ -123,7 +123,7 @@ public void testNonLocalRequestAlwaysFindsClusterManagerAndWaitsForMetadata() th .clear() .setNodes(true) .setMetadata(true) - .setMasterNodeTimeout(TimeValue.timeValueMillis(100)) + .setClusterManagerNodeTimeout(TimeValue.timeValueMillis(100)) .setWaitForTimeOut(TimeValue.timeValueMillis(100)) .setWaitForMetadataVersion(waitForMetadataVersion); final ClusterStateResponse clusterStateResponse; @@ -156,7 +156,7 @@ public void testLocalRequestWaitsForMetadata() throws Exception { .setLocal(true) .setMetadata(true) .setWaitForMetadataVersion(waitForMetadataVersion) - .setMasterNodeTimeout(TimeValue.timeValueMillis(100)) + .setClusterManagerNodeTimeout(TimeValue.timeValueMillis(100)) .setWaitForTimeOut(TimeValue.timeValueMillis(100)) .get(); if (clusterStateResponse.isWaitForTimedOut() == false) { diff --git a/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/CreateIndexIT.java b/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/CreateIndexIT.java index e772583697cb9..3ef2a63c7d0ac 100644 --- a/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/CreateIndexIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/CreateIndexIT.java @@ -40,7 +40,7 @@ import org.opensearch.action.search.SearchResponse; import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.metadata.MappingMetadata; diff --git a/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/datastream/DataStreamTestCase.java b/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/datastream/DataStreamTestCase.java index 8f2bdbdcc5973..7b0d917504a2f 100644 --- a/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/datastream/DataStreamTestCase.java +++ b/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/datastream/DataStreamTestCase.java @@ -12,7 +12,7 @@ import org.opensearch.action.admin.indices.rollover.RolloverResponse; import org.opensearch.action.admin.indices.template.delete.DeleteComposableIndexTemplateAction; import 
org.opensearch.action.admin.indices.template.put.PutComposableIndexTemplateAction; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.metadata.ComposableIndexTemplate; import org.opensearch.cluster.metadata.DataStream; import org.opensearch.cluster.metadata.Template; diff --git a/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/exists/IndicesExistsIT.java b/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/exists/IndicesExistsIT.java index 810d8b9f2d226..7aade84fdef7f 100644 --- a/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/exists/IndicesExistsIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/exists/IndicesExistsIT.java @@ -53,7 +53,7 @@ public void testIndexExistsWithBlocksInPlace() throws IOException { String node = internalCluster().startNode(settings); assertRequestBuilderThrows( - client(node).admin().indices().prepareExists("test").setMasterNodeTimeout(TimeValue.timeValueSeconds(0)), + client(node).admin().indices().prepareExists("test").setClusterManagerNodeTimeout(TimeValue.timeValueSeconds(0)), MasterNotDiscoveredException.class ); diff --git a/server/src/internalClusterTest/java/org/opensearch/action/bulk/BulkIntegrationIT.java b/server/src/internalClusterTest/java/org/opensearch/action/bulk/BulkIntegrationIT.java index 93f75e3918391..e2a1363f163da 100644 --- a/server/src/internalClusterTest/java/org/opensearch/action/bulk/BulkIntegrationIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/action/bulk/BulkIntegrationIT.java @@ -41,7 +41,7 @@ import org.opensearch.action.index.IndexRequest; import org.opensearch.action.index.IndexResponse; import org.opensearch.action.ingest.PutPipelineRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.replication.ReplicationRequest; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.common.bytes.BytesReference; diff --git a/server/src/internalClusterTest/java/org/opensearch/aliases/IndexAliasesIT.java b/server/src/internalClusterTest/java/org/opensearch/aliases/IndexAliasesIT.java index 46a5dc421fbb6..574046509de75 100644 --- a/server/src/internalClusterTest/java/org/opensearch/aliases/IndexAliasesIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/aliases/IndexAliasesIT.java @@ -42,7 +42,7 @@ import org.opensearch.action.search.SearchResponse; import org.opensearch.action.support.IndicesOptions; import org.opensearch.action.support.WriteRequest.RefreshPolicy; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.metadata.AliasMetadata; import org.opensearch.cluster.metadata.IndexAbstraction; diff --git a/server/src/internalClusterTest/java/org/opensearch/blocks/SimpleBlocksIT.java b/server/src/internalClusterTest/java/org/opensearch/blocks/SimpleBlocksIT.java index f1f5260f8f2f0..8ede3e25b2e1a 100644 --- a/server/src/internalClusterTest/java/org/opensearch/blocks/SimpleBlocksIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/blocks/SimpleBlocksIT.java @@ -42,7 +42,7 @@ import org.opensearch.action.index.IndexRequestBuilder; import 
org.opensearch.action.index.IndexResponse; import org.opensearch.action.support.ActiveShardCount; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; import org.opensearch.cluster.block.ClusterBlockLevel; diff --git a/server/src/internalClusterTest/java/org/opensearch/cluster/ClusterHealthIT.java b/server/src/internalClusterTest/java/org/opensearch/cluster/ClusterHealthIT.java index 5381dcfe4bdd2..14eaeb1e6dfcf 100644 --- a/server/src/internalClusterTest/java/org/opensearch/cluster/ClusterHealthIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/cluster/ClusterHealthIT.java @@ -374,7 +374,7 @@ public void testHealthOnClusterManagerFailover() throws Exception { .prepareHealth() .setWaitForEvents(Priority.LANGUID) .setWaitForGreenStatus() - .setMasterNodeTimeout(TimeValue.timeValueMinutes(2)) + .setClusterManagerNodeTimeout(TimeValue.timeValueMinutes(2)) .execute() ); internalCluster().restartNode(internalCluster().getMasterName(), InternalTestCluster.EMPTY_CALLBACK); diff --git a/server/src/internalClusterTest/java/org/opensearch/cluster/SpecificClusterManagerNodesIT.java b/server/src/internalClusterTest/java/org/opensearch/cluster/SpecificClusterManagerNodesIT.java index a58a195939db0..2f81299d76db9 100644 --- a/server/src/internalClusterTest/java/org/opensearch/cluster/SpecificClusterManagerNodesIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/cluster/SpecificClusterManagerNodesIT.java @@ -64,7 +64,7 @@ public void testSimpleOnlyClusterManagerNodeElection() throws IOException { client().admin() .cluster() .prepareState() - .setMasterNodeTimeout("100ms") + .setClusterManagerNodeTimeout("100ms") .execute() .actionGet() .getState() @@ -114,7 +114,7 @@ public void testSimpleOnlyClusterManagerNodeElection() throws IOException { client().admin() .cluster() .prepareState() - .setMasterNodeTimeout("100ms") + .setClusterManagerNodeTimeout("100ms") .execute() .actionGet() .getState() @@ -168,7 +168,7 @@ public void testElectOnlyBetweenClusterManagerNodes() throws Exception { client().admin() .cluster() .prepareState() - .setMasterNodeTimeout("100ms") + .setClusterManagerNodeTimeout("100ms") .execute() .actionGet() .getState() diff --git a/server/src/internalClusterTest/java/org/opensearch/cluster/coordination/RareClusterStateIT.java b/server/src/internalClusterTest/java/org/opensearch/cluster/coordination/RareClusterStateIT.java index f5273803fa716..61b186c951ce8 100644 --- a/server/src/internalClusterTest/java/org/opensearch/cluster/coordination/RareClusterStateIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/cluster/coordination/RareClusterStateIT.java @@ -40,7 +40,7 @@ import org.opensearch.action.ActionRequestBuilder; import org.opensearch.action.ActionResponse; import org.opensearch.action.index.IndexResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ClusterStateUpdateTask; import org.opensearch.cluster.block.ClusterBlocks; diff --git a/server/src/internalClusterTest/java/org/opensearch/cluster/shards/ClusterShardLimitIT.java b/server/src/internalClusterTest/java/org/opensearch/cluster/shards/ClusterShardLimitIT.java index 1259a011147b8..a92849a077376 100644 --- 
a/server/src/internalClusterTest/java/org/opensearch/cluster/shards/ClusterShardLimitIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/cluster/shards/ClusterShardLimitIT.java @@ -38,7 +38,7 @@ import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsResponse; import org.opensearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse; import org.opensearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.metadata.Metadata; diff --git a/server/src/internalClusterTest/java/org/opensearch/index/seqno/RetentionLeaseIT.java b/server/src/internalClusterTest/java/org/opensearch/index/seqno/RetentionLeaseIT.java index df62797e1194d..ed6074b39c8a7 100644 --- a/server/src/internalClusterTest/java/org/opensearch/index/seqno/RetentionLeaseIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/index/seqno/RetentionLeaseIT.java @@ -34,7 +34,7 @@ import org.opensearch.OpenSearchException; import org.opensearch.action.ActionListener; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.replication.ReplicationResponse; import org.opensearch.cluster.node.DiscoveryNode; import org.opensearch.cluster.routing.ShardRouting; diff --git a/server/src/internalClusterTest/java/org/opensearch/indices/IndicesOptionsIntegrationIT.java b/server/src/internalClusterTest/java/org/opensearch/indices/IndicesOptionsIntegrationIT.java index 2504e676acf41..1f3d865811939 100644 --- a/server/src/internalClusterTest/java/org/opensearch/indices/IndicesOptionsIntegrationIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/indices/IndicesOptionsIntegrationIT.java @@ -51,7 +51,7 @@ import org.opensearch.action.search.SearchRequestBuilder; import org.opensearch.action.search.SearchResponse; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.Strings; import org.opensearch.common.settings.Setting; import org.opensearch.common.settings.Setting.Property; diff --git a/server/src/internalClusterTest/java/org/opensearch/indices/mapping/UpdateMappingIntegrationIT.java b/server/src/internalClusterTest/java/org/opensearch/indices/mapping/UpdateMappingIntegrationIT.java index 4e6c6519c2055..da3dcdc6b750e 100644 --- a/server/src/internalClusterTest/java/org/opensearch/indices/mapping/UpdateMappingIntegrationIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/indices/mapping/UpdateMappingIntegrationIT.java @@ -36,7 +36,7 @@ import org.opensearch.action.admin.indices.refresh.RefreshResponse; import org.opensearch.action.index.IndexRequestBuilder; import org.opensearch.action.search.SearchResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.cluster.action.index.MappingUpdatedAction; import org.opensearch.cluster.metadata.MappingMetadata; @@ -306,7 +306,7 @@ public void testUpdateMappingConcurrently() throws Throwable { .endObject() .endObject() ) - 
.setMasterNodeTimeout(TimeValue.timeValueMinutes(5)) + .setClusterManagerNodeTimeout(TimeValue.timeValueMinutes(5)) .get(); assertThat(response.isAcknowledged(), equalTo(true)); diff --git a/server/src/internalClusterTest/java/org/opensearch/indices/state/CloseWhileRelocatingShardsIT.java b/server/src/internalClusterTest/java/org/opensearch/indices/state/CloseWhileRelocatingShardsIT.java index 11587d1232ec1..3d70622e122c0 100644 --- a/server/src/internalClusterTest/java/org/opensearch/indices/state/CloseWhileRelocatingShardsIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/indices/state/CloseWhileRelocatingShardsIT.java @@ -33,7 +33,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.opensearch.action.admin.cluster.reroute.ClusterRerouteRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.node.DiscoveryNode; import org.opensearch.cluster.routing.IndexRoutingTable; diff --git a/server/src/internalClusterTest/java/org/opensearch/indices/state/OpenCloseIndexIT.java b/server/src/internalClusterTest/java/org/opensearch/indices/state/OpenCloseIndexIT.java index df5372d65fda3..ca1e1399f8fdc 100644 --- a/server/src/internalClusterTest/java/org/opensearch/indices/state/OpenCloseIndexIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/indices/state/OpenCloseIndexIT.java @@ -41,7 +41,7 @@ import org.opensearch.action.search.SearchResponse; import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.common.Strings; diff --git a/server/src/internalClusterTest/java/org/opensearch/ingest/IngestClientIT.java b/server/src/internalClusterTest/java/org/opensearch/ingest/IngestClientIT.java index fbfb4c3c3479d..404b13aae5b9c 100644 --- a/server/src/internalClusterTest/java/org/opensearch/ingest/IngestClientIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/ingest/IngestClientIT.java @@ -48,7 +48,7 @@ import org.opensearch.action.ingest.SimulateDocumentBaseResult; import org.opensearch.action.ingest.SimulatePipelineRequest; import org.opensearch.action.ingest.SimulatePipelineResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.update.UpdateRequest; import org.opensearch.client.Requests; import org.opensearch.common.bytes.BytesReference; diff --git a/server/src/internalClusterTest/java/org/opensearch/ingest/IngestProcessorNotInstalledOnAllNodesIT.java b/server/src/internalClusterTest/java/org/opensearch/ingest/IngestProcessorNotInstalledOnAllNodesIT.java index 585e4755a54ad..a615cceffb5df 100644 --- a/server/src/internalClusterTest/java/org/opensearch/ingest/IngestProcessorNotInstalledOnAllNodesIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/ingest/IngestProcessorNotInstalledOnAllNodesIT.java @@ -33,7 +33,7 @@ package org.opensearch.ingest; import org.opensearch.OpenSearchParseException; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import 
org.opensearch.common.bytes.BytesReference; import org.opensearch.common.xcontent.XContentType; import org.opensearch.node.NodeService; diff --git a/server/src/internalClusterTest/java/org/opensearch/search/suggest/CompletionSuggestSearchIT.java b/server/src/internalClusterTest/java/org/opensearch/search/suggest/CompletionSuggestSearchIT.java index 0fb856efdda1e..690564fe1cac8 100644 --- a/server/src/internalClusterTest/java/org/opensearch/search/suggest/CompletionSuggestSearchIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/search/suggest/CompletionSuggestSearchIT.java @@ -42,7 +42,7 @@ import org.opensearch.action.index.IndexRequestBuilder; import org.opensearch.action.search.SearchPhaseExecutionException; import org.opensearch.action.search.SearchResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.FieldMemoryStats; import org.opensearch.common.settings.Settings; import org.opensearch.common.unit.Fuzziness; diff --git a/server/src/internalClusterTest/java/org/opensearch/snapshots/CloneSnapshotIT.java b/server/src/internalClusterTest/java/org/opensearch/snapshots/CloneSnapshotIT.java index 147e0e98e5b33..d5f36608941d5 100644 --- a/server/src/internalClusterTest/java/org/opensearch/snapshots/CloneSnapshotIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/snapshots/CloneSnapshotIT.java @@ -35,7 +35,7 @@ import org.opensearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse; import org.opensearch.action.admin.cluster.snapshots.status.SnapshotIndexStatus; import org.opensearch.action.admin.cluster.snapshots.status.SnapshotStatus; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.cluster.SnapshotsInProgress; import org.opensearch.common.unit.TimeValue; diff --git a/server/src/internalClusterTest/java/org/opensearch/snapshots/ConcurrentSnapshotsIT.java b/server/src/internalClusterTest/java/org/opensearch/snapshots/ConcurrentSnapshotsIT.java index 08059b49213ee..04ec3f027f908 100644 --- a/server/src/internalClusterTest/java/org/opensearch/snapshots/ConcurrentSnapshotsIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/snapshots/ConcurrentSnapshotsIT.java @@ -43,7 +43,7 @@ import org.opensearch.action.admin.cluster.snapshots.status.SnapshotsStatusResponse; import org.opensearch.action.support.GroupedActionListener; import org.opensearch.action.support.PlainActionFuture; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.SnapshotDeletionsInProgress; import org.opensearch.cluster.SnapshotsInProgress; diff --git a/server/src/internalClusterTest/java/org/opensearch/snapshots/DedicatedClusterSnapshotRestoreIT.java b/server/src/internalClusterTest/java/org/opensearch/snapshots/DedicatedClusterSnapshotRestoreIT.java index 29b58eab9b865..2eca8555e1388 100644 --- a/server/src/internalClusterTest/java/org/opensearch/snapshots/DedicatedClusterSnapshotRestoreIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/snapshots/DedicatedClusterSnapshotRestoreIT.java @@ -48,7 +48,7 @@ import org.opensearch.action.index.IndexRequestBuilder; import org.opensearch.action.support.ActiveShardCount; import 
org.opensearch.action.support.PlainActionFuture; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.client.node.NodeClient; import org.opensearch.cluster.ClusterState; diff --git a/server/src/internalClusterTest/java/org/opensearch/snapshots/RepositoriesIT.java b/server/src/internalClusterTest/java/org/opensearch/snapshots/RepositoriesIT.java index 27aeda1262db6..e72110f4c4efd 100644 --- a/server/src/internalClusterTest/java/org/opensearch/snapshots/RepositoriesIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/snapshots/RepositoriesIT.java @@ -35,7 +35,7 @@ import org.opensearch.action.admin.cluster.repositories.get.GetRepositoriesResponse; import org.opensearch.action.admin.cluster.repositories.verify.VerifyRepositoryResponse; import org.opensearch.action.admin.cluster.state.ClusterStateResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.cluster.metadata.Metadata; import org.opensearch.cluster.metadata.RepositoriesMetadata; diff --git a/server/src/internalClusterTest/java/org/opensearch/snapshots/SharedClusterSnapshotRestoreIT.java b/server/src/internalClusterTest/java/org/opensearch/snapshots/SharedClusterSnapshotRestoreIT.java index 88fcd075a563f..fa04bfbf4e959 100644 --- a/server/src/internalClusterTest/java/org/opensearch/snapshots/SharedClusterSnapshotRestoreIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/snapshots/SharedClusterSnapshotRestoreIT.java @@ -588,7 +588,7 @@ public void testDataFileCorruptionDuringRestore() throws Exception { RestoreSnapshotResponse restoreSnapshotResponse = client.admin() .cluster() .prepareRestoreSnapshot("test-repo", "test-snap") - .setMasterNodeTimeout("30s") + .setClusterManagerNodeTimeout("30s") .setWaitForCompletion(true) .execute() .actionGet(); diff --git a/server/src/main/java/org/opensearch/OpenSearchException.java b/server/src/main/java/org/opensearch/OpenSearchException.java index a6a12d7ebb4f7..2de25aa1cd6dc 100644 --- a/server/src/main/java/org/opensearch/OpenSearchException.java +++ b/server/src/main/java/org/opensearch/OpenSearchException.java @@ -67,6 +67,7 @@ import static java.util.Collections.emptyMap; import static java.util.Collections.singletonMap; import static java.util.Collections.unmodifiableMap; +import static org.opensearch.Version.V_2_1_0; import static org.opensearch.cluster.metadata.IndexMetadata.INDEX_UUID_NA_VALUE; import static org.opensearch.common.xcontent.XContentParserUtils.ensureExpectedToken; import static org.opensearch.common.xcontent.XContentParserUtils.ensureFieldName; @@ -1594,6 +1595,12 @@ private enum OpenSearchExceptionHandle { org.opensearch.transport.NoSeedNodeLeftException::new, 160, LegacyESVersion.V_7_10_0 + ), + REPLICATION_FAILED_EXCEPTION( + org.opensearch.indices.replication.common.ReplicationFailedException.class, + org.opensearch.indices.replication.common.ReplicationFailedException::new, + 161, + V_2_1_0 ); final Class exceptionClass; diff --git a/server/src/main/java/org/opensearch/Version.java b/server/src/main/java/org/opensearch/Version.java index 1672378fb8225..a43e9c3220741 100644 --- a/server/src/main/java/org/opensearch/Version.java +++ b/server/src/main/java/org/opensearch/Version.java @@ -91,7 +91,8 @@ public class Version implements Comparable, 
ToXContentFragment { public static final Version V_2_0_0 = new Version(2000099, org.apache.lucene.util.Version.LUCENE_9_1_0); public static final Version V_2_0_1 = new Version(2000199, org.apache.lucene.util.Version.LUCENE_9_1_0); public static final Version V_2_0_2 = new Version(2000299, org.apache.lucene.util.Version.LUCENE_9_1_0); - public static final Version V_2_1_0 = new Version(2010099, org.apache.lucene.util.Version.LUCENE_9_3_0); + public static final Version V_2_1_0 = new Version(2010099, org.apache.lucene.util.Version.LUCENE_9_2_0); + public static final Version V_2_2_0 = new Version(2020099, org.apache.lucene.util.Version.LUCENE_9_3_0); public static final Version V_3_0_0 = new Version(3000099, org.apache.lucene.util.Version.LUCENE_9_3_0); public static final Version CURRENT = V_3_0_0; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java index 16721d5a9ec07..7060ac43af7f9 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java @@ -125,7 +125,7 @@ protected ClusterBlockException checkBlock(ClusterAllocationExplainRequest reque } @Override - protected void masterOperation( + protected void clusterManagerOperation( final ClusterAllocationExplainRequest request, final ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/configuration/TransportAddVotingConfigExclusionsAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/configuration/TransportAddVotingConfigExclusionsAction.java index ab72ce964668f..d0f5e8f198809 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/configuration/TransportAddVotingConfigExclusionsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/configuration/TransportAddVotingConfigExclusionsAction.java @@ -126,7 +126,7 @@ protected AddVotingConfigExclusionsResponse read(StreamInput in) throws IOExcept } @Override - protected void masterOperation( + protected void clusterManagerOperation( AddVotingConfigExclusionsRequest request, ClusterState state, ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/configuration/TransportClearVotingConfigExclusionsAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/configuration/TransportClearVotingConfigExclusionsAction.java index 3a9da6cebef53..1fc02db4309b1 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/configuration/TransportClearVotingConfigExclusionsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/configuration/TransportClearVotingConfigExclusionsAction.java @@ -101,7 +101,7 @@ protected ClearVotingConfigExclusionsResponse read(StreamInput in) throws IOExce } @Override - protected void masterOperation( + protected void clusterManagerOperation( ClearVotingConfigExclusionsRequest request, ClusterState initialState, ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/health/ClusterHealthRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/health/ClusterHealthRequest.java index 8567694bd3880..1dedf481dec56 100644 --- 
a/server/src/main/java/org/opensearch/action/admin/cluster/health/ClusterHealthRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/health/ClusterHealthRequest.java @@ -159,8 +159,8 @@ public TimeValue timeout() { public ClusterHealthRequest timeout(TimeValue timeout) { this.timeout = timeout; - if (masterNodeTimeout == DEFAULT_MASTER_NODE_TIMEOUT) { - masterNodeTimeout = timeout; + if (clusterManagerNodeTimeout == DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT) { + clusterManagerNodeTimeout = timeout; } return this; } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/health/TransportClusterHealthAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/health/TransportClusterHealthAction.java index 6120317dfeace..1cfd7fa090db6 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/health/TransportClusterHealthAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/health/TransportClusterHealthAction.java @@ -118,14 +118,17 @@ protected ClusterBlockException checkBlock(ClusterHealthRequest request, Cluster } @Override - protected final void masterOperation(ClusterHealthRequest request, ClusterState state, ActionListener listener) - throws Exception { + protected final void clusterManagerOperation( + ClusterHealthRequest request, + ClusterState state, + ActionListener listener + ) throws Exception { logger.warn("attempt to execute a cluster health operation without a task"); throw new UnsupportedOperationException("task parameter is required for this operation"); } @Override - protected void masterOperation( + protected void clusterManagerOperation( final Task task, final ClusterHealthRequest request, final ClusterState unusedState, diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/cleanup/CleanupRepositoryRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/cleanup/CleanupRepositoryRequest.java index 852ef9e2b173b..0f265681cd241 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/cleanup/CleanupRepositoryRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/cleanup/CleanupRepositoryRequest.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.cluster.repositories.cleanup; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/cleanup/TransportCleanupRepositoryAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/cleanup/TransportCleanupRepositoryAction.java index 25513e6c9d7da..74c79ba7107f5 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/cleanup/TransportCleanupRepositoryAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/cleanup/TransportCleanupRepositoryAction.java @@ -174,7 +174,7 @@ protected CleanupRepositoryResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation( + protected void clusterManagerOperation( CleanupRepositoryRequest request, ClusterState state, ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryAction.java 
b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryAction.java index 5f17afe2abf76..2031e4f7a716f 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.repositories.delete; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Unregister repository action diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryRequest.java index 2e28a3fd4f41d..a3f4bb768c649 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryRequest.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.repositories.delete; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryRequestBuilder.java index f2fcb0bd8857c..ffef8d5b41979 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/DeleteRepositoryRequestBuilder.java @@ -32,8 +32,8 @@ package org.opensearch.action.admin.cluster.repositories.delete; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchClient; /** diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/TransportDeleteRepositoryAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/TransportDeleteRepositoryAction.java index 6ce7f411e7ef4..08e3bc6df0d83 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/TransportDeleteRepositoryAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/delete/TransportDeleteRepositoryAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -95,7 +95,7 @@ protected ClusterBlockException 
checkBlock(DeleteRepositoryRequest request, Clus } @Override - protected void masterOperation( + protected void clusterManagerOperation( final DeleteRepositoryRequest request, ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java index 6e61752c78656..de942ef284f3b 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java @@ -99,7 +99,7 @@ protected ClusterBlockException checkBlock(GetRepositoriesRequest request, Clust } @Override - protected void masterOperation( + protected void clusterManagerOperation( final GetRepositoriesRequest request, ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryAction.java index 9e56d1dfb3560..c2f90d869d873 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.repositories.put; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Register repository action diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java index 8ab8d40936c67..1bdc8e024447d 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.repositories.put; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; import org.opensearch.common.settings.Settings; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java index bcf6aeceebedd..6e1b2795b6375 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java @@ -32,8 +32,8 @@ package org.opensearch.action.admin.cluster.repositories.put; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedResponse; 
import org.opensearch.client.OpenSearchClient; import org.opensearch.common.settings.Settings; import org.opensearch.common.xcontent.XContentType; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/TransportPutRepositoryAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/TransportPutRepositoryAction.java index 1f4603ab87070..6a5be14df93fd 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/TransportPutRepositoryAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/put/TransportPutRepositoryAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -95,7 +95,7 @@ protected ClusterBlockException checkBlock(PutRepositoryRequest request, Cluster } @Override - protected void masterOperation( + protected void clusterManagerOperation( final PutRepositoryRequest request, ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/verify/TransportVerifyRepositoryAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/verify/TransportVerifyRepositoryAction.java index a673f34058a83..5215078f52d3b 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/verify/TransportVerifyRepositoryAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/verify/TransportVerifyRepositoryAction.java @@ -95,7 +95,7 @@ protected ClusterBlockException checkBlock(VerifyRepositoryRequest request, Clus } @Override - protected void masterOperation( + protected void clusterManagerOperation( final VerifyRepositoryRequest request, ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/verify/VerifyRepositoryRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/verify/VerifyRepositoryRequest.java index 3cd28e9a05206..001030f6a67f5 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/verify/VerifyRepositoryRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/verify/VerifyRepositoryRequest.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.repositories.verify; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequest.java index ad50b7c44aec4..806fa80691202 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequest.java @@ -33,7 +33,7 @@ package 
org.opensearch.action.admin.cluster.reroute; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.cluster.routing.allocation.command.AllocationCommand; import org.opensearch.cluster.routing.allocation.command.AllocationCommands; import org.opensearch.common.io.stream.StreamInput; @@ -163,12 +163,12 @@ public boolean equals(Object obj) { && Objects.equals(explain, other.explain) && Objects.equals(timeout, other.timeout) && Objects.equals(retryFailed, other.retryFailed) - && Objects.equals(masterNodeTimeout, other.masterNodeTimeout); + && Objects.equals(clusterManagerNodeTimeout, other.clusterManagerNodeTimeout); } @Override public int hashCode() { // Override equals and hashCode for testing - return Objects.hash(commands, dryRun, explain, timeout, retryFailed, masterNodeTimeout); + return Objects.hash(commands, dryRun, explain, timeout, retryFailed, clusterManagerNodeTimeout); } } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequestBuilder.java index 30eb0a4f36b3a..01d52cb43320d 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequestBuilder.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.cluster.reroute; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; import org.opensearch.client.OpenSearchClient; import org.opensearch.cluster.routing.allocation.command.AllocationCommand; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteResponse.java b/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteResponse.java index 9f0609a77b1c6..dcddc98bdc43a 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteResponse.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteResponse.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.cluster.reroute; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.routing.allocation.RoutingExplanations; import org.opensearch.common.io.stream.StreamInput; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java index 5080ce2c0fd67..3e5ebdd6a17d3 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java @@ -119,7 +119,7 @@ protected ClusterRerouteResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation( + protected void clusterManagerOperation( final ClusterRerouteRequest request, final ClusterState state, final ActionListener listener diff --git 
a/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java index 50ca3ee204797..f3f7db03ac67e 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.settings; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.ParseField; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java index 2978b27d726db..4d08c94f78b6a 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.cluster.settings; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; import org.opensearch.client.OpenSearchClient; import org.opensearch.common.settings.Settings; import org.opensearch.common.xcontent.XContentType; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsResponse.java b/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsResponse.java index f7a66572fb174..a4edd1d99148a 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsResponse.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/settings/ClusterUpdateSettingsResponse.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.cluster.settings; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.ParseField; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java index 0799a8bd22b45..d3e7df6e16dac 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java @@ -124,7 +124,7 @@ protected ClusterUpdateSettingsResponse read(StreamInput in) throws IOException } @Override - protected void masterOperation( + protected void clusterManagerOperation( final ClusterUpdateSettingsRequest request, final ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java 
b/server/src/main/java/org/opensearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java index ae2d2aeb827ba..2f7c194e0acd7 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java @@ -108,7 +108,7 @@ protected ClusterSearchShardsResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation( + protected void clusterManagerOperation( final ClusterSearchShardsRequest request, final ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/CloneSnapshotAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/CloneSnapshotAction.java index 189b6aa7b7544..c6fe102544a7e 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/CloneSnapshotAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/CloneSnapshotAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.snapshots.clone; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action for cloning a snapshot diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/CloneSnapshotRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/CloneSnapshotRequestBuilder.java index 14e87bd622cf2..a9472937040e9 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/CloneSnapshotRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/CloneSnapshotRequestBuilder.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionType; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeOperationRequestBuilder; import org.opensearch.client.OpenSearchClient; import org.opensearch.common.Strings; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/TransportCloneSnapshotAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/TransportCloneSnapshotAction.java index c1946792f43db..e9f5153f78700 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/TransportCloneSnapshotAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/clone/TransportCloneSnapshotAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -90,7 +90,11 @@ protected AcknowledgedResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(CloneSnapshotRequest request, ClusterState state, ActionListener listener) { + protected void clusterManagerOperation( + 
CloneSnapshotRequest request, + ClusterState state, + ActionListener listener + ) { snapshotsService.cloneSnapshot(request, ActionListener.map(listener, v -> new AcknowledgedResponse(true))); } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java index 7b4a92497c41b..d78a4c95246b4 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java @@ -521,7 +521,7 @@ public boolean equals(Object o) { && Arrays.equals(indices, that.indices) && Objects.equals(indicesOptions, that.indicesOptions) && Objects.equals(settings, that.settings) - && Objects.equals(masterNodeTimeout, that.masterNodeTimeout) + && Objects.equals(clusterManagerNodeTimeout, that.clusterManagerNodeTimeout) && Objects.equals(userMetadata, that.userMetadata); } @@ -562,8 +562,8 @@ public String toString() { + includeGlobalState + ", waitForCompletion=" + waitForCompletion - + ", masterNodeTimeout=" - + masterNodeTimeout + + ", clusterManagerNodeTimeout=" + + clusterManagerNodeTimeout + ", metadata=" + userMetadata + '}'; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java index 4b28bafc258cf..ed4af6d915792 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java @@ -98,7 +98,7 @@ protected ClusterBlockException checkBlock(CreateSnapshotRequest request, Cluste } @Override - protected void masterOperation( + protected void clusterManagerOperation( final CreateSnapshotRequest request, ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/DeleteSnapshotAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/DeleteSnapshotAction.java index 60d9cadc0aede..0b98a4b31fd53 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/DeleteSnapshotAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/DeleteSnapshotAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.snapshots.delete; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Delete snapshot action diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/DeleteSnapshotRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/DeleteSnapshotRequestBuilder.java index ad41d94227da8..f61c58d449a02 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/DeleteSnapshotRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/DeleteSnapshotRequestBuilder.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.cluster.snapshots.delete; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import 
org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeOperationRequestBuilder; import org.opensearch.client.OpenSearchClient; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java index 7bbd91a4b4a03..c78968c2a0848 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -95,7 +95,7 @@ protected ClusterBlockException checkBlock(DeleteSnapshotRequest request, Cluste } @Override - protected void masterOperation( + protected void clusterManagerOperation( final DeleteSnapshotRequest request, ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java index 0be3f8be0bc80..d05e62045a1a2 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java @@ -122,7 +122,7 @@ protected ClusterBlockException checkBlock(GetSnapshotsRequest request, ClusterS } @Override - protected void masterOperation( + protected void clusterManagerOperation( final GetSnapshotsRequest request, final ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java index fa7c0c6efa469..e7d95b9e40880 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java @@ -102,7 +102,7 @@ protected ClusterBlockException checkBlock(RestoreSnapshotRequest request, Clust } @Override - protected void masterOperation( + protected void clusterManagerOperation( final RestoreSnapshotRequest request, final ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java index 31b19848f59f4..bd7391a7939a1 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java @@ -136,7 +136,7 @@ protected SnapshotsStatusResponse 
read(StreamInput in) throws IOException { } @Override - protected void masterOperation( + protected void clusterManagerOperation( final SnapshotsStatusRequest request, final ClusterState state, final ActionListener listener @@ -169,7 +169,7 @@ protected void masterOperation( } transportNodesSnapshotsStatus.execute( new TransportNodesSnapshotsStatus.Request(nodesIds.toArray(Strings.EMPTY_ARRAY)).snapshots(snapshots) - .timeout(request.masterNodeTimeout()), + .timeout(request.clusterManagerNodeTimeout()), ActionListener.wrap( nodeSnapshotStatuses -> threadPool.generic() .execute( diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/state/TransportClusterStateAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/state/TransportClusterStateAction.java index 673153c40bf46..503f5eecc8e30 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/state/TransportClusterStateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/state/TransportClusterStateAction.java @@ -115,7 +115,7 @@ protected ClusterBlockException checkBlock(ClusterStateRequest request, ClusterS } @Override - protected void masterOperation( + protected void clusterManagerOperation( final ClusterStateRequest request, final ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptAction.java index 483004a3365c5..3645ef21d2e12 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.storedscripts; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action for deleting stored scripts diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java index a23f2fea698fd..93d2c3ba3c452 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.storedscripts; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java index c77ebfa85422f..34e0d429f2098 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java @@ -32,8 +32,8 @@ package org.opensearch.action.admin.cluster.storedscripts; -import 
org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchClient; /** diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptAction.java index cc571c2f26136..2845d895a69e8 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.storedscripts; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action for putting stored script diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java index 8b9eb83bb531c..2bddf2823f962 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.cluster.storedscripts; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.bytes.BytesReference; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java index b829cc3466f70..ef3c14df29627 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java @@ -32,8 +32,8 @@ package org.opensearch.action.admin.cluster.storedscripts; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchClient; import org.opensearch.common.bytes.BytesReference; import org.opensearch.common.xcontent.XContentType; diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportDeleteStoredScriptAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportDeleteStoredScriptAction.java index 60990e14e1a57..4bc8d836a8200 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportDeleteStoredScriptAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportDeleteStoredScriptAction.java @@ -34,7 +34,7 @@ 
import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -90,8 +90,11 @@ protected AcknowledgedResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(DeleteStoredScriptRequest request, ClusterState state, ActionListener listener) - throws Exception { + protected void clusterManagerOperation( + DeleteStoredScriptRequest request, + ClusterState state, + ActionListener listener + ) throws Exception { scriptService.deleteStoredScript(clusterService, request, listener); } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java index d2d5a49fcde23..8dbadf34ab06b 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java @@ -89,8 +89,11 @@ protected GetStoredScriptResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(GetStoredScriptRequest request, ClusterState state, ActionListener listener) - throws Exception { + protected void clusterManagerOperation( + GetStoredScriptRequest request, + ClusterState state, + ActionListener listener + ) throws Exception { listener.onResponse(new GetStoredScriptResponse(request.id(), scriptService.getStoredScript(state, request))); } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java index c8ae0c213b3dc..bb259f173d470 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -90,8 +90,11 @@ protected AcknowledgedResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(PutStoredScriptRequest request, ClusterState state, ActionListener listener) - throws Exception { + protected void clusterManagerOperation( + PutStoredScriptRequest request, + ClusterState state, + ActionListener listener + ) throws Exception { scriptService.putStoredScript(clusterService, request, listener); } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java index 8962f0395cc6f..e08a76c5cfc2a 100644 --- 
a/server/src/main/java/org/opensearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java @@ -100,7 +100,7 @@ protected ClusterBlockException checkBlock(PendingClusterTasksRequest request, C } @Override - protected void masterOperation( + protected void clusterManagerOperation( PendingClusterTasksRequest request, ClusterState state, ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesAction.java b/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesAction.java index 9ce10c2853ff6..4d735e984c34e 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.alias; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action for listing index aliases diff --git a/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequest.java index 0119b892dadf8..62f51aa3f3bff 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequest.java @@ -37,7 +37,7 @@ import org.opensearch.action.ActionRequestValidationException; import org.opensearch.action.AliasesRequest; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.cluster.metadata.AliasAction; import org.opensearch.common.ParseField; import org.opensearch.common.ParsingException; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequestBuilder.java index ebc1fc9e9e2ce..13c57cc781925 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequestBuilder.java @@ -32,8 +32,8 @@ package org.opensearch.action.admin.indices.alias; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchClient; import org.opensearch.index.query.QueryBuilder; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/alias/TransportIndicesAliasesAction.java b/server/src/main/java/org/opensearch/action/admin/indices/alias/TransportIndicesAliasesAction.java index 90bc246fe34e7..3e453b42c3d7c 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/alias/TransportIndicesAliasesAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/alias/TransportIndicesAliasesAction.java @@ -38,7 +38,7 @@ import org.opensearch.action.ActionListener; import 
org.opensearch.action.RequestValidators; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ack.ClusterStateUpdateResponse; @@ -126,7 +126,7 @@ protected ClusterBlockException checkBlock(IndicesAliasesRequest request, Cluste } @Override - protected void masterOperation( + protected void clusterManagerOperation( final IndicesAliasesRequest request, final ClusterState state, final ActionListener listener @@ -200,7 +200,7 @@ protected void masterOperation( request.aliasActions().clear(); IndicesAliasesClusterStateUpdateRequest updateRequest = new IndicesAliasesClusterStateUpdateRequest(unmodifiableList(finalActions)) .ackTimeout(request.timeout()) - .masterNodeTimeout(request.masterNodeTimeout()); + .masterNodeTimeout(request.clusterManagerNodeTimeout()); indexAliasesService.indicesAliases(updateRequest, new ActionListener() { @Override diff --git a/server/src/main/java/org/opensearch/action/admin/indices/alias/get/TransportGetAliasesAction.java b/server/src/main/java/org/opensearch/action/admin/indices/alias/get/TransportGetAliasesAction.java index a2f975ff9cbbc..fe9c2dbccdf7b 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/alias/get/TransportGetAliasesAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/alias/get/TransportGetAliasesAction.java @@ -112,7 +112,7 @@ protected GetAliasesResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(GetAliasesRequest request, ClusterState state, ActionListener listener) { + protected void clusterManagerOperation(GetAliasesRequest request, ClusterState state, ActionListener listener) { String[] concreteIndices; // Switch to a context which will drop any deprecation warnings, because there may be indices resolved here which are not // returned in the final response. We'll add warnings back later if necessary in checkSystemIndexAccess. 
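
Reviewer note (not part of the patch): the transport-action hunks above are one mechanical rename of the protected hook on `TransportClusterManagerNodeAction` subclasses, `masterOperation(...)` becoming `clusterManagerOperation(...)`, with the parameters otherwise unchanged. The sketch below only illustrates that shape; it uses hypothetical stand-in types rather than the real OpenSearch classes, and the delegating `masterOperation` kept in the stand-in base class is an assumption made for the sketch, not something this diff adds.

```java
import java.util.function.Consumer;

// Hypothetical stand-in for the cluster-manager transport base class (illustration only).
abstract class ClusterManagerNodeActionSketch<Request, Response> {

    // New hook name: subclasses such as the transport actions in this patch override this.
    protected abstract void clusterManagerOperation(Request request, Object clusterState, Consumer<Response> listener)
        throws Exception;

    // Old entry point retained here purely as a delegate for the sketch (assumption, for illustration).
    protected final void masterOperation(Request request, Object clusterState, Consumer<Response> listener) throws Exception {
        clusterManagerOperation(request, clusterState, listener);
    }
}

// A subclass after this patch overrides only the new hook.
class TransportExampleActionSketch extends ClusterManagerNodeActionSketch<String, Boolean> {
    @Override
    protected void clusterManagerOperation(String request, Object clusterState, Consumer<Boolean> listener) {
        listener.accept(Boolean.TRUE); // acknowledge the request
    }
}
```
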
diff --git a/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexRequest.java index 529767a00af82..b16cabfda4d67 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexRequest.java @@ -37,7 +37,7 @@ import org.opensearch.action.IndicesRequest; import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; import org.opensearch.common.util.CollectionUtils; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexRequestBuilder.java index 15307c821178c..b3b53a0043c70 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexRequestBuilder.java @@ -34,7 +34,7 @@ import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; import org.opensearch.client.OpenSearchClient; /** diff --git a/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexResponse.java b/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexResponse.java index 4206b4e9e0926..1fc9017359a8c 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexResponse.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/close/CloseIndexResponse.java @@ -34,7 +34,7 @@ import org.opensearch.LegacyESVersion; import org.opensearch.OpenSearchException; import org.opensearch.action.support.DefaultShardOperationFailedException; -import org.opensearch.action.support.clustermanager.ShardsAcknowledgedResponse; +import org.opensearch.action.support.master.ShardsAcknowledgedResponse; import org.opensearch.common.Nullable; import org.opensearch.common.Strings; import org.opensearch.common.io.stream.StreamInput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/close/TransportCloseIndexAction.java b/server/src/main/java/org/opensearch/action/admin/indices/close/TransportCloseIndexAction.java index 5f3ed38a05228..c4c789a8de90e 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/close/TransportCloseIndexAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/close/TransportCloseIndexAction.java @@ -140,7 +140,7 @@ protected ClusterBlockException checkBlock(CloseIndexRequest request, ClusterSta } @Override - protected void masterOperation( + protected void clusterManagerOperation( final CloseIndexRequest request, final ClusterState state, final ActionListener listener @@ -149,7 +149,7 @@ protected void masterOperation( } @Override - protected void masterOperation( + protected void clusterManagerOperation( final Task task, final CloseIndexRequest request, final ClusterState state, @@ -163,7 +163,10 @@ protected void masterOperation( final 
CloseIndexClusterStateUpdateRequest closeRequest = new CloseIndexClusterStateUpdateRequest(task.getId()).ackTimeout( request.timeout() - ).masterNodeTimeout(request.masterNodeTimeout()).waitForActiveShards(request.waitForActiveShards()).indices(concreteIndices); + ) + .masterNodeTimeout(request.clusterManagerNodeTimeout()) + .waitForActiveShards(request.waitForActiveShards()) + .indices(concreteIndices); indexStateService.closeIndices(closeRequest, ActionListener.delegateResponse(listener, (delegatedListener, t) -> { logger.debug(() -> new ParameterizedMessage("failed to close indices [{}]", (Object) concreteIndices), t); delegatedListener.onFailure(t); diff --git a/server/src/main/java/org/opensearch/action/admin/indices/create/AutoCreateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/create/AutoCreateAction.java index b931ab4a924ed..73a2996945aff 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/create/AutoCreateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/create/AutoCreateAction.java @@ -112,7 +112,11 @@ protected CreateIndexResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(CreateIndexRequest request, ClusterState state, ActionListener finalListener) { + protected void clusterManagerOperation( + CreateIndexRequest request, + ClusterState state, + ActionListener finalListener + ) { AtomicReference indexNameRef = new AtomicReference<>(); ActionListener listener = ActionListener.wrap(response -> { String indexName = indexNameRef.get(); @@ -144,7 +148,7 @@ public ClusterState execute(ClusterState currentState) throws Exception { if (dataStreamTemplate != null) { CreateDataStreamClusterStateUpdateRequest createRequest = new CreateDataStreamClusterStateUpdateRequest( request.index(), - request.masterNodeTimeout(), + request.clusterManagerNodeTimeout(), request.timeout() ); ClusterState clusterState = metadataCreateDataStreamService.createDataStream(createRequest, currentState); @@ -157,7 +161,7 @@ public ClusterState execute(ClusterState currentState) throws Exception { request.cause(), indexName, request.index() - ).ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout()); + ).ackTimeout(request.timeout()).masterNodeTimeout(request.clusterManagerNodeTimeout()); return createIndexService.applyCreateIndexRequest(currentState, updateRequest, false); } } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexRequest.java index 28db8dad69084..95837d82be7ac 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexRequest.java @@ -42,7 +42,7 @@ import org.opensearch.action.admin.indices.mapping.put.PutMappingRequest; import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.ParseField; import org.opensearch.common.Strings; import org.opensearch.common.bytes.BytesArray; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexRequestBuilder.java index 2de77681aa127..4c5780b87b3f2 
100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexRequestBuilder.java @@ -34,7 +34,7 @@ import org.opensearch.action.admin.indices.alias.Alias; import org.opensearch.action.support.ActiveShardCount; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; import org.opensearch.client.OpenSearchClient; import org.opensearch.common.bytes.BytesReference; import org.opensearch.common.settings.Settings; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexResponse.java b/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexResponse.java index fca1f7cce71d9..871576d8e336a 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexResponse.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/create/CreateIndexResponse.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.indices.create; -import org.opensearch.action.support.clustermanager.ShardsAcknowledgedResponse; +import org.opensearch.action.support.master.ShardsAcknowledgedResponse; import org.opensearch.common.ParseField; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/create/TransportCreateIndexAction.java b/server/src/main/java/org/opensearch/action/admin/indices/create/TransportCreateIndexAction.java index 8b2a62304b71c..daba4d0f167f8 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/create/TransportCreateIndexAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/create/TransportCreateIndexAction.java @@ -95,7 +95,7 @@ protected ClusterBlockException checkBlock(CreateIndexRequest request, ClusterSt } @Override - protected void masterOperation( + protected void clusterManagerOperation( final CreateIndexRequest request, final ClusterState state, final ActionListener listener @@ -111,7 +111,7 @@ protected void masterOperation( indexName, request.index() ).ackTimeout(request.timeout()) - .masterNodeTimeout(request.masterNodeTimeout()) + .masterNodeTimeout(request.clusterManagerNodeTimeout()) .settings(request.settings()) .mappings(request.mappings()) .aliases(request.aliases()) diff --git a/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/DeleteDanglingIndexAction.java b/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/DeleteDanglingIndexAction.java index 2ccc422f2edd6..6559ef4cd89bd 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/DeleteDanglingIndexAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/DeleteDanglingIndexAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.dangling.delete; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * This action causes a dangling index to be considered as deleted by the cluster. 
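
Reviewer note (not part of the patch): on the request side only the accessor is renamed (`request.masterNodeTimeout()` becomes `request.clusterManagerNodeTimeout()`), while the builder-style setter on the cluster-state update requests keeps its old name in this change, so call sites now read `.masterNodeTimeout(request.clusterManagerNodeTimeout())`, as in the TransportCreateIndexAction and TransportIndicesAliasesAction hunks above. The sketch below restates that call-site shape with hypothetical stand-in types, purely for illustration.

```java
// Hypothetical stand-ins; only the method names mirror the patch.
final class UpdateRequestSketch {
    private long ackTimeoutMs;
    private long clusterManagerTimeoutMs;

    UpdateRequestSketch ackTimeout(long millis) {
        this.ackTimeoutMs = millis;
        return this;
    }

    // Builder-style setter keeps its pre-rename name in this patch.
    UpdateRequestSketch masterNodeTimeout(long millis) {
        this.clusterManagerTimeoutMs = millis;
        return this;
    }
}

final class TransportRequestSketch {
    long timeout() {
        return 30_000L;
    }

    // Renamed accessor (was masterNodeTimeout()).
    long clusterManagerNodeTimeout() {
        return 30_000L;
    }
}

final class CallSiteSketch {
    static UpdateRequestSketch build(TransportRequestSketch request) {
        // Same shape as the hunks above: the renamed accessor feeds the unrenamed setter.
        return new UpdateRequestSketch().ackTimeout(request.timeout()).masterNodeTimeout(request.clusterManagerNodeTimeout());
    }
}
```
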
diff --git a/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/DeleteDanglingIndexRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/DeleteDanglingIndexRequest.java index 3ded069dd6d89..4fad5498de375 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/DeleteDanglingIndexRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/DeleteDanglingIndexRequest.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.dangling.delete; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/TransportDeleteDanglingIndexAction.java b/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/TransportDeleteDanglingIndexAction.java index df3c5c4ff99ac..015a0f6727ab7 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/TransportDeleteDanglingIndexAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/dangling/delete/TransportDeleteDanglingIndexAction.java @@ -44,7 +44,7 @@ import org.opensearch.action.admin.indices.dangling.list.ListDanglingIndicesResponse; import org.opensearch.action.admin.indices.dangling.list.NodeListDanglingIndicesResponse; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.client.node.NodeClient; import org.opensearch.cluster.AckedClusterStateUpdateTask; @@ -115,7 +115,7 @@ protected AcknowledgedResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation( + protected void clusterManagerOperation( DeleteDanglingIndexRequest deleteRequest, ClusterState state, ActionListener deleteListener diff --git a/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/ImportDanglingIndexAction.java b/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/ImportDanglingIndexAction.java index 308720aa6139f..5f7a096b1d749 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/ImportDanglingIndexAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/ImportDanglingIndexAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.dangling.import_index; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Represents a request to import a particular dangling index. 
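
Reviewer note (not part of the patch): the import churn in these files is directional rather than contradictory. The acknowledgement request/response/builder types (AcknowledgedRequest, AcknowledgedResponse, AcknowledgedRequestBuilder, ShardsAcknowledgedResponse) are imported from `org.opensearch.action.support.master`, while the cluster-manager transport infrastructure (TransportClusterManagerNodeAction, ClusterManagerNodeOperationRequestBuilder) stays under `org.opensearch.action.support.clustermanager`. The fragment below restates the import pair exactly as it appears in the TransportDeleteDanglingIndexAction hunk above; it is not standalone code and needs the OpenSearch server module on the classpath.

```java
// Import layout after this change (copied from the hunks above, not new code):
import org.opensearch.action.support.master.AcknowledgedResponse;                      // ack types: ...support.master
import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; // base action: ...support.clustermanager
```
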
diff --git a/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/ImportDanglingIndexRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/ImportDanglingIndexRequest.java index 0b442e33f1e21..73fbad248b8b1 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/ImportDanglingIndexRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/ImportDanglingIndexRequest.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.dangling.import_index; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/TransportImportDanglingIndexAction.java b/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/TransportImportDanglingIndexAction.java index 1b6102cbbc2fd..2010515249371 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/TransportImportDanglingIndexAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/dangling/import_index/TransportImportDanglingIndexAction.java @@ -50,7 +50,7 @@ import org.opensearch.action.admin.indices.dangling.find.NodeFindDanglingIndexResponse; import org.opensearch.action.support.ActionFilters; import org.opensearch.action.support.HandledTransportAction; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.node.NodeClient; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.common.inject.Inject; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/datastream/CreateDataStreamAction.java b/server/src/main/java/org/opensearch/action/admin/indices/datastream/CreateDataStreamAction.java index c5c37e06137d2..ddc93dd1fcf6c 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/datastream/CreateDataStreamAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/datastream/CreateDataStreamAction.java @@ -38,8 +38,8 @@ import org.opensearch.action.ValidateActions; import org.opensearch.action.support.ActionFilters; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -162,11 +162,11 @@ protected AcknowledgedResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(Request request, ClusterState state, ActionListener listener) + protected void clusterManagerOperation(Request request, ClusterState state, ActionListener listener) throws Exception { CreateDataStreamClusterStateUpdateRequest updateRequest = new CreateDataStreamClusterStateUpdateRequest( request.name, - request.masterNodeTimeout(), + 
request.clusterManagerNodeTimeout(), request.timeout() ); metadataCreateDataStreamService.createDataStream(updateRequest, listener); diff --git a/server/src/main/java/org/opensearch/action/admin/indices/datastream/DeleteDataStreamAction.java b/server/src/main/java/org/opensearch/action/admin/indices/datastream/DeleteDataStreamAction.java index 1b3485ad65203..74b0a84782283 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/datastream/DeleteDataStreamAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/datastream/DeleteDataStreamAction.java @@ -39,7 +39,7 @@ import org.opensearch.action.IndicesRequest; import org.opensearch.action.support.ActionFilters; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; @@ -192,7 +192,7 @@ protected AcknowledgedResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(Request request, ClusterState state, ActionListener listener) + protected void clusterManagerOperation(Request request, ClusterState state, ActionListener listener) throws Exception { clusterService.submitStateUpdateTask( "remove-data-stream [" + Strings.arrayToCommaDelimitedString(request.names) + "]", @@ -200,7 +200,7 @@ protected void masterOperation(Request request, ClusterState state, ActionListen @Override public TimeValue timeout() { - return request.masterNodeTimeout(); + return request.clusterManagerNodeTimeout(); } @Override diff --git a/server/src/main/java/org/opensearch/action/admin/indices/datastream/GetDataStreamAction.java b/server/src/main/java/org/opensearch/action/admin/indices/datastream/GetDataStreamAction.java index 6140d10bd293c..61fad265e16e6 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/datastream/GetDataStreamAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/datastream/GetDataStreamAction.java @@ -313,7 +313,7 @@ protected Response read(StreamInput in) throws IOException { } @Override - protected void masterOperation(Request request, ClusterState state, ActionListener listener) throws Exception { + protected void clusterManagerOperation(Request request, ClusterState state, ActionListener listener) throws Exception { List dataStreams = getDataStreams(state, indexNameExpressionResolver, request); List dataStreamInfos = new ArrayList<>(dataStreams.size()); for (DataStream dataStream : dataStreams) { diff --git a/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexAction.java b/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexAction.java index a3aa9e751a8ec..696c1244c7504 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.delete; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action for deleting an index diff --git a/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexRequest.java 
b/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexRequest.java index b8100502a2e0a..7475121a910c4 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexRequest.java @@ -35,7 +35,7 @@ import org.opensearch.action.ActionRequestValidationException; import org.opensearch.action.IndicesRequest; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; import org.opensearch.common.util.CollectionUtils; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexRequestBuilder.java index a1cee63875a77..33f6342e94139 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/delete/DeleteIndexRequestBuilder.java @@ -33,8 +33,8 @@ package org.opensearch.action.admin.indices.delete; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchClient; /** diff --git a/server/src/main/java/org/opensearch/action/admin/indices/delete/TransportDeleteIndexAction.java b/server/src/main/java/org/opensearch/action/admin/indices/delete/TransportDeleteIndexAction.java index 0cc1a164603f6..a3d14846338e7 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/delete/TransportDeleteIndexAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/delete/TransportDeleteIndexAction.java @@ -38,7 +38,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; import org.opensearch.action.support.DestructiveOperations; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ack.ClusterStateUpdateResponse; @@ -115,7 +115,7 @@ protected ClusterBlockException checkBlock(DeleteIndexRequest request, ClusterSt } @Override - protected void masterOperation( + protected void clusterManagerOperation( final DeleteIndexRequest request, final ClusterState state, final ActionListener listener @@ -127,7 +127,7 @@ protected void masterOperation( } DeleteIndexClusterStateUpdateRequest deleteRequest = new DeleteIndexClusterStateUpdateRequest().ackTimeout(request.timeout()) - .masterNodeTimeout(request.masterNodeTimeout()) + .masterNodeTimeout(request.clusterManagerNodeTimeout()) .indices(concreteIndices.toArray(new Index[concreteIndices.size()])); deleteIndexService.deleteIndices(deleteRequest, new ActionListener() { diff --git a/server/src/main/java/org/opensearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java 
b/server/src/main/java/org/opensearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java index a7f73a203f4c5..f5f7e0e9ea7b7 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java @@ -103,7 +103,7 @@ protected ClusterBlockException checkBlock(IndicesExistsRequest request, Cluster } @Override - protected void masterOperation( + protected void clusterManagerOperation( final IndicesExistsRequest request, final ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/indices/get/TransportGetIndexAction.java b/server/src/main/java/org/opensearch/action/admin/indices/get/TransportGetIndexAction.java index 0142e70d18221..de272bab332a7 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/get/TransportGetIndexAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/get/TransportGetIndexAction.java @@ -98,7 +98,7 @@ protected GetIndexResponse read(StreamInput in) throws IOException { } @Override - protected void doMasterOperation( + protected void doClusterManagerOperation( final GetIndexRequest request, String[] concreteIndices, final ClusterState state, diff --git a/server/src/main/java/org/opensearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java b/server/src/main/java/org/opensearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java index 71438ad300e0c..e724320728b66 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java @@ -88,7 +88,7 @@ protected GetMappingsResponse read(StreamInput in) throws IOException { } @Override - protected void doMasterOperation( + protected void doClusterManagerOperation( final GetMappingsRequest request, String[] concreteIndices, final ClusterState state, diff --git a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/AutoPutMappingAction.java b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/AutoPutMappingAction.java index 6f0cad2fe178d..f2430eb54db9b 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/AutoPutMappingAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/AutoPutMappingAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.mapping.put; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action to automatically put field mappings. 
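Note: the import hunks in this stretch swap org.opensearch.action.support.clustermanager.AcknowledgedResponse (and the related AcknowledgedRequest/RequestBuilder and ShardsAcknowledgedResponse imports) for the org.opensearch.action.support.master variants, while TransportClusterManagerNodeAction stays under the clustermanager package. The sketch below is only a generic, single-file illustration of how a type can stay importable from two package locations while call sites are flipped: the location being phased out holds a thin subclass of the real implementation. Nested static classes stand in for the two packages so the example stays runnable; this is not how the OpenSearch packages are actually wired.

    // Standalone sketch: keeping a response type reachable under two package names
    // during a package move. Nested classes stand in for the two packages so this
    // remains a single runnable file; all names here are hypothetical.
    public class PackageMoveSketch {

        // Stand-in for the package location that holds the real implementation.
        static class ClusterManagerSupport {
            static class AcknowledgedResponse {
                private final boolean acknowledged;

                AcknowledgedResponse(boolean acknowledged) {
                    this.acknowledged = acknowledged;
                }

                boolean isAcknowledged() {
                    return acknowledged;
                }
            }
        }

        // Stand-in for the other location: a thin subclass kept at whichever package
        // is being phased out, so imports of either name compile during the transition.
        static class MasterSupport {
            static class AcknowledgedResponse extends ClusterManagerSupport.AcknowledgedResponse {
                AcknowledgedResponse(boolean acknowledged) {
                    super(acknowledged);
                }
            }
        }

        public static void main(String[] args) {
            // Code written against one name still satisfies APIs typed to the other.
            ClusterManagerSupport.AcknowledgedResponse response = new MasterSupport.AcknowledgedResponse(true);
            System.out.println(response.isAcknowledged());
        }
    }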
diff --git a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingAction.java b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingAction.java index 9088d1241ad2a..8bca1b59ee2e2 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.mapping.put; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action to put field mappings. diff --git a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingRequest.java index a02dd620b8661..85fd74f0762a5 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingRequest.java @@ -39,8 +39,8 @@ import org.opensearch.action.ActionRequestValidationException; import org.opensearch.action.IndicesRequest; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.Strings; import org.opensearch.common.bytes.BytesArray; import org.opensearch.common.bytes.BytesReference; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java index f0e0876dbf877..78115e1fab4ec 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java @@ -33,8 +33,8 @@ package org.opensearch.action.admin.indices.mapping.put; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchClient; import org.opensearch.common.xcontent.XContentBuilder; import org.opensearch.common.xcontent.XContentType; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/TransportAutoPutMappingAction.java b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/TransportAutoPutMappingAction.java index e42a6841867ea..c4dad614c53dd 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/TransportAutoPutMappingAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/TransportAutoPutMappingAction.java @@ -33,7 +33,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import 
org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -107,7 +107,7 @@ protected ClusterBlockException checkBlock(PutMappingRequest request, ClusterSta } @Override - protected void masterOperation( + protected void clusterManagerOperation( final PutMappingRequest request, final ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/TransportPutMappingAction.java b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/TransportPutMappingAction.java index 33385c421722c..de546f428bafa 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/TransportPutMappingAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/mapping/put/TransportPutMappingAction.java @@ -38,7 +38,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.RequestValidators; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ack.ClusterStateUpdateResponse; @@ -119,7 +119,7 @@ protected ClusterBlockException checkBlock(PutMappingRequest request, ClusterSta } @Override - protected void masterOperation( + protected void clusterManagerOperation( final PutMappingRequest request, final ClusterState state, final ActionListener listener @@ -171,7 +171,7 @@ static void performMappingUpdate( ) { PutMappingClusterStateUpdateRequest updateRequest = new PutMappingClusterStateUpdateRequest(request.source()).indices( concreteIndices - ).ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout()); + ).ackTimeout(request.timeout()).masterNodeTimeout(request.clusterManagerNodeTimeout()); metadataMappingService.putMapping(updateRequest, new ActionListener() { diff --git a/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexRequest.java index 21c5fcd6ed1c2..c6c1c2dc8f0cb 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexRequest.java @@ -36,7 +36,7 @@ import org.opensearch.action.IndicesRequest; import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; import org.opensearch.common.util.CollectionUtils; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexRequestBuilder.java index 2760fb43a727f..bf09c3f173491 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexRequestBuilder.java @@ -34,7 +34,7 @@ import 
org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; import org.opensearch.client.OpenSearchClient; /** diff --git a/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexResponse.java b/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexResponse.java index 38ec7226d3c68..f7bd4cf31aa17 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexResponse.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/open/OpenIndexResponse.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.indices.open; -import org.opensearch.action.support.clustermanager.ShardsAcknowledgedResponse; +import org.opensearch.action.support.master.ShardsAcknowledgedResponse; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; import org.opensearch.common.xcontent.ConstructingObjectParser; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/open/TransportOpenIndexAction.java b/server/src/main/java/org/opensearch/action/admin/indices/open/TransportOpenIndexAction.java index aa17027aa3e6a..2ccb5f6d22886 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/open/TransportOpenIndexAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/open/TransportOpenIndexAction.java @@ -114,7 +114,7 @@ protected ClusterBlockException checkBlock(OpenIndexRequest request, ClusterStat } @Override - protected void masterOperation( + protected void clusterManagerOperation( final OpenIndexRequest request, final ClusterState state, final ActionListener listener @@ -125,7 +125,7 @@ protected void masterOperation( return; } OpenIndexClusterStateUpdateRequest updateRequest = new OpenIndexClusterStateUpdateRequest().ackTimeout(request.timeout()) - .masterNodeTimeout(request.masterNodeTimeout()) + .masterNodeTimeout(request.clusterManagerNodeTimeout()) .indices(concreteIndices) .waitForActiveShards(request.waitForActiveShards()); diff --git a/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockRequest.java index 7715480fcaca5..7d208b5e0ac77 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockRequest.java @@ -35,7 +35,7 @@ import org.opensearch.action.ActionRequestValidationException; import org.opensearch.action.IndicesRequest; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.cluster.metadata.IndexMetadata.APIBlock; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockRequestBuilder.java index 66ff659c6a90a..8322ba19f433e 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockRequestBuilder.java +++ 
b/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockRequestBuilder.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.readonly; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; import org.opensearch.client.OpenSearchClient; import org.opensearch.cluster.metadata.IndexMetadata.APIBlock; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockResponse.java b/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockResponse.java index 22b12d195b9c3..6a07a645f9315 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockResponse.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/readonly/AddIndexBlockResponse.java @@ -33,7 +33,7 @@ import org.opensearch.OpenSearchException; import org.opensearch.action.support.DefaultShardOperationFailedException; -import org.opensearch.action.support.clustermanager.ShardsAcknowledgedResponse; +import org.opensearch.action.support.master.ShardsAcknowledgedResponse; import org.opensearch.common.Nullable; import org.opensearch.common.Strings; import org.opensearch.common.io.stream.StreamInput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/readonly/TransportAddIndexBlockAction.java b/server/src/main/java/org/opensearch/action/admin/indices/readonly/TransportAddIndexBlockAction.java index b505ba5927f66..560d2e6389c63 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/readonly/TransportAddIndexBlockAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/readonly/TransportAddIndexBlockAction.java @@ -123,13 +123,13 @@ protected ClusterBlockException checkBlock(AddIndexBlockRequest request, Cluster } @Override - protected void masterOperation(AddIndexBlockRequest request, ClusterState state, ActionListener listener) + protected void clusterManagerOperation(AddIndexBlockRequest request, ClusterState state, ActionListener listener) throws Exception { throw new UnsupportedOperationException("The task parameter is required"); } @Override - protected void masterOperation( + protected void clusterManagerOperation( final Task task, final AddIndexBlockRequest request, final ClusterState state, @@ -144,7 +144,7 @@ protected void masterOperation( final AddIndexBlockClusterStateUpdateRequest addBlockRequest = new AddIndexBlockClusterStateUpdateRequest( request.getBlock(), task.getId() - ).ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout()).indices(concreteIndices); + ).ackTimeout(request.timeout()).masterNodeTimeout(request.clusterManagerNodeTimeout()).indices(concreteIndices); indexStateService.addIndexBlock(addBlockRequest, ActionListener.delegateResponse(listener, (delegatedListener, t) -> { logger.debug(() -> new ParameterizedMessage("failed to mark indices as readonly [{}]", (Object) concreteIndices), t); delegatedListener.onFailure(t); diff --git a/server/src/main/java/org/opensearch/action/admin/indices/rollover/MetadataRolloverService.java b/server/src/main/java/org/opensearch/action/admin/indices/rollover/MetadataRolloverService.java index e3e09442764e2..a40ac35091082 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/rollover/MetadataRolloverService.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/rollover/MetadataRolloverService.java @@ 
-293,7 +293,7 @@ static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest( b.put(settings); } return new CreateIndexClusterStateUpdateRequest(cause, targetIndexName, providedIndexName).ackTimeout(createIndexRequest.timeout()) - .masterNodeTimeout(createIndexRequest.masterNodeTimeout()) + .masterNodeTimeout(createIndexRequest.clusterManagerNodeTimeout()) .settings(b.build()) .aliases(createIndexRequest.aliases()) .waitForActiveShards(ActiveShardCount.NONE) // not waiting for shards here, will wait on the alias switch operation diff --git a/server/src/main/java/org/opensearch/action/admin/indices/rollover/RolloverRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/rollover/RolloverRequest.java index 3216fc9ce0b71..db5dd0af6ab2a 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/rollover/RolloverRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/rollover/RolloverRequest.java @@ -36,7 +36,7 @@ import org.opensearch.action.admin.indices.create.CreateIndexRequest; import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.ParseField; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/rollover/RolloverResponse.java b/server/src/main/java/org/opensearch/action/admin/indices/rollover/RolloverResponse.java index ed08595f55cea..330d258f9461f 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/rollover/RolloverResponse.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/rollover/RolloverResponse.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.indices.rollover; -import org.opensearch.action.support.clustermanager.ShardsAcknowledgedResponse; +import org.opensearch.action.support.master.ShardsAcknowledgedResponse; import org.opensearch.common.ParseField; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/rollover/TransportRolloverAction.java b/server/src/main/java/org/opensearch/action/admin/indices/rollover/TransportRolloverAction.java index 8ab8061039aa9..4e5e7ec9184fe 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/rollover/TransportRolloverAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/rollover/TransportRolloverAction.java @@ -130,13 +130,13 @@ protected ClusterBlockException checkBlock(RolloverRequest request, ClusterState } @Override - protected void masterOperation(RolloverRequest request, ClusterState state, ActionListener listener) + protected void clusterManagerOperation(RolloverRequest request, ClusterState state, ActionListener listener) throws Exception { throw new UnsupportedOperationException("The task parameter is required"); } @Override - protected void masterOperation( + protected void clusterManagerOperation( Task task, final RolloverRequest rolloverRequest, final ClusterState state, @@ -215,7 +215,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS activeShardsObserver.waitForActiveShards( new String[] { rolloverIndexName }, rolloverRequest.getCreateIndexRequest().waitForActiveShards(), - rolloverRequest.masterNodeTimeout(), 
+ rolloverRequest.clusterManagerNodeTimeout(), isShardsAcknowledged -> listener.onResponse( new RolloverResponse( sourceIndexName, diff --git a/server/src/main/java/org/opensearch/action/admin/indices/settings/get/TransportGetSettingsAction.java b/server/src/main/java/org/opensearch/action/admin/indices/settings/get/TransportGetSettingsAction.java index 000d6d70d7af7..cfa75167afa09 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/settings/get/TransportGetSettingsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/settings/get/TransportGetSettingsAction.java @@ -110,7 +110,7 @@ private static boolean isFilteredRequest(GetSettingsRequest request) { } @Override - protected void masterOperation(GetSettingsRequest request, ClusterState state, ActionListener listener) { + protected void clusterManagerOperation(GetSettingsRequest request, ClusterState state, ActionListener listener) { Index[] concreteIndices = indexNameExpressionResolver.concreteIndices(state, request); ImmutableOpenMap.Builder indexToSettingsBuilder = ImmutableOpenMap.builder(); ImmutableOpenMap.Builder indexToDefaultSettingsBuilder = ImmutableOpenMap.builder(); diff --git a/server/src/main/java/org/opensearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java b/server/src/main/java/org/opensearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java index 4b6dd3a28c3bf..a959fb043e4c6 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java @@ -37,7 +37,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ack.ClusterStateUpdateResponse; @@ -116,7 +116,7 @@ protected AcknowledgedResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation( + protected void clusterManagerOperation( final UpdateSettingsRequest request, final ClusterState state, final ActionListener listener @@ -128,7 +128,7 @@ protected void masterOperation( .settings(request.settings()) .setPreserveExisting(request.isPreserveExisting()) .ackTimeout(request.timeout()) - .masterNodeTimeout(request.masterNodeTimeout()); + .masterNodeTimeout(request.clusterManagerNodeTimeout()); updateSettingsService.updateSettings(clusterStateUpdateRequest, new ActionListener() { @Override diff --git a/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsAction.java b/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsAction.java index aa26acb7e3fc5..2333a2aad6bc6 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.settings.put; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * 
Action for updating index settings diff --git a/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequest.java index fd6aac7696013..cab5f6bc58863 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequest.java @@ -35,7 +35,7 @@ import org.opensearch.action.ActionRequestValidationException; import org.opensearch.action.IndicesRequest; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.Strings; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; @@ -233,7 +233,7 @@ public boolean equals(Object o) { return false; } UpdateSettingsRequest that = (UpdateSettingsRequest) o; - return masterNodeTimeout.equals(that.masterNodeTimeout) + return clusterManagerNodeTimeout.equals(that.clusterManagerNodeTimeout) && timeout.equals(that.timeout) && Objects.equals(settings, that.settings) && Objects.equals(indicesOptions, that.indicesOptions) @@ -243,7 +243,7 @@ public boolean equals(Object o) { @Override public int hashCode() { - return Objects.hash(masterNodeTimeout, timeout, settings, indicesOptions, preserveExisting, Arrays.hashCode(indices)); + return Objects.hash(clusterManagerNodeTimeout, timeout, settings, indicesOptions, preserveExisting, Arrays.hashCode(indices)); } } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java index 459b16c2a9b7e..7501f0c7798de 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java @@ -33,8 +33,8 @@ package org.opensearch.action.admin.indices.settings.put; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchClient; import org.opensearch.common.settings.Settings; import org.opensearch.common.xcontent.XContentType; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java b/server/src/main/java/org/opensearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java index b2f2c9e5d03a3..cfe682e47f688 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java @@ -121,7 +121,7 @@ protected IndicesShardStoresResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation( + protected void clusterManagerOperation( IndicesShardStoresRequest request, ClusterState state, ActionListener listener diff --git 
a/server/src/main/java/org/opensearch/action/admin/indices/shrink/ResizeRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/shrink/ResizeRequest.java index 969263df5621a..50784e60a3f19 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/shrink/ResizeRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/shrink/ResizeRequest.java @@ -38,7 +38,7 @@ import org.opensearch.action.admin.indices.create.CreateIndexRequest; import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.common.ParseField; import org.opensearch.common.io.stream.StreamInput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/shrink/ResizeRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/shrink/ResizeRequestBuilder.java index 0dcaf1c524df5..418e83a5431ec 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/shrink/ResizeRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/shrink/ResizeRequestBuilder.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionType; import org.opensearch.action.admin.indices.create.CreateIndexRequest; import org.opensearch.action.support.ActiveShardCount; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; import org.opensearch.client.OpenSearchClient; import org.opensearch.common.settings.Settings; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/shrink/TransportResizeAction.java b/server/src/main/java/org/opensearch/action/admin/indices/shrink/TransportResizeAction.java index 7ebcafcd5549d..ba079aeb03921 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/shrink/TransportResizeAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/shrink/TransportResizeAction.java @@ -127,7 +127,7 @@ protected ClusterBlockException checkBlock(ResizeRequest request, ClusterState s } @Override - protected void masterOperation( + protected void clusterManagerOperation( final ResizeRequest resizeRequest, final ClusterState state, final ActionListener listener @@ -241,7 +241,7 @@ static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest( // applied once we took the snapshot and if somebody messes things up and switches the index read/write and adds docs we // miss the mappings for everything is corrupted and hard to debug .ackTimeout(targetIndex.timeout()) - .masterNodeTimeout(targetIndex.masterNodeTimeout()) + .masterNodeTimeout(targetIndex.clusterManagerNodeTimeout()) .settings(targetIndex.settings()) .aliases(targetIndex.aliases()) .waitForActiveShards(targetIndex.waitForActiveShards()) @@ -251,7 +251,7 @@ static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest( } @Override - protected String getMasterActionName(DiscoveryNode node) { - return super.getMasterActionName(node); + protected String getClusterManagerActionName(DiscoveryNode node) { + return super.getClusterManagerActionName(node); } } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteComponentTemplateAction.java 
b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteComponentTemplateAction.java index 78cd4b7bc19c1..38c0ce1b7faf8 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteComponentTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteComponentTemplateAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionRequestValidationException; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteComposableIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteComposableIndexTemplateAction.java index 388e3d8f80748..a91f89f55420e 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteComposableIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteComposableIndexTemplateAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionRequestValidationException; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteIndexTemplateAction.java index 5773fcf93c49e..789d03f8e8d8c 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteIndexTemplateAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.template.delete; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action for deleting an index template diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteIndexTemplateRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteIndexTemplateRequestBuilder.java index 8f272a98d57a0..036272ea0d5da 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteIndexTemplateRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/DeleteIndexTemplateRequestBuilder.java @@ -31,7 +31,7 @@ package org.opensearch.action.admin.indices.template.delete; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeOperationRequestBuilder; import org.opensearch.client.OpenSearchClient; diff --git 
a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteComponentTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteComponentTemplateAction.java index cf481480d6806..75cc8ffe05f73 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteComponentTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteComponentTemplateAction.java @@ -36,7 +36,7 @@ import org.apache.logging.log4j.Logger; import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -102,11 +102,11 @@ protected ClusterBlockException checkBlock(DeleteComponentTemplateAction.Request } @Override - protected void masterOperation( + protected void clusterManagerOperation( final DeleteComponentTemplateAction.Request request, final ClusterState state, final ActionListener listener ) { - indexTemplateService.removeComponentTemplate(request.name(), request.masterNodeTimeout(), listener); + indexTemplateService.removeComponentTemplate(request.name(), request.clusterManagerNodeTimeout(), listener); } } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteComposableIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteComposableIndexTemplateAction.java index 44a30189d8252..52464dbd90e3f 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteComposableIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteComposableIndexTemplateAction.java @@ -36,7 +36,7 @@ import org.apache.logging.log4j.Logger; import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -102,11 +102,11 @@ protected ClusterBlockException checkBlock(DeleteComposableIndexTemplateAction.R } @Override - protected void masterOperation( + protected void clusterManagerOperation( final DeleteComposableIndexTemplateAction.Request request, final ClusterState state, final ActionListener listener ) { - indexTemplateService.removeIndexTemplateV2(request.name(), request.masterNodeTimeout(), listener); + indexTemplateService.removeIndexTemplateV2(request.name(), request.clusterManagerNodeTimeout(), listener); } } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteIndexTemplateAction.java index 08fc6e4f17d5c..4459f6b09a913 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteIndexTemplateAction.java +++ 
b/server/src/main/java/org/opensearch/action/admin/indices/template/delete/TransportDeleteIndexTemplateAction.java @@ -36,7 +36,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -102,13 +102,13 @@ protected ClusterBlockException checkBlock(DeleteIndexTemplateRequest request, C } @Override - protected void masterOperation( + protected void clusterManagerOperation( final DeleteIndexTemplateRequest request, final ClusterState state, final ActionListener listener ) { indexTemplateService.removeTemplates( - new MetadataIndexTemplateService.RemoveRequest(request.name()).masterTimeout(request.masterNodeTimeout()), + new MetadataIndexTemplateService.RemoveRequest(request.name()).masterTimeout(request.clusterManagerNodeTimeout()), new MetadataIndexTemplateService.RemoveListener() { @Override public void onResponse(MetadataIndexTemplateService.RemoveResponse response) { diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComponentTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComponentTemplateAction.java index c6016ad78a681..36e3a1d0e6264 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComponentTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComponentTemplateAction.java @@ -96,7 +96,7 @@ protected ClusterBlockException checkBlock(GetComponentTemplateAction.Request re } @Override - protected void masterOperation( + protected void clusterManagerOperation( GetComponentTemplateAction.Request request, ClusterState state, ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComposableIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComposableIndexTemplateAction.java index 405dc7afc769f..327a40be64a2a 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComposableIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComposableIndexTemplateAction.java @@ -96,7 +96,7 @@ protected ClusterBlockException checkBlock(GetComposableIndexTemplateAction.Requ } @Override - protected void masterOperation( + protected void clusterManagerOperation( GetComposableIndexTemplateAction.Request request, ClusterState state, ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java index 44969022c9e06..d74ff9e309842 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java @@ -96,7 +96,7 @@ protected ClusterBlockException checkBlock(GetIndexTemplatesRequest request, Clu } @Override - protected 
void masterOperation( + protected void clusterManagerOperation( GetIndexTemplatesRequest request, ClusterState state, ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateIndexTemplateAction.java index 51a634e876886..ee2a049d5f4a5 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateIndexTemplateAction.java @@ -127,7 +127,7 @@ protected SimulateIndexTemplateResponse read(StreamInput in) throws IOException } @Override - protected void masterOperation( + protected void clusterManagerOperation( SimulateIndexTemplateRequest request, ClusterState state, ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateTemplateAction.java index 5b7395b3bc3a1..71ded5687ac72 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateTemplateAction.java @@ -113,7 +113,7 @@ protected SimulateIndexTemplateResponse read(StreamInput in) throws IOException } @Override - protected void masterOperation( + protected void clusterManagerOperation( SimulateTemplateAction.Request request, ClusterState state, ActionListener listener diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutComponentTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutComponentTemplateAction.java index 4df98a57b01f1..f94e79b685bbc 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutComponentTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutComponentTemplateAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionRequestValidationException; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import org.opensearch.cluster.metadata.ComponentTemplate; import org.opensearch.common.Nullable; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutComposableIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutComposableIndexTemplateAction.java index 1facbc137a754..7b05aaa0da711 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutComposableIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutComposableIndexTemplateAction.java @@ -36,7 +36,7 @@ import org.opensearch.action.ActionType; import org.opensearch.action.IndicesRequest; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import 
org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.metadata.ComposableIndexTemplate; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutIndexTemplateAction.java index eb21b81350fda..06a9f6fbba409 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutIndexTemplateAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.template.put; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * An action for putting an index template into the cluster state diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java index 42ff1fb2aab4c..0487ed3e690be 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.indices.template.put; import org.opensearch.action.admin.indices.alias.Alias; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeOperationRequestBuilder; import org.opensearch.client.OpenSearchClient; import org.opensearch.common.bytes.BytesReference; diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutComponentTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutComponentTemplateAction.java index 4d63b338d999d..925913c4e8d3e 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutComponentTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutComponentTemplateAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -106,7 +106,7 @@ protected ClusterBlockException checkBlock(PutComponentTemplateAction.Request re } @Override - protected void masterOperation( + protected void clusterManagerOperation( final PutComponentTemplateAction.Request request, final ClusterState state, final ActionListener listener @@ -125,7 +125,7 @@ protected void masterOperation( request.cause(), request.create(), request.name(), - request.masterNodeTimeout(), + request.clusterManagerNodeTimeout(), componentTemplate, listener ); diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutComposableIndexTemplateAction.java 
b/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutComposableIndexTemplateAction.java index 73039c85596a8..20ba5376f1add 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutComposableIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutComposableIndexTemplateAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -99,7 +99,7 @@ protected ClusterBlockException checkBlock(PutComposableIndexTemplateAction.Requ } @Override - protected void masterOperation( + protected void clusterManagerOperation( final PutComposableIndexTemplateAction.Request request, final ClusterState state, final ActionListener listener @@ -109,7 +109,7 @@ protected void masterOperation( request.cause(), request.create(), request.name(), - request.masterNodeTimeout(), + request.clusterManagerNodeTimeout(), indexTemplate, listener ); diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java index fb7696e207ca2..49d1345f52f55 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java @@ -36,7 +36,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -106,7 +106,7 @@ protected ClusterBlockException checkBlock(PutIndexTemplateRequest request, Clus } @Override - protected void masterOperation( + protected void clusterManagerOperation( final PutIndexTemplateRequest request, final ClusterState state, final ActionListener listener @@ -125,7 +125,7 @@ protected void masterOperation( .mappings(request.mappings()) .aliases(request.aliases()) .create(request.create()) - .masterTimeout(request.masterNodeTimeout()) + .masterTimeout(request.clusterManagerNodeTimeout()) .version(request.version()), new MetadataIndexTemplateService.PutListener() { diff --git a/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/TransportUpgradeSettingsAction.java b/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/TransportUpgradeSettingsAction.java index 1faec4330e16e..df0d5cf57e7de 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/TransportUpgradeSettingsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/TransportUpgradeSettingsAction.java @@ -37,7 +37,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import 
org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ack.ClusterStateUpdateResponse; @@ -102,14 +102,14 @@ protected AcknowledgedResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation( + protected void clusterManagerOperation( final UpgradeSettingsRequest request, final ClusterState state, final ActionListener listener ) { UpgradeSettingsClusterStateUpdateRequest clusterStateUpdateRequest = new UpgradeSettingsClusterStateUpdateRequest().ackTimeout( request.timeout() - ).versions(request.versions()).masterNodeTimeout(request.masterNodeTimeout()); + ).versions(request.versions()).masterNodeTimeout(request.clusterManagerNodeTimeout()); updateSettingsService.upgradeIndexSettings(clusterStateUpdateRequest, new ActionListener() { @Override diff --git a/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/UpgradeSettingsAction.java b/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/UpgradeSettingsAction.java index 4c42b4abbf678..05944e781d109 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/UpgradeSettingsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/UpgradeSettingsAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.admin.indices.upgrade.post; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action for upgrading index settings diff --git a/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/UpgradeSettingsRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/UpgradeSettingsRequest.java index 0fe8e83e30258..d6b784e44befb 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/UpgradeSettingsRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/upgrade/post/UpgradeSettingsRequest.java @@ -34,7 +34,7 @@ import org.opensearch.Version; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.collect.Tuple; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/bulk/TransportBulkAction.java b/server/src/main/java/org/opensearch/action/bulk/TransportBulkAction.java index 1fabc3b1a7ea8..de285983b846b 100644 --- a/server/src/main/java/org/opensearch/action/bulk/TransportBulkAction.java +++ b/server/src/main/java/org/opensearch/action/bulk/TransportBulkAction.java @@ -454,7 +454,7 @@ void createIndex(String index, TimeValue timeout, Version minNodeVersion, Action CreateIndexRequest createIndexRequest = new CreateIndexRequest(); createIndexRequest.index(index); createIndexRequest.cause("auto(bulk api)"); - createIndexRequest.masterNodeTimeout(timeout); + createIndexRequest.clusterManagerNodeTimeout(timeout); if (minNodeVersion.onOrAfter(LegacyESVersion.V_7_8_0)) { 
client.execute(AutoCreateAction.INSTANCE, createIndexRequest, listener); } else { diff --git a/server/src/main/java/org/opensearch/action/ingest/DeletePipelineAction.java b/server/src/main/java/org/opensearch/action/ingest/DeletePipelineAction.java index 82bb78a9b89d6..6017be9747912 100644 --- a/server/src/main/java/org/opensearch/action/ingest/DeletePipelineAction.java +++ b/server/src/main/java/org/opensearch/action/ingest/DeletePipelineAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.ingest; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action to delete a pipeline diff --git a/server/src/main/java/org/opensearch/action/ingest/DeletePipelineRequest.java b/server/src/main/java/org/opensearch/action/ingest/DeletePipelineRequest.java index 8e770d49d6771..0bd102849eee8 100644 --- a/server/src/main/java/org/opensearch/action/ingest/DeletePipelineRequest.java +++ b/server/src/main/java/org/opensearch/action/ingest/DeletePipelineRequest.java @@ -33,7 +33,7 @@ package org.opensearch.action.ingest; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/ingest/DeletePipelineRequestBuilder.java b/server/src/main/java/org/opensearch/action/ingest/DeletePipelineRequestBuilder.java index d26f0ba509ec8..6a2eb494e8d3f 100644 --- a/server/src/main/java/org/opensearch/action/ingest/DeletePipelineRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/ingest/DeletePipelineRequestBuilder.java @@ -33,7 +33,7 @@ package org.opensearch.action.ingest; import org.opensearch.action.ActionRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchClient; /** diff --git a/server/src/main/java/org/opensearch/action/ingest/DeletePipelineTransportAction.java b/server/src/main/java/org/opensearch/action/ingest/DeletePipelineTransportAction.java index a490c68401466..9085b2347765c 100644 --- a/server/src/main/java/org/opensearch/action/ingest/DeletePipelineTransportAction.java +++ b/server/src/main/java/org/opensearch/action/ingest/DeletePipelineTransportAction.java @@ -34,7 +34,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; @@ -88,7 +88,7 @@ protected AcknowledgedResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(DeletePipelineRequest request, ClusterState state, ActionListener listener) + protected void clusterManagerOperation(DeletePipelineRequest request, ClusterState state, ActionListener listener) throws Exception { ingestService.delete(request, listener); } diff --git a/server/src/main/java/org/opensearch/action/ingest/GetPipelineTransportAction.java 
b/server/src/main/java/org/opensearch/action/ingest/GetPipelineTransportAction.java index 3a5493bfa4b36..5a59c8255361e 100644 --- a/server/src/main/java/org/opensearch/action/ingest/GetPipelineTransportAction.java +++ b/server/src/main/java/org/opensearch/action/ingest/GetPipelineTransportAction.java @@ -85,7 +85,7 @@ protected GetPipelineResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(GetPipelineRequest request, ClusterState state, ActionListener listener) + protected void clusterManagerOperation(GetPipelineRequest request, ClusterState state, ActionListener listener) throws Exception { listener.onResponse(new GetPipelineResponse(IngestService.getPipelines(state, request.getIds()))); } diff --git a/server/src/main/java/org/opensearch/action/ingest/PutPipelineAction.java b/server/src/main/java/org/opensearch/action/ingest/PutPipelineAction.java index be47bff8f4e92..1fcbd783d246b 100644 --- a/server/src/main/java/org/opensearch/action/ingest/PutPipelineAction.java +++ b/server/src/main/java/org/opensearch/action/ingest/PutPipelineAction.java @@ -33,7 +33,7 @@ package org.opensearch.action.ingest; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; /** * Transport action to put a new pipeline diff --git a/server/src/main/java/org/opensearch/action/ingest/PutPipelineRequest.java b/server/src/main/java/org/opensearch/action/ingest/PutPipelineRequest.java index fcba2e720e8c6..d5fbaa46810f7 100644 --- a/server/src/main/java/org/opensearch/action/ingest/PutPipelineRequest.java +++ b/server/src/main/java/org/opensearch/action/ingest/PutPipelineRequest.java @@ -33,7 +33,7 @@ package org.opensearch.action.ingest; import org.opensearch.action.ActionRequestValidationException; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.common.bytes.BytesReference; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/main/java/org/opensearch/action/ingest/PutPipelineRequestBuilder.java b/server/src/main/java/org/opensearch/action/ingest/PutPipelineRequestBuilder.java index 57c29147f1176..fec2cdef089e4 100644 --- a/server/src/main/java/org/opensearch/action/ingest/PutPipelineRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/ingest/PutPipelineRequestBuilder.java @@ -33,7 +33,7 @@ package org.opensearch.action.ingest; import org.opensearch.action.ActionRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.OpenSearchClient; import org.opensearch.common.bytes.BytesReference; import org.opensearch.common.xcontent.XContentType; diff --git a/server/src/main/java/org/opensearch/action/ingest/PutPipelineTransportAction.java b/server/src/main/java/org/opensearch/action/ingest/PutPipelineTransportAction.java index c294321d39c43..61a2deedfd511 100644 --- a/server/src/main/java/org/opensearch/action/ingest/PutPipelineTransportAction.java +++ b/server/src/main/java/org/opensearch/action/ingest/PutPipelineTransportAction.java @@ -36,7 +36,7 @@ import org.opensearch.action.admin.cluster.node.info.NodeInfo; import org.opensearch.action.admin.cluster.node.info.NodesInfoRequest; import 
org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; import org.opensearch.client.OriginSettingClient; import org.opensearch.client.node.NodeClient; @@ -103,7 +103,7 @@ protected AcknowledgedResponse read(StreamInput in) throws IOException { } @Override - protected void masterOperation(PutPipelineRequest request, ClusterState state, ActionListener listener) + protected void clusterManagerOperation(PutPipelineRequest request, ClusterState state, ActionListener listener) throws Exception { NodesInfoRequest nodesInfoRequest = new NodesInfoRequest(); nodesInfoRequest.clear().addMetric(NodesInfoRequest.Metric.INGEST.metricName()); diff --git a/server/src/main/java/org/opensearch/action/search/AbstractSearchAsyncAction.java b/server/src/main/java/org/opensearch/action/search/AbstractSearchAsyncAction.java index 1d6d3f284d546..1597b31e89871 100644 --- a/server/src/main/java/org/opensearch/action/search/AbstractSearchAsyncAction.java +++ b/server/src/main/java/org/opensearch/action/search/AbstractSearchAsyncAction.java @@ -456,7 +456,11 @@ private void onShardFailure(final int shardIndex, @Nullable SearchShardTarget sh } final int totalOps = this.totalOps.incrementAndGet(); if (totalOps == expectedTotalOps) { - onPhaseDone(); + try { + onPhaseDone(); + } catch (final Exception ex) { + onPhaseFailure(this, "The phase has failed", ex); + } } else if (totalOps > expectedTotalOps) { throw new AssertionError( "unexpected higher total ops [" + totalOps + "] compared to expected [" + expectedTotalOps + "]", @@ -561,7 +565,11 @@ private void successfulShardExecution(SearchShardIterator shardsIt) { } final int xTotalOps = totalOps.addAndGet(remainingOpsOnIterator); if (xTotalOps == expectedTotalOps) { - onPhaseDone(); + try { + onPhaseDone(); + } catch (final Exception ex) { + onPhaseFailure(this, "The phase has failed", ex); + } } else if (xTotalOps > expectedTotalOps) { throw new AssertionError( "unexpected higher total ops [" + xTotalOps + "] compared to expected [" + expectedTotalOps + "]", diff --git a/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java b/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java index ebb0f21d6fe16..1ca477942cdf6 100644 --- a/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java +++ b/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java @@ -65,6 +65,7 @@ import org.opensearch.common.settings.Setting; import org.opensearch.common.settings.Setting.Property; import org.opensearch.common.unit.TimeValue; +import org.opensearch.common.util.concurrent.AtomicArray; import org.opensearch.common.util.concurrent.CountDown; import org.opensearch.index.Index; import org.opensearch.index.query.Rewriteable; @@ -297,6 +298,81 @@ void executeOnShardTarget( ); } + public void executeRequest( + Task task, + SearchRequest searchRequest, + String actionName, + boolean includeSearchContext, + SinglePhaseSearchAction phaseSearchAction, + ActionListener listener + ) { + executeRequest(task, searchRequest, new SearchAsyncActionProvider() { + @Override + public AbstractSearchAsyncAction asyncSearchAction( + SearchTask task, + SearchRequest searchRequest, + Executor executor, + GroupShardsIterator shardsIts, + SearchTimeProvider timeProvider, + BiFunction connectionLookup, + ClusterState 
clusterState, + Map aliasFilter, + Map concreteIndexBoosts, + Map> indexRoutings, + ActionListener listener, + boolean preFilter, + ThreadPool threadPool, + SearchResponse.Clusters clusters + ) { + return new AbstractSearchAsyncAction( + actionName, + logger, + searchTransportService, + connectionLookup, + aliasFilter, + concreteIndexBoosts, + indexRoutings, + executor, + searchRequest, + listener, + shardsIts, + timeProvider, + clusterState, + task, + new ArraySearchPhaseResults<>(shardsIts.size()), + searchRequest.getMaxConcurrentShardRequests(), + clusters + ) { + @Override + protected void executePhaseOnShard( + SearchShardIterator shardIt, + SearchShardTarget shard, + SearchActionListener listener + ) { + final Transport.Connection connection = getConnection(shard.getClusterAlias(), shard.getNodeId()); + phaseSearchAction.executeOnShardTarget(task, shard, connection, listener); + } + + @Override + protected SearchPhase getNextPhase(SearchPhaseResults results, SearchPhaseContext context) { + return new SearchPhase(getName()) { + @Override + public void run() { + final AtomicArray atomicArray = results.getAtomicArray(); + sendSearchResponse(InternalSearchResponse.empty(), atomicArray); + } + }; + } + + @Override + boolean buildPointInTimeFromSearchResults() { + return includeSearchContext; + } + }; + } + }, listener); + } + private void executeRequest( Task task, SearchRequest searchRequest, diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedRequest.java b/server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedRequest.java deleted file mode 100644 index b67356d2567b5..0000000000000 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedRequest.java +++ /dev/null @@ -1,105 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -/* - * Modifications Copyright OpenSearch Contributors. See - * GitHub history for details. - */ - -package org.opensearch.action.support.clustermanager; - -import org.opensearch.cluster.ack.AckedRequest; -import org.opensearch.common.io.stream.StreamInput; -import org.opensearch.common.io.stream.StreamOutput; -import org.opensearch.common.unit.TimeValue; - -import java.io.IOException; - -import static org.opensearch.common.unit.TimeValue.timeValueSeconds; - -/** - * Abstract class that allows to mark action requests that support acknowledgements. - * Facilitates consistency across different api. 
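The AbstractSearchAsyncAction hunks above stop calling onPhaseDone() bare: the final-phase transition is wrapped in try/catch so that an exception thrown while moving to the next phase is reported through onPhaseFailure(...) instead of escaping the shard-accounting callback. A minimal standalone sketch of that guard pattern, using hypothetical names (PhaseCoordinator, PhaseListener) rather than the real search internals:

import java.util.concurrent.atomic.AtomicInteger;

// Simplified model of the guard above: when the last shard operation completes,
// run the phase transition inside try/catch and convert any exception into a
// phase failure instead of letting it escape the accounting callback.
final class PhaseCoordinator {

    interface PhaseListener {
        void onPhaseDone() throws Exception;                  // may throw while starting the next phase
        void onPhaseFailure(String reason, Exception cause);
    }

    private final AtomicInteger completedOps = new AtomicInteger();
    private final int expectedOps;
    private final PhaseListener listener;

    PhaseCoordinator(int expectedOps, PhaseListener listener) {
        this.expectedOps = expectedOps;
        this.listener = listener;
    }

    void onShardResult() {
        if (completedOps.incrementAndGet() == expectedOps) {
            try {
                listener.onPhaseDone();
            } catch (final Exception ex) {
                listener.onPhaseFailure("The phase has failed", ex);
            }
        }
    }
}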
- * - * @opensearch.internal - */ -public abstract class AcknowledgedRequest> extends ClusterManagerNodeRequest - implements - AckedRequest { - - public static final TimeValue DEFAULT_ACK_TIMEOUT = timeValueSeconds(30); - - protected TimeValue timeout = DEFAULT_ACK_TIMEOUT; - - protected AcknowledgedRequest() {} - - protected AcknowledgedRequest(StreamInput in) throws IOException { - super(in); - this.timeout = in.readTimeValue(); - } - - /** - * Allows to set the timeout - * @param timeout timeout as a string (e.g. 1s) - * @return the request itself - */ - @SuppressWarnings("unchecked") - public final Request timeout(String timeout) { - this.timeout = TimeValue.parseTimeValue(timeout, this.timeout, getClass().getSimpleName() + ".timeout"); - return (Request) this; - } - - /** - * Allows to set the timeout - * @param timeout timeout as a {@link TimeValue} - * @return the request itself - */ - @SuppressWarnings("unchecked") - public final Request timeout(TimeValue timeout) { - this.timeout = timeout; - return (Request) this; - } - - /** - * Returns the current timeout - * @return the current timeout as a {@link TimeValue} - */ - public final TimeValue timeout() { - return timeout; - } - - @Override - public TimeValue ackTimeout() { - return timeout; - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeTimeValue(timeout); - } - -} diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedRequestBuilder.java b/server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedRequestBuilder.java deleted file mode 100644 index fa957f159ec9d..0000000000000 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedRequestBuilder.java +++ /dev/null @@ -1,73 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -/* - * Modifications Copyright OpenSearch Contributors. See - * GitHub history for details. 
- */ - -package org.opensearch.action.support.clustermanager; - -import org.opensearch.action.ActionType; -import org.opensearch.client.OpenSearchClient; -import org.opensearch.common.unit.TimeValue; - -/** - * Base request builder for cluster-manager node operations that support acknowledgements - * - * @opensearch.internal - */ -public abstract class AcknowledgedRequestBuilder< - Request extends AcknowledgedRequest, - Response extends AcknowledgedResponse, - RequestBuilder extends AcknowledgedRequestBuilder> extends ClusterManagerNodeOperationRequestBuilder< - Request, - Response, - RequestBuilder> { - - protected AcknowledgedRequestBuilder(OpenSearchClient client, ActionType action, Request request) { - super(client, action, request); - } - - /** - * Sets the maximum wait for acknowledgement from other nodes - */ - @SuppressWarnings("unchecked") - public RequestBuilder setTimeout(TimeValue timeout) { - request.timeout(timeout); - return (RequestBuilder) this; - } - - /** - * Timeout to wait for the operation to be acknowledged by current cluster nodes. Defaults - * to {@code 10s}. - */ - @SuppressWarnings("unchecked") - public RequestBuilder setTimeout(String timeout) { - request.timeout(timeout); - return (RequestBuilder) this; - } -} diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedResponse.java b/server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedResponse.java deleted file mode 100644 index 1db116ffaf74a..0000000000000 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/AcknowledgedResponse.java +++ /dev/null @@ -1,149 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -/* - * Modifications Copyright OpenSearch Contributors. See - * GitHub history for details. 
- */ - -package org.opensearch.action.support.clustermanager; - -import org.opensearch.action.ActionResponse; -import org.opensearch.common.ParseField; -import org.opensearch.common.io.stream.StreamInput; -import org.opensearch.common.io.stream.StreamOutput; -import org.opensearch.common.xcontent.ConstructingObjectParser; -import org.opensearch.common.xcontent.ObjectParser; -import org.opensearch.common.xcontent.ToXContentObject; -import org.opensearch.common.xcontent.XContentBuilder; -import org.opensearch.common.xcontent.XContentParser; - -import java.io.IOException; -import java.util.Objects; - -import static org.opensearch.common.xcontent.ConstructingObjectParser.constructorArg; - -/** - * A response that indicates that a request has been acknowledged - * - * @opensearch.internal - */ -public class AcknowledgedResponse extends ActionResponse implements ToXContentObject { - - private static final ParseField ACKNOWLEDGED = new ParseField("acknowledged"); - - protected static void declareAcknowledgedField(ConstructingObjectParser objectParser) { - objectParser.declareField( - constructorArg(), - (parser, context) -> parser.booleanValue(), - ACKNOWLEDGED, - ObjectParser.ValueType.BOOLEAN - ); - } - - protected boolean acknowledged; - - public AcknowledgedResponse(StreamInput in) throws IOException { - super(in); - acknowledged = in.readBoolean(); - } - - public AcknowledgedResponse(StreamInput in, boolean readAcknowledged) throws IOException { - super(in); - if (readAcknowledged) { - acknowledged = in.readBoolean(); - } - } - - public AcknowledgedResponse(boolean acknowledged) { - this.acknowledged = acknowledged; - } - - /** - * Returns whether the response is acknowledged or not - * @return true if the response is acknowledged, false otherwise - */ - public final boolean isAcknowledged() { - return acknowledged; - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - out.writeBoolean(acknowledged); - } - - @Override - public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(); - builder.field(ACKNOWLEDGED.getPreferredName(), isAcknowledged()); - addCustomFields(builder, params); - builder.endObject(); - return builder; - } - - protected void addCustomFields(XContentBuilder builder, Params params) throws IOException { - - } - - /** - * A generic parser that simply parses the acknowledged flag - */ - private static final ConstructingObjectParser ACKNOWLEDGED_FLAG_PARSER = new ConstructingObjectParser<>( - "acknowledged_flag", - true, - args -> (Boolean) args[0] - ); - - static { - ACKNOWLEDGED_FLAG_PARSER.declareField( - constructorArg(), - (parser, context) -> parser.booleanValue(), - ACKNOWLEDGED, - ObjectParser.ValueType.BOOLEAN - ); - } - - public static AcknowledgedResponse fromXContent(XContentParser parser) throws IOException { - return new AcknowledgedResponse(ACKNOWLEDGED_FLAG_PARSER.apply(parser, null)); - } - - @Override - public boolean equals(Object o) { - if (this == o) { - return true; - } - if (o == null || getClass() != o.getClass()) { - return false; - } - AcknowledgedResponse that = (AcknowledgedResponse) o; - return isAcknowledged() == that.isAcknowledged(); - } - - @Override - public int hashCode() { - return Objects.hash(isAcknowledged()); - } -} diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/ClusterManagerNodeOperationRequestBuilder.java 
b/server/src/main/java/org/opensearch/action/support/clustermanager/ClusterManagerNodeOperationRequestBuilder.java index 6d8509a0671f2..4f60a75c5dd22 100644 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/ClusterManagerNodeOperationRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/support/clustermanager/ClusterManagerNodeOperationRequestBuilder.java @@ -58,18 +58,39 @@ protected ClusterManagerNodeOperationRequestBuilder(OpenSearchClient client, Act * Sets the cluster-manager node timeout in case the cluster-manager has not yet been discovered. */ @SuppressWarnings("unchecked") - public final RequestBuilder setMasterNodeTimeout(TimeValue timeout) { - request.masterNodeTimeout(timeout); + public final RequestBuilder setClusterManagerNodeTimeout(TimeValue timeout) { + request.clusterManagerNodeTimeout(timeout); return (RequestBuilder) this; } /** * Sets the cluster-manager node timeout in case the cluster-manager has not yet been discovered. + * + * @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #setClusterManagerNodeTimeout(TimeValue)} */ @SuppressWarnings("unchecked") - public final RequestBuilder setMasterNodeTimeout(String timeout) { - request.masterNodeTimeout(timeout); + @Deprecated + public final RequestBuilder setMasterNodeTimeout(TimeValue timeout) { + return setClusterManagerNodeTimeout(timeout); + } + + /** + * Sets the cluster-manager node timeout in case the cluster-manager has not yet been discovered. + */ + @SuppressWarnings("unchecked") + public final RequestBuilder setClusterManagerNodeTimeout(String timeout) { + request.clusterManagerNodeTimeout(timeout); return (RequestBuilder) this; } + /** + * Sets the cluster-manager node timeout in case the cluster-manager has not yet been discovered. 
+ * + * @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #setClusterManagerNodeTimeout(String)} + */ + @SuppressWarnings("unchecked") + @Deprecated + public final RequestBuilder setMasterNodeTimeout(String timeout) { + return setClusterManagerNodeTimeout(timeout); + } } diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/ClusterManagerNodeRequest.java b/server/src/main/java/org/opensearch/action/support/clustermanager/ClusterManagerNodeRequest.java index 9cce7562a988b..9d8a79cfed11d 100644 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/ClusterManagerNodeRequest.java +++ b/server/src/main/java/org/opensearch/action/support/clustermanager/ClusterManagerNodeRequest.java @@ -46,40 +46,77 @@ */ public abstract class ClusterManagerNodeRequest> extends ActionRequest { - public static final TimeValue DEFAULT_MASTER_NODE_TIMEOUT = TimeValue.timeValueSeconds(30); + public static final TimeValue DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT = TimeValue.timeValueSeconds(30); - protected TimeValue masterNodeTimeout = DEFAULT_MASTER_NODE_TIMEOUT; + /** @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT} */ + @Deprecated + public static final TimeValue DEFAULT_MASTER_NODE_TIMEOUT = DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT; + + protected TimeValue clusterManagerNodeTimeout = DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT; + + /** @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #clusterManagerNodeTimeout} */ + @Deprecated + protected TimeValue masterNodeTimeout = clusterManagerNodeTimeout; protected ClusterManagerNodeRequest() {} protected ClusterManagerNodeRequest(StreamInput in) throws IOException { super(in); - masterNodeTimeout = in.readTimeValue(); + clusterManagerNodeTimeout = in.readTimeValue(); } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeTimeValue(masterNodeTimeout); + out.writeTimeValue(clusterManagerNodeTimeout); } /** * A timeout value in case the cluster-manager has not been discovered yet or disconnected. */ @SuppressWarnings("unchecked") - public final Request masterNodeTimeout(TimeValue timeout) { - this.masterNodeTimeout = timeout; + public final Request clusterManagerNodeTimeout(TimeValue timeout) { + this.clusterManagerNodeTimeout = timeout; return (Request) this; } /** * A timeout value in case the cluster-manager has not been discovered yet or disconnected. + * + * @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #clusterManagerNodeTimeout(TimeValue)} */ + @SuppressWarnings("unchecked") + @Deprecated + public final Request masterNodeTimeout(TimeValue timeout) { + return clusterManagerNodeTimeout(timeout); + } + + /** + * A timeout value in case the cluster-manager has not been discovered yet or disconnected. + */ + public final Request clusterManagerNodeTimeout(String timeout) { + return clusterManagerNodeTimeout( + TimeValue.parseTimeValue(timeout, null, getClass().getSimpleName() + ".clusterManagerNodeTimeout") + ); + } + + /** + * A timeout value in case the cluster-manager has not been discovered yet or disconnected. 
+ * + * @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #clusterManagerNodeTimeout(String)} + */ + @Deprecated public final Request masterNodeTimeout(String timeout) { - return masterNodeTimeout(TimeValue.parseTimeValue(timeout, null, getClass().getSimpleName() + ".masterNodeTimeout")); + return clusterManagerNodeTimeout(timeout); + } + + public final TimeValue clusterManagerNodeTimeout() { + return this.clusterManagerNodeTimeout; } + /** @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #clusterManagerNodeTimeout()} */ + @Deprecated public final TimeValue masterNodeTimeout() { - return this.masterNodeTimeout; + return clusterManagerNodeTimeout(); } } diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/ShardsAcknowledgedResponse.java b/server/src/main/java/org/opensearch/action/support/clustermanager/ShardsAcknowledgedResponse.java deleted file mode 100644 index dc24adcfa0ca1..0000000000000 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/ShardsAcknowledgedResponse.java +++ /dev/null @@ -1,117 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -/* - * Modifications Copyright OpenSearch Contributors. See - * GitHub history for details. 
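ClusterManagerNodeRequest now exposes clusterManagerNodeTimeout(TimeValue) and clusterManagerNodeTimeout(String), with the master* variants kept only as deprecated delegates, so callers migrate by renaming the setter; wire serialization is unchanged. A sketch of the caller-side change, assuming a CreateIndexRequest (any subclass of ClusterManagerNodeRequest behaves the same; the index name is illustrative):

import org.opensearch.action.admin.indices.create.CreateIndexRequest;
import org.opensearch.common.unit.TimeValue;

public final class TimeoutMigrationExample {

    public static CreateIndexRequest buildRequest() {
        CreateIndexRequest request = new CreateIndexRequest("logs-000001");
        // Before (still compiles, now @Deprecated):
        // request.masterNodeTimeout(TimeValue.timeValueSeconds(30));
        // After:
        request.clusterManagerNodeTimeout(TimeValue.timeValueSeconds(30));
        // A String overload is also provided and parses values such as "30s".
        request.clusterManagerNodeTimeout("30s");
        return request;
    }
}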
- */ - -package org.opensearch.action.support.clustermanager; - -import org.opensearch.common.ParseField; -import org.opensearch.common.io.stream.StreamInput; -import org.opensearch.common.io.stream.StreamOutput; -import org.opensearch.common.xcontent.ConstructingObjectParser; -import org.opensearch.common.xcontent.ObjectParser; -import org.opensearch.common.xcontent.XContentBuilder; - -import java.io.IOException; -import java.util.Objects; - -import static org.opensearch.common.xcontent.ConstructingObjectParser.constructorArg; - -/** - * Transport response for shard acknowledgements - * - * @opensearch.internal - */ -public abstract class ShardsAcknowledgedResponse extends AcknowledgedResponse { - - protected static final ParseField SHARDS_ACKNOWLEDGED = new ParseField("shards_acknowledged"); - - protected static void declareAcknowledgedAndShardsAcknowledgedFields( - ConstructingObjectParser objectParser - ) { - declareAcknowledgedField(objectParser); - objectParser.declareField( - constructorArg(), - (parser, context) -> parser.booleanValue(), - SHARDS_ACKNOWLEDGED, - ObjectParser.ValueType.BOOLEAN - ); - } - - private final boolean shardsAcknowledged; - - protected ShardsAcknowledgedResponse(StreamInput in, boolean readShardsAcknowledged) throws IOException { - super(in); - if (readShardsAcknowledged) { - this.shardsAcknowledged = in.readBoolean(); - } else { - this.shardsAcknowledged = false; - } - } - - protected ShardsAcknowledgedResponse(boolean acknowledged, boolean shardsAcknowledged) { - super(acknowledged); - assert acknowledged || shardsAcknowledged == false; // if it's not acknowledged, then shards acked should be false too - this.shardsAcknowledged = shardsAcknowledged; - } - - /** - * Returns true if the requisite number of shards were started before - * returning from the index creation operation. If {@link #isAcknowledged()} - * is false, then this also returns false. 
- */ - public boolean isShardsAcknowledged() { - return shardsAcknowledged; - } - - protected void writeShardsAcknowledged(StreamOutput out) throws IOException { - out.writeBoolean(shardsAcknowledged); - } - - @Override - protected void addCustomFields(XContentBuilder builder, Params params) throws IOException { - builder.field(SHARDS_ACKNOWLEDGED.getPreferredName(), isShardsAcknowledged()); - } - - @Override - public boolean equals(Object o) { - if (super.equals(o)) { - ShardsAcknowledgedResponse that = (ShardsAcknowledgedResponse) o; - return isShardsAcknowledged() == that.isShardsAcknowledged(); - } - return false; - } - - @Override - public int hashCode() { - return Objects.hash(super.hashCode(), isShardsAcknowledged()); - } - -} diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeAction.java b/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeAction.java index 507a019390ff9..f0169c2e19ac7 100644 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeAction.java +++ b/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeAction.java @@ -117,13 +117,32 @@ protected TransportClusterManagerNodeAction( protected abstract Response read(StreamInput in) throws IOException; - protected abstract void masterOperation(Request request, ClusterState state, ActionListener listener) throws Exception; + protected abstract void clusterManagerOperation(Request request, ClusterState state, ActionListener listener) + throws Exception; + + // Change the method to be concrete after deprecation so that existing class can override it while new class don't have to. + /** @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #clusterManagerOperation(ClusterManagerNodeRequest, ClusterState, ActionListener)} */ + @Deprecated + protected void masterOperation(Request request, ClusterState state, ActionListener listener) throws Exception { + clusterManagerOperation(request, state, listener); + }; + + /** + * Override this operation if access to the task parameter is needed + */ + protected void clusterManagerOperation(Task task, Request request, ClusterState state, ActionListener listener) + throws Exception { + clusterManagerOperation(request, state, listener); + } /** * Override this operation if access to the task parameter is needed + * + * @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #clusterManagerOperation(Task, ClusterManagerNodeRequest, ClusterState, ActionListener)} */ + @Deprecated protected void masterOperation(Task task, Request request, ClusterState state, ActionListener listener) throws Exception { - masterOperation(request, state, listener); + clusterManagerOperation(task, request, state, listener); } protected boolean localExecute(Request request) { @@ -201,7 +220,7 @@ protected void doStart(ClusterState clusterState) { } }); threadPool.executor(executor) - .execute(ActionRunnable.wrap(delegate, l -> masterOperation(task, request, clusterState, l))); + .execute(ActionRunnable.wrap(delegate, l -> clusterManagerOperation(task, request, clusterState, l))); } } else { if (nodes.getMasterNode() == null) { @@ -209,7 +228,7 @@ protected void doStart(ClusterState clusterState) { retryOnMasterChange(clusterState, null); } else { DiscoveryNode clusterManagerNode = nodes.getMasterNode(); - final String actionName = getMasterActionName(clusterManagerNode); + final String 
actionName = getClusterManagerActionName(clusterManagerNode); transportService.sendRequest( clusterManagerNode, actionName, @@ -248,7 +267,8 @@ private void retryOnMasterChange(ClusterState state, Throwable failure) { private void retry(ClusterState state, final Throwable failure, final Predicate statePredicate) { if (observer == null) { - final long remainingTimeoutMS = request.masterNodeTimeout().millis() - (threadPool.relativeTimeInMillis() - startTime); + final long remainingTimeoutMS = request.clusterManagerNodeTimeout().millis() - (threadPool.relativeTimeInMillis() + - startTime); if (remainingTimeoutMS <= 0) { logger.debug(() -> new ParameterizedMessage("timed out before retrying [{}] after failure", actionName), failure); listener.onFailure(new MasterNotDiscoveredException(failure)); @@ -289,7 +309,18 @@ public void onTimeout(TimeValue timeout) { * Allows to conditionally return a different cluster-manager node action name in the case an action gets renamed. * This mainly for backwards compatibility should be used rarely */ - protected String getMasterActionName(DiscoveryNode node) { + protected String getClusterManagerActionName(DiscoveryNode node) { return actionName; } + + /** + * Allows to conditionally return a different cluster-manager node action name in the case an action gets renamed. + * This mainly for backwards compatibility should be used rarely + * + * @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #getClusterManagerActionName(DiscoveryNode)} + */ + @Deprecated + protected String getMasterActionName(DiscoveryNode node) { + return getClusterManagerActionName(node); + } } diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/info/TransportClusterInfoAction.java b/server/src/main/java/org/opensearch/action/support/clustermanager/info/TransportClusterInfoAction.java index caf89fc7b6c8e..7cebd2dac813b 100644 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/info/TransportClusterInfoAction.java +++ b/server/src/main/java/org/opensearch/action/support/clustermanager/info/TransportClusterInfoAction.java @@ -77,15 +77,29 @@ protected ClusterBlockException checkBlock(Request request, ClusterState state) } @Override - protected final void masterOperation(final Request request, final ClusterState state, final ActionListener listener) { + protected final void clusterManagerOperation(final Request request, final ClusterState state, final ActionListener listener) { String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(state, request); - doMasterOperation(request, concreteIndices, state, listener); + doClusterManagerOperation(request, concreteIndices, state, listener); } - protected abstract void doMasterOperation( + /** @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #clusterManagerOperation(ClusterInfoRequest, ClusterState, ActionListener)} */ + @Deprecated + @Override + protected final void masterOperation(final Request request, final ClusterState state, final ActionListener listener) { + clusterManagerOperation(request, state, listener); + } + + protected abstract void doClusterManagerOperation( Request request, String[] concreteIndices, ClusterState state, ActionListener listener ); + + // Change the method to be concrete after deprecation so that existing class can override it while new class don't have to. 
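The "Change the method to be concrete after deprecation" comments above describe the shim used throughout this patch: the new clusterManagerOperation(...) becomes the abstract entry point, the old masterOperation(...) becomes a concrete deprecated delegate, and internal dispatch targets the new name. A condensed standalone model of that pattern; the class and method shapes here are hypothetical (the real signatures also take ClusterState, Task and an ActionListener):

// Condensed model of the shim: the new method name is the abstract entry point,
// the old name is a concrete deprecated delegate, and dispatch uses the new name.
abstract class BaseClusterAction<Req> {

    protected abstract void clusterManagerOperation(Req request) throws Exception;

    /** @deprecated retained so existing callers of the old name keep working */
    @Deprecated
    protected void masterOperation(Req request) throws Exception {
        clusterManagerOperation(request);
    }

    // Internal dispatch now targets the new name, mirroring doStart() above.
    final void run(Req request) throws Exception {
        clusterManagerOperation(request);
    }
}

// A new-style subclass overrides only the new method; "GetWidgetsAction" is hypothetical.
final class GetWidgetsAction extends BaseClusterAction<String> {
    @Override
    protected void clusterManagerOperation(String request) {
        // read cluster state, build a response, notify the listener, etc.
    }
}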
+ /** @deprecated As of 2.1, because supporting inclusive language, replaced by {@link #doClusterManagerOperation(ClusterInfoRequest, String[], ClusterState, ActionListener)} */ + @Deprecated + protected void doMasterOperation(Request request, String[] concreteIndices, ClusterState state, ActionListener listener) { + doClusterManagerOperation(request, concreteIndices, state, listener); + } } diff --git a/server/src/main/java/org/opensearch/action/support/master/AcknowledgedRequest.java b/server/src/main/java/org/opensearch/action/support/master/AcknowledgedRequest.java index 857f4dc26a111..7f665b4e658a1 100644 --- a/server/src/main/java/org/opensearch/action/support/master/AcknowledgedRequest.java +++ b/server/src/main/java/org/opensearch/action/support/master/AcknowledgedRequest.java @@ -31,25 +31,75 @@ package org.opensearch.action.support.master; -import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; +import org.opensearch.cluster.ack.AckedRequest; import org.opensearch.common.io.stream.StreamInput; +import org.opensearch.common.io.stream.StreamOutput; +import org.opensearch.common.unit.TimeValue; import java.io.IOException; +import static org.opensearch.common.unit.TimeValue.timeValueSeconds; + /** * Abstract class that allows to mark action requests that support acknowledgements. * Facilitates consistency across different api. * * @opensearch.internal */ -public abstract class AcknowledgedRequest> extends - org.opensearch.action.support.clustermanager.AcknowledgedRequest { +public abstract class AcknowledgedRequest> extends MasterNodeRequest + implements + AckedRequest { - protected AcknowledgedRequest() { - super(); - } + public static final TimeValue DEFAULT_ACK_TIMEOUT = timeValueSeconds(30); + + protected TimeValue timeout = DEFAULT_ACK_TIMEOUT; + + protected AcknowledgedRequest() {} protected AcknowledgedRequest(StreamInput in) throws IOException { super(in); + this.timeout = in.readTimeValue(); + } + + /** + * Allows to set the timeout + * @param timeout timeout as a string (e.g. 
1s) + * @return the request itself + */ + @SuppressWarnings("unchecked") + public final Request timeout(String timeout) { + this.timeout = TimeValue.parseTimeValue(timeout, this.timeout, getClass().getSimpleName() + ".timeout"); + return (Request) this; + } + + /** + * Allows to set the timeout + * @param timeout timeout as a {@link TimeValue} + * @return the request itself + */ + @SuppressWarnings("unchecked") + public final Request timeout(TimeValue timeout) { + this.timeout = timeout; + return (Request) this; } + + /** + * Returns the current timeout + * @return the current timeout as a {@link TimeValue} + */ + public final TimeValue timeout() { + return timeout; + } + + @Override + public TimeValue ackTimeout() { + return timeout; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeTimeValue(timeout); + } + } diff --git a/server/src/main/java/org/opensearch/action/support/master/AcknowledgedRequestBuilder.java b/server/src/main/java/org/opensearch/action/support/master/AcknowledgedRequestBuilder.java index e247734691eca..7a0824c6d30ca 100644 --- a/server/src/main/java/org/opensearch/action/support/master/AcknowledgedRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/support/master/AcknowledgedRequestBuilder.java @@ -32,9 +32,8 @@ package org.opensearch.action.support.master; import org.opensearch.action.ActionType; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; import org.opensearch.client.OpenSearchClient; +import org.opensearch.common.unit.TimeValue; /** * Base request builder for cluster-manager node operations that support acknowledgements @@ -44,10 +43,31 @@ public abstract class AcknowledgedRequestBuilder< Request extends AcknowledgedRequest, Response extends AcknowledgedResponse, - RequestBuilder extends AcknowledgedRequestBuilder> extends - org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder { + RequestBuilder extends AcknowledgedRequestBuilder> extends MasterNodeOperationRequestBuilder< + Request, + Response, + RequestBuilder> { - protected AcknowledgedRequestBuilder(OpenSearchClient client, ActionType action, Request request) { + protected AcknowledgedRequestBuilder(OpenSearchClient client, ActionType action, Request request) { super(client, action, request); } + + /** + * Sets the maximum wait for acknowledgement from other nodes + */ + @SuppressWarnings("unchecked") + public RequestBuilder setTimeout(TimeValue timeout) { + request.timeout(timeout); + return (RequestBuilder) this; + } + + /** + * Timeout to wait for the operation to be acknowledged by current cluster nodes. Defaults + * to {@code 10s}. 
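AcknowledgedRequest, reinstated under ...action.support.master, carries two independent timeouts: timeout(...) bounds the wait for cluster-wide acknowledgement (note that DEFAULT_ACK_TIMEOUT above is 30 seconds, not the 10s mentioned in the builder javadoc), while the inherited clusterManagerNodeTimeout(...) bounds the wait for a reachable cluster-manager node. A sketch setting both on a PutPipelineRequest, one of the requests switched to this class in the hunks above; the pipeline id and body are illustrative:

import org.opensearch.action.ingest.PutPipelineRequest;
import org.opensearch.common.bytes.BytesArray;
import org.opensearch.common.unit.TimeValue;
import org.opensearch.common.xcontent.XContentType;

public final class PipelineRequestTimeouts {

    public static PutPipelineRequest build() {
        PutPipelineRequest request = new PutPipelineRequest(
            "my-pipeline",
            new BytesArray("{\"processors\": []}"),
            XContentType.JSON
        );
        request.timeout(TimeValue.timeValueSeconds(30));                    // ack timeout (AcknowledgedRequest)
        request.clusterManagerNodeTimeout(TimeValue.timeValueSeconds(30));  // cluster-manager discovery timeout
        return request;
    }
}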
+ */ + @SuppressWarnings("unchecked") + public RequestBuilder setTimeout(String timeout) { + request.timeout(timeout); + return (RequestBuilder) this; + } } diff --git a/server/src/main/java/org/opensearch/action/support/master/AcknowledgedResponse.java b/server/src/main/java/org/opensearch/action/support/master/AcknowledgedResponse.java index 86ae1c313a8e6..415e52b68e368 100644 --- a/server/src/main/java/org/opensearch/action/support/master/AcknowledgedResponse.java +++ b/server/src/main/java/org/opensearch/action/support/master/AcknowledgedResponse.java @@ -31,26 +31,119 @@ package org.opensearch.action.support.master; +import org.opensearch.action.ActionResponse; +import org.opensearch.common.ParseField; import org.opensearch.common.io.stream.StreamInput; +import org.opensearch.common.io.stream.StreamOutput; +import org.opensearch.common.xcontent.ConstructingObjectParser; +import org.opensearch.common.xcontent.ObjectParser; +import org.opensearch.common.xcontent.ToXContentObject; +import org.opensearch.common.xcontent.XContentBuilder; +import org.opensearch.common.xcontent.XContentParser; import java.io.IOException; +import java.util.Objects; + +import static org.opensearch.common.xcontent.ConstructingObjectParser.constructorArg; /** * A response that indicates that a request has been acknowledged * * @opensearch.internal */ -public class AcknowledgedResponse extends org.opensearch.action.support.clustermanager.AcknowledgedResponse { +public class AcknowledgedResponse extends ActionResponse implements ToXContentObject { + + private static final ParseField ACKNOWLEDGED = new ParseField("acknowledged"); + + protected static void declareAcknowledgedField(ConstructingObjectParser objectParser) { + objectParser.declareField( + constructorArg(), + (parser, context) -> parser.booleanValue(), + ACKNOWLEDGED, + ObjectParser.ValueType.BOOLEAN + ); + } + + protected boolean acknowledged; public AcknowledgedResponse(StreamInput in) throws IOException { super(in); + acknowledged = in.readBoolean(); } public AcknowledgedResponse(StreamInput in, boolean readAcknowledged) throws IOException { - super(in, readAcknowledged); + super(in); + if (readAcknowledged) { + acknowledged = in.readBoolean(); + } } public AcknowledgedResponse(boolean acknowledged) { - super(acknowledged); + this.acknowledged = acknowledged; + } + + /** + * Returns whether the response is acknowledged or not + * @return true if the response is acknowledged, false otherwise + */ + public final boolean isAcknowledged() { + return acknowledged; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeBoolean(acknowledged); + } + + @Override + public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(ACKNOWLEDGED.getPreferredName(), isAcknowledged()); + addCustomFields(builder, params); + builder.endObject(); + return builder; + } + + protected void addCustomFields(XContentBuilder builder, Params params) throws IOException { + + } + + /** + * A generic parser that simply parses the acknowledged flag + */ + private static final ConstructingObjectParser ACKNOWLEDGED_FLAG_PARSER = new ConstructingObjectParser<>( + "acknowledged_flag", + true, + args -> (Boolean) args[0] + ); + + static { + ACKNOWLEDGED_FLAG_PARSER.declareField( + constructorArg(), + (parser, context) -> parser.booleanValue(), + ACKNOWLEDGED, + ObjectParser.ValueType.BOOLEAN + ); + } + + public static AcknowledgedResponse fromXContent(XContentParser parser) 
throws IOException { + return new AcknowledgedResponse(ACKNOWLEDGED_FLAG_PARSER.apply(parser, null)); + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + AcknowledgedResponse that = (AcknowledgedResponse) o; + return isAcknowledged() == that.isAcknowledged(); + } + + @Override + public int hashCode() { + return Objects.hash(isAcknowledged()); } } diff --git a/server/src/main/java/org/opensearch/action/support/master/MasterNodeOperationRequestBuilder.java b/server/src/main/java/org/opensearch/action/support/master/MasterNodeOperationRequestBuilder.java index 9c96c45e11847..f6d475fc06171 100644 --- a/server/src/main/java/org/opensearch/action/support/master/MasterNodeOperationRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/support/master/MasterNodeOperationRequestBuilder.java @@ -35,7 +35,6 @@ import org.opensearch.action.ActionType; import org.opensearch.action.ActionResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeOperationRequestBuilder; -import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import org.opensearch.client.OpenSearchClient; /** @@ -46,7 +45,7 @@ */ @Deprecated public abstract class MasterNodeOperationRequestBuilder< - Request extends ClusterManagerNodeRequest, + Request extends MasterNodeRequest, Response extends ActionResponse, RequestBuilder extends MasterNodeOperationRequestBuilder> extends ClusterManagerNodeOperationRequestBuilder { diff --git a/server/src/main/java/org/opensearch/action/support/master/MasterNodeReadOperationRequestBuilder.java b/server/src/main/java/org/opensearch/action/support/master/MasterNodeReadOperationRequestBuilder.java index 2ac6b2ba05b4c..ae134bdeca3c2 100644 --- a/server/src/main/java/org/opensearch/action/support/master/MasterNodeReadOperationRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/support/master/MasterNodeReadOperationRequestBuilder.java @@ -34,9 +34,7 @@ import org.opensearch.action.ActionType; import org.opensearch.action.ActionResponse; -import org.opensearch.action.support.clustermanager.ClusterManagerNodeOperationRequestBuilder; import org.opensearch.action.support.clustermanager.ClusterManagerNodeReadOperationRequestBuilder; -import org.opensearch.action.support.clustermanager.ClusterManagerNodeReadRequest; import org.opensearch.client.OpenSearchClient; /** @@ -47,10 +45,10 @@ */ @Deprecated public abstract class MasterNodeReadOperationRequestBuilder< - Request extends ClusterManagerNodeReadRequest, + Request extends MasterNodeReadRequest, Response extends ActionResponse, - RequestBuilder extends ClusterManagerNodeReadOperationRequestBuilder> extends - ClusterManagerNodeOperationRequestBuilder { + RequestBuilder extends MasterNodeReadOperationRequestBuilder> extends + ClusterManagerNodeReadOperationRequestBuilder { protected MasterNodeReadOperationRequestBuilder(OpenSearchClient client, ActionType action, Request request) { super(client, action, request); diff --git a/server/src/main/java/org/opensearch/action/support/master/MasterNodeRequest.java b/server/src/main/java/org/opensearch/action/support/master/MasterNodeRequest.java index ca72e29913326..fb86742186c9c 100644 --- a/server/src/main/java/org/opensearch/action/support/master/MasterNodeRequest.java +++ b/server/src/main/java/org/opensearch/action/support/master/MasterNodeRequest.java @@ -46,6 +46,8 @@ @Deprecated public abstract class MasterNodeRequest> extends 
ClusterManagerNodeRequest { + protected MasterNodeRequest() {} + protected MasterNodeRequest(StreamInput in) throws IOException { super(in); } diff --git a/server/src/main/java/org/opensearch/action/support/master/ShardsAcknowledgedResponse.java b/server/src/main/java/org/opensearch/action/support/master/ShardsAcknowledgedResponse.java index ac22c0d4eb542..d100874296844 100644 --- a/server/src/main/java/org/opensearch/action/support/master/ShardsAcknowledgedResponse.java +++ b/server/src/main/java/org/opensearch/action/support/master/ShardsAcknowledgedResponse.java @@ -32,19 +32,86 @@ package org.opensearch.action.support.master; +import org.opensearch.common.ParseField; import org.opensearch.common.io.stream.StreamInput; +import org.opensearch.common.io.stream.StreamOutput; +import org.opensearch.common.xcontent.ConstructingObjectParser; +import org.opensearch.common.xcontent.ObjectParser; +import org.opensearch.common.xcontent.XContentBuilder; import java.io.IOException; +import java.util.Objects; + +import static org.opensearch.common.xcontent.ConstructingObjectParser.constructorArg; /** * Transport response for shard acknowledgements * * @opensearch.internal */ -public abstract class ShardsAcknowledgedResponse extends org.opensearch.action.support.clustermanager.ShardsAcknowledgedResponse { +public abstract class ShardsAcknowledgedResponse extends AcknowledgedResponse { + + protected static final ParseField SHARDS_ACKNOWLEDGED = new ParseField("shards_acknowledged"); + + protected static void declareAcknowledgedAndShardsAcknowledgedFields( + ConstructingObjectParser objectParser + ) { + declareAcknowledgedField(objectParser); + objectParser.declareField( + constructorArg(), + (parser, context) -> parser.booleanValue(), + SHARDS_ACKNOWLEDGED, + ObjectParser.ValueType.BOOLEAN + ); + } + + private final boolean shardsAcknowledged; protected ShardsAcknowledgedResponse(StreamInput in, boolean readShardsAcknowledged) throws IOException { - super(in, readShardsAcknowledged); + super(in); + if (readShardsAcknowledged) { + this.shardsAcknowledged = in.readBoolean(); + } else { + this.shardsAcknowledged = false; + } + } + + protected ShardsAcknowledgedResponse(boolean acknowledged, boolean shardsAcknowledged) { + super(acknowledged); + assert acknowledged || shardsAcknowledged == false; // if it's not acknowledged, then shards acked should be false too + this.shardsAcknowledged = shardsAcknowledged; + } + + /** + * Returns true if the requisite number of shards were started before + * returning from the index creation operation. If {@link #isAcknowledged()} + * is false, then this also returns false. 
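+     * A {@code false} value therefore means that the requested number of shard copies did not
+     * become active before the wait timed out. A purely illustrative usage via the index-creation
+     * API (the index name and {@code client} are placeholders):
+     * <pre>{@code
+     * CreateIndexResponse response = client.admin().indices().prepareCreate("my-index").get();
+     * if (response.isAcknowledged() && response.isShardsAcknowledged()) {
+     *     // the index exists and the requested shard copies started in time
+     * }
+     * }</pre>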
+ */ + public boolean isShardsAcknowledged() { + return shardsAcknowledged; + } + + protected void writeShardsAcknowledged(StreamOutput out) throws IOException { + out.writeBoolean(shardsAcknowledged); + } + + @Override + protected void addCustomFields(XContentBuilder builder, Params params) throws IOException { + builder.field(SHARDS_ACKNOWLEDGED.getPreferredName(), isShardsAcknowledged()); + } + + @Override + public boolean equals(Object o) { + if (super.equals(o)) { + ShardsAcknowledgedResponse that = (ShardsAcknowledgedResponse) o; + return isShardsAcknowledged() == that.isShardsAcknowledged(); + } + return false; + } + + @Override + public int hashCode() { + return Objects.hash(super.hashCode(), isShardsAcknowledged()); } } diff --git a/server/src/main/java/org/opensearch/action/support/master/TransportMasterNodeAction.java b/server/src/main/java/org/opensearch/action/support/master/TransportMasterNodeAction.java index 5805baad0946b..c26fa5c343b5c 100644 --- a/server/src/main/java/org/opensearch/action/support/master/TransportMasterNodeAction.java +++ b/server/src/main/java/org/opensearch/action/support/master/TransportMasterNodeAction.java @@ -32,10 +32,11 @@ package org.opensearch.action.support.master; +import org.opensearch.action.ActionListener; import org.opensearch.action.ActionResponse; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction; +import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.metadata.IndexNameExpressionResolver; import org.opensearch.cluster.service.ClusterService; import org.opensearch.common.io.stream.Writeable; @@ -46,7 +47,7 @@ * A base class for operations that needs to be performed on the cluster-manager node. * * @opensearch.internal - * @deprecated As of 2.1, because supporting inclusive language, replaced by {@link ClusterManagerNodeRequest} + * @deprecated As of 2.1, because supporting inclusive language, replaced by {@link TransportClusterManagerNodeAction} */ @Deprecated public abstract class TransportMasterNodeAction, Response extends ActionResponse> extends @@ -86,4 +87,7 @@ protected TransportMasterNodeAction( ); } + @Deprecated + protected abstract void masterOperation(Request request, ClusterState state, ActionListener listener) throws Exception; + } diff --git a/server/src/main/java/org/opensearch/action/support/master/TransportMasterNodeReadAction.java b/server/src/main/java/org/opensearch/action/support/master/TransportMasterNodeReadAction.java index 9b3d34ad3d931..0b3f309acc189 100644 --- a/server/src/main/java/org/opensearch/action/support/master/TransportMasterNodeReadAction.java +++ b/server/src/main/java/org/opensearch/action/support/master/TransportMasterNodeReadAction.java @@ -34,7 +34,6 @@ import org.opensearch.action.ActionResponse; import org.opensearch.action.support.ActionFilters; -import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeReadAction; import org.opensearch.cluster.metadata.IndexNameExpressionResolver; import org.opensearch.cluster.service.ClusterService; @@ -47,7 +46,7 @@ * Can also be executed on the local node if needed. 
* * @opensearch.internal - * @deprecated As of 2.1, because supporting inclusive language, replaced by {@link ClusterManagerNodeRequest} + * @deprecated As of 2.1, because supporting inclusive language, replaced by {@link TransportClusterManagerNodeReadAction} */ @Deprecated public abstract class TransportMasterNodeReadAction, Response extends ActionResponse> extends diff --git a/server/src/main/java/org/opensearch/action/support/master/info/ClusterInfoRequestBuilder.java b/server/src/main/java/org/opensearch/action/support/master/info/ClusterInfoRequestBuilder.java index c13dbe296dff2..7052e13625f97 100644 --- a/server/src/main/java/org/opensearch/action/support/master/info/ClusterInfoRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/support/master/info/ClusterInfoRequestBuilder.java @@ -33,7 +33,6 @@ import org.opensearch.action.ActionType; import org.opensearch.action.ActionResponse; -import org.opensearch.action.support.clustermanager.info.ClusterInfoRequest; import org.opensearch.client.OpenSearchClient; /** diff --git a/server/src/main/java/org/opensearch/action/support/master/info/TransportClusterInfoAction.java b/server/src/main/java/org/opensearch/action/support/master/info/TransportClusterInfoAction.java index 26d31b874f2c0..974fad445be9e 100644 --- a/server/src/main/java/org/opensearch/action/support/master/info/TransportClusterInfoAction.java +++ b/server/src/main/java/org/opensearch/action/support/master/info/TransportClusterInfoAction.java @@ -31,8 +31,10 @@ package org.opensearch.action.support.master.info; +import org.opensearch.action.ActionListener; import org.opensearch.action.ActionResponse; import org.opensearch.action.support.ActionFilters; +import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.metadata.IndexNameExpressionResolver; import org.opensearch.cluster.service.ClusterService; import org.opensearch.common.io.stream.Writeable; @@ -59,4 +61,11 @@ public TransportClusterInfoAction( super(actionName, transportService, clusterService, threadPool, actionFilters, request, indexNameExpressionResolver); } + @Deprecated + protected abstract void doMasterOperation( + Request request, + String[] concreteIndices, + ClusterState state, + ActionListener listener + ); } diff --git a/server/src/main/java/org/opensearch/action/update/TransportUpdateAction.java b/server/src/main/java/org/opensearch/action/update/TransportUpdateAction.java index c0c28f39b1e03..e86cfa70f1169 100644 --- a/server/src/main/java/org/opensearch/action/update/TransportUpdateAction.java +++ b/server/src/main/java/org/opensearch/action/update/TransportUpdateAction.java @@ -164,7 +164,7 @@ protected void doExecute(Task task, final UpdateRequest request, final ActionLis client.admin() .indices() .create( - new CreateIndexRequest().index(request.index()).cause("auto(update api)").masterNodeTimeout(request.timeout()), + new CreateIndexRequest().index(request.index()).cause("auto(update api)").clusterManagerNodeTimeout(request.timeout()), new ActionListener() { @Override public void onResponse(CreateIndexResponse result) { diff --git a/server/src/main/java/org/opensearch/client/ClusterAdminClient.java b/server/src/main/java/org/opensearch/client/ClusterAdminClient.java index 8907de6b0bac7..f4eaa979ff18c 100644 --- a/server/src/main/java/org/opensearch/client/ClusterAdminClient.java +++ b/server/src/main/java/org/opensearch/client/ClusterAdminClient.java @@ -130,7 +130,7 @@ import org.opensearch.action.ingest.SimulatePipelineRequest; import 
org.opensearch.action.ingest.SimulatePipelineRequestBuilder; import org.opensearch.action.ingest.SimulatePipelineResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.common.bytes.BytesReference; import org.opensearch.common.xcontent.XContentType; import org.opensearch.tasks.TaskId; diff --git a/server/src/main/java/org/opensearch/client/IndicesAdminClient.java b/server/src/main/java/org/opensearch/client/IndicesAdminClient.java index ede22df071821..c9cd0d0900b5a 100644 --- a/server/src/main/java/org/opensearch/client/IndicesAdminClient.java +++ b/server/src/main/java/org/opensearch/client/IndicesAdminClient.java @@ -124,7 +124,7 @@ import org.opensearch.action.admin.indices.validate.query.ValidateQueryRequest; import org.opensearch.action.admin.indices.validate.query.ValidateQueryRequestBuilder; import org.opensearch.action.admin.indices.validate.query.ValidateQueryResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.metadata.IndexMetadata.APIBlock; import org.opensearch.common.Nullable; diff --git a/server/src/main/java/org/opensearch/client/support/AbstractClient.java b/server/src/main/java/org/opensearch/client/support/AbstractClient.java index 8465d410b8ea2..6cc0827310bd1 100644 --- a/server/src/main/java/org/opensearch/client/support/AbstractClient.java +++ b/server/src/main/java/org/opensearch/client/support/AbstractClient.java @@ -339,7 +339,7 @@ import org.opensearch.action.search.SearchScrollRequest; import org.opensearch.action.search.SearchScrollRequestBuilder; import org.opensearch.action.support.PlainActionFuture; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.termvectors.MultiTermVectorsAction; import org.opensearch.action.termvectors.MultiTermVectorsRequest; import org.opensearch.action.termvectors.MultiTermVectorsRequestBuilder; diff --git a/server/src/main/java/org/opensearch/cluster/LocalNodeMasterListener.java b/server/src/main/java/org/opensearch/cluster/LocalNodeMasterListener.java index 612141807ab14..bec2674f5d549 100644 --- a/server/src/main/java/org/opensearch/cluster/LocalNodeMasterListener.java +++ b/server/src/main/java/org/opensearch/cluster/LocalNodeMasterListener.java @@ -42,21 +42,21 @@ public interface LocalNodeMasterListener extends ClusterStateListener { /** * Called when local node is elected to be the cluster-manager */ - void onClusterManager(); + void onMaster(); /** * Called when the local node used to be the cluster-manager, a new cluster-manager was elected and it's no longer the local node. 
*/ - void offClusterManager(); + void offMaster(); @Override default void clusterChanged(ClusterChangedEvent event) { final boolean wasClusterManager = event.previousState().nodes().isLocalNodeElectedMaster(); final boolean isClusterManager = event.localNodeMaster(); if (wasClusterManager == false && isClusterManager) { - onClusterManager(); + onMaster(); } else if (wasClusterManager && isClusterManager == false) { - offClusterManager(); + offMaster(); } } } diff --git a/server/src/main/java/org/opensearch/cluster/action/index/MappingUpdatedAction.java b/server/src/main/java/org/opensearch/cluster/action/index/MappingUpdatedAction.java index a183e195707af..2c4eff5f6d00a 100644 --- a/server/src/main/java/org/opensearch/cluster/action/index/MappingUpdatedAction.java +++ b/server/src/main/java/org/opensearch/cluster/action/index/MappingUpdatedAction.java @@ -108,7 +108,7 @@ public void setClient(Client client) { /** * Update mappings on the cluster-manager node, waiting for the change to be committed, * but not for the mapping update to be applied on all nodes. The timeout specified by - * {@code timeout} is the cluster-manager node timeout ({@link ClusterManagerNodeRequest#masterNodeTimeout()}), + * {@code timeout} is the cluster-manager node timeout ({@link ClusterManagerNodeRequest#clusterManagerNodeTimeout()}), * potentially waiting for a cluster-manager node to be available. */ public void updateMappingOnMaster(Index index, Mapping mappingUpdate, ActionListener listener) { @@ -142,7 +142,7 @@ protected void sendUpdateMapping(Index index, Mapping mappingUpdate, ActionListe PutMappingRequest putMappingRequest = new PutMappingRequest(); putMappingRequest.setConcreteIndex(index); putMappingRequest.source(mappingUpdate.toString(), XContentType.JSON); - putMappingRequest.masterNodeTimeout(dynamicMappingUpdateTimeout); + putMappingRequest.clusterManagerNodeTimeout(dynamicMappingUpdateTimeout); putMappingRequest.timeout(TimeValue.ZERO); if (clusterService.state().nodes().getMinNodeVersion().onOrAfter(LegacyESVersion.V_7_9_0)) { client.execute( diff --git a/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateDataStreamService.java b/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateDataStreamService.java index 97f198e087a93..412d4dba628cb 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateDataStreamService.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateDataStreamService.java @@ -40,7 +40,7 @@ import org.opensearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest; import org.opensearch.action.support.ActiveShardCount; import org.opensearch.action.support.ActiveShardsObserver; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.AckedClusterStateUpdateTask; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ack.ClusterStateUpdateRequest; diff --git a/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java b/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java index 2ea0b6b5de2e9..6dded44fe70bb 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java @@ -41,7 +41,7 @@ import org.opensearch.Version; import org.opensearch.action.ActionListener; import 
org.opensearch.action.admin.indices.alias.Alias; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ClusterStateUpdateTask; @@ -1526,7 +1526,7 @@ public static class PutRequest { String mappings = null; List aliases = new ArrayList<>(); - TimeValue masterTimeout = ClusterManagerNodeRequest.DEFAULT_MASTER_NODE_TIMEOUT; + TimeValue masterTimeout = ClusterManagerNodeRequest.DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT; public PutRequest(String cause, String name) { this.cause = cause; @@ -1598,7 +1598,7 @@ public boolean acknowledged() { */ public static class RemoveRequest { final String name; - TimeValue masterTimeout = ClusterManagerNodeRequest.DEFAULT_MASTER_NODE_TIMEOUT; + TimeValue masterTimeout = ClusterManagerNodeRequest.DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT; public RemoveRequest(String name) { this.name = name; diff --git a/server/src/main/java/org/opensearch/cluster/metadata/TemplateUpgradeService.java b/server/src/main/java/org/opensearch/cluster/metadata/TemplateUpgradeService.java index 12faf25731657..01cadf3910267 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/TemplateUpgradeService.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/TemplateUpgradeService.java @@ -40,7 +40,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.admin.indices.template.delete.DeleteIndexTemplateRequest; import org.opensearch.action.admin.indices.template.put.PutIndexTemplateRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.cluster.ClusterChangedEvent; import org.opensearch.cluster.ClusterState; @@ -165,7 +165,7 @@ void upgradeTemplates(Map changes, Set deletions for (Map.Entry change : changes.entrySet()) { PutIndexTemplateRequest request = new PutIndexTemplateRequest(change.getKey()).source(change.getValue(), XContentType.JSON); - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); client.admin().indices().putTemplate(request, new ActionListener() { @Override public void onResponse(AcknowledgedResponse response) { @@ -187,7 +187,7 @@ public void onFailure(Exception e) { for (String template : deletions) { DeleteIndexTemplateRequest request = new DeleteIndexTemplateRequest(template); - request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); + request.clusterManagerNodeTimeout(TimeValue.timeValueMinutes(1)); client.admin().indices().deleteTemplate(request, new ActionListener() { @Override public void onResponse(AcknowledgedResponse response) { diff --git a/server/src/main/java/org/opensearch/common/settings/ConsistentSettingsService.java b/server/src/main/java/org/opensearch/common/settings/ConsistentSettingsService.java index 8be242165afd1..3be1c4b080b5f 100644 --- a/server/src/main/java/org/opensearch/common/settings/ConsistentSettingsService.java +++ b/server/src/main/java/org/opensearch/common/settings/ConsistentSettingsService.java @@ -258,7 +258,7 @@ static final class HashesPublisher implements LocalNodeMasterListener { } @Override - public void onClusterManager() { + public void onMaster() { clusterService.submitStateUpdateTask("publish-secure-settings-hashes", new 
ClusterStateUpdateTask(Priority.URGENT) { @Override public ClusterState execute(ClusterState currentState) { @@ -284,7 +284,7 @@ public void onFailure(String source, Exception e) { } @Override - public void offClusterManager() { + public void offMaster() { logger.trace("I am no longer master, nothing to do"); } } diff --git a/server/src/main/java/org/opensearch/index/engine/Engine.java b/server/src/main/java/org/opensearch/index/engine/Engine.java index 4829148322b31..66fc680beb62c 100644 --- a/server/src/main/java/org/opensearch/index/engine/Engine.java +++ b/server/src/main/java/org/opensearch/index/engine/Engine.java @@ -117,7 +117,7 @@ * * @opensearch.internal */ -public abstract class Engine implements Closeable { +public abstract class Engine implements LifecycleAware, Closeable { public static final String SYNC_COMMIT_ID = "sync_id"; // TODO: remove sync_id in 3.0 public static final String HISTORY_UUID_KEY = "history_uuid"; @@ -173,6 +173,7 @@ public final EngineConfig config() { * Return the latest active SegmentInfos from the engine. * @return {@link SegmentInfos} */ + @Nullable protected abstract SegmentInfos getLatestSegmentInfos(); /** @@ -847,7 +848,7 @@ protected final void ensureOpen(Exception suppressed) { } } - protected final void ensureOpen() { + public final void ensureOpen() { ensureOpen(null); } diff --git a/server/src/main/java/org/opensearch/index/engine/InternalEngine.java b/server/src/main/java/org/opensearch/index/engine/InternalEngine.java index b63a39ebb1222..d2d688a90353e 100644 --- a/server/src/main/java/org/opensearch/index/engine/InternalEngine.java +++ b/server/src/main/java/org/opensearch/index/engine/InternalEngine.java @@ -2289,7 +2289,7 @@ protected SegmentInfos getLastCommittedSegmentInfos() { } @Override - public SegmentInfos getLatestSegmentInfos() { + protected SegmentInfos getLatestSegmentInfos() { OpenSearchDirectoryReader reader = null; try { reader = internalReaderManager.acquire(); diff --git a/server/src/main/java/org/opensearch/index/engine/LifecycleAware.java b/server/src/main/java/org/opensearch/index/engine/LifecycleAware.java new file mode 100644 index 0000000000000..06cfb8e7e73a5 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/engine/LifecycleAware.java @@ -0,0 +1,20 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.engine; + +/** + * Interface that is aware of a component lifecycle. 
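+ * Components such as the {@link org.opensearch.index.engine.Engine} implement this so that
+ * collaborators (for example the translog manager) can verify that the component is still open
+ * before operating on it.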
+ */ +public interface LifecycleAware { + + /** + * Checks to ensure if the component is an open state + */ + void ensureOpen(); +} diff --git a/server/src/main/java/org/opensearch/index/shard/IndexShard.java b/server/src/main/java/org/opensearch/index/shard/IndexShard.java index bad412003df26..d25847dde235c 100644 --- a/server/src/main/java/org/opensearch/index/shard/IndexShard.java +++ b/server/src/main/java/org/opensearch/index/shard/IndexShard.java @@ -109,6 +109,7 @@ import org.opensearch.index.engine.EngineConfigFactory; import org.opensearch.index.engine.EngineException; import org.opensearch.index.engine.EngineFactory; +import org.opensearch.index.engine.NRTReplicationEngine; import org.opensearch.index.engine.ReadOnlyEngine; import org.opensearch.index.engine.RefreshFailedEngineException; import org.opensearch.index.engine.SafeCommitInfo; @@ -160,9 +161,9 @@ import org.opensearch.indices.recovery.RecoveryListener; import org.opensearch.indices.recovery.RecoveryState; import org.opensearch.indices.recovery.RecoveryTarget; -import org.opensearch.indices.replication.checkpoint.PublishCheckpointRequest; import org.opensearch.indices.replication.checkpoint.SegmentReplicationCheckpointPublisher; import org.opensearch.indices.replication.checkpoint.ReplicationCheckpoint; +import org.opensearch.indices.replication.checkpoint.SegmentReplicationCheckpointPublisher; import org.opensearch.repositories.RepositoriesService; import org.opensearch.repositories.Repository; import org.opensearch.rest.RestStatus; @@ -1363,6 +1364,20 @@ public GatedCloseable acquireLastIndexCommit(boolean flushFirst) th } } + private Optional getReplicationEngine() { + if (getEngine() instanceof NRTReplicationEngine) { + return Optional.of((NRTReplicationEngine) getEngine()); + } else { + return Optional.empty(); + } + } + + public void finalizeReplication(SegmentInfos infos, long seqNo) throws IOException { + if (getReplicationEngine().isPresent()) { + getReplicationEngine().get().updateSegments(infos, seqNo); + } + } + /** * Snapshots the most recent safe index commit from the currently running engine. * All index files referenced by this index commit won't be freed until the commit/snapshot is closed. @@ -1381,15 +1396,60 @@ public GatedCloseable acquireSafeIndexCommit() throws EngineExcepti * Returns the lastest Replication Checkpoint that shard received */ public ReplicationCheckpoint getLatestReplicationCheckpoint() { - return new ReplicationCheckpoint(shardId, 0, 0, 0, 0); + try (final GatedCloseable snapshot = getSegmentInfosSnapshot()) { + return Optional.ofNullable(snapshot.get()) + .map( + segmentInfos -> new ReplicationCheckpoint( + this.shardId, + getOperationPrimaryTerm(), + segmentInfos.getGeneration(), + getProcessedLocalCheckpoint(), + segmentInfos.getVersion() + ) + ) + .orElse( + new ReplicationCheckpoint( + shardId, + getOperationPrimaryTerm(), + SequenceNumbers.NO_OPS_PERFORMED, + getProcessedLocalCheckpoint(), + SequenceNumbers.NO_OPS_PERFORMED + ) + ); + } catch (IOException ex) { + throw new OpenSearchException("Error Closing SegmentInfos Snapshot", ex); + } } /** - * Invoked when a new checkpoint is received from a primary shard. Starts the copy process. 
- */ - public synchronized void onNewCheckpoint(final PublishCheckpointRequest request) { - assert shardRouting.primary() == false; - // TODO + * Checks if checkpoint should be processed + * + * @param requestCheckpoint received checkpoint that is checked for processing + * @return true if checkpoint should be processed + */ + public final boolean shouldProcessCheckpoint(ReplicationCheckpoint requestCheckpoint) { + if (state().equals(IndexShardState.STARTED) == false) { + logger.trace(() -> new ParameterizedMessage("Ignoring new replication checkpoint - shard is not started {}", state())); + return false; + } + ReplicationCheckpoint localCheckpoint = getLatestReplicationCheckpoint(); + if (localCheckpoint.isAheadOf(requestCheckpoint)) { + logger.trace( + () -> new ParameterizedMessage( + "Ignoring new replication checkpoint - Shard is already on checkpoint {} that is ahead of {}", + localCheckpoint, + requestCheckpoint + ) + ); + return false; + } + if (localCheckpoint.equals(requestCheckpoint)) { + logger.trace( + () -> new ParameterizedMessage("Ignoring new replication checkpoint - Shard is already on checkpoint {}", requestCheckpoint) + ); + return false; + } + return true; } /** diff --git a/server/src/main/java/org/opensearch/index/store/Store.java b/server/src/main/java/org/opensearch/index/store/Store.java index f818456c3a2c8..2309004c0777d 100644 --- a/server/src/main/java/org/opensearch/index/store/Store.java +++ b/server/src/main/java/org/opensearch/index/store/Store.java @@ -64,6 +64,7 @@ import org.apache.lucene.util.BytesRefBuilder; import org.apache.lucene.util.Version; import org.opensearch.ExceptionsHelper; +import org.opensearch.common.Nullable; import org.opensearch.common.UUIDs; import org.opensearch.common.bytes.BytesReference; import org.opensearch.common.io.Streams; @@ -706,6 +707,51 @@ public void cleanupAndVerify(String reason, MetadataSnapshot sourceMetadata) thr } } + /** + * This method deletes every file in this store that is not contained in either the remote or local metadata snapshots. + * This method is used for segment replication when the in memory SegmentInfos can be ahead of the on disk segment file. + * In this case files from both snapshots must be preserved. Verification has been done that all files are present on disk. + * @param reason the reason for this cleanup operation logged for each deleted file + * @param localSnapshot The local snapshot from in memory SegmentInfos. + * @throws IllegalStateException if the latest snapshot in this store differs from the given one after the cleanup. + */ + public void cleanupAndPreserveLatestCommitPoint(String reason, MetadataSnapshot localSnapshot) throws IOException { + // fetch a snapshot from the latest on disk Segments_N file. This can be behind + // the passed in local in memory snapshot, so we want to ensure files it references are not removed. 
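+        // Hold the store metadata write lock and Lucene's write lock while deleting, so that no
+        // concurrent commit can change the set of files between reading the on-disk snapshot and
+        // performing the deletions below.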
+ metadataLock.writeLock().lock(); + try (Lock writeLock = directory.obtainLock(IndexWriter.WRITE_LOCK_NAME)) { + cleanupFiles(reason, localSnapshot, getMetadata(readLastCommittedSegmentsInfo())); + } finally { + metadataLock.writeLock().unlock(); + } + } + + private void cleanupFiles(String reason, MetadataSnapshot localSnapshot, @Nullable MetadataSnapshot additionalSnapshot) + throws IOException { + assert metadataLock.isWriteLockedByCurrentThread(); + for (String existingFile : directory.listAll()) { + if (Store.isAutogenerated(existingFile) + || localSnapshot.contains(existingFile) + || (additionalSnapshot != null && additionalSnapshot.contains(existingFile))) { + // don't delete snapshot file, or the checksums file (note, this is extra protection since the Store won't delete + // checksum) + continue; + } + try { + directory.deleteFile(reason, existingFile); + } catch (IOException ex) { + if (existingFile.startsWith(IndexFileNames.SEGMENTS) || existingFile.startsWith(CORRUPTED_MARKER_NAME_PREFIX)) { + // TODO do we need to also fail this if we can't delete the pending commit file? + // if one of those files can't be deleted we better fail the cleanup otherwise we might leave an old commit + // point around? + throw new IllegalStateException("Can't delete " + existingFile + " - cleanup failed", ex); + } + logger.debug(() -> new ParameterizedMessage("failed to delete file [{}]", existingFile), ex); + // ignore, we don't really care, will get deleted later on + } + } + } + // pkg private for testing final void verifyAfterCleanup(MetadataSnapshot sourceMetadata, MetadataSnapshot targetMetadata) { final RecoveryDiff recoveryDiff = targetMetadata.recoveryDiff(sourceMetadata); diff --git a/server/src/main/java/org/opensearch/index/translog/InternalTranslogManager.java b/server/src/main/java/org/opensearch/index/translog/InternalTranslogManager.java new file mode 100644 index 0000000000000..22f72cc3d9acd --- /dev/null +++ b/server/src/main/java/org/opensearch/index/translog/InternalTranslogManager.java @@ -0,0 +1,322 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.translog; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.lucene.store.AlreadyClosedException; +import org.opensearch.common.util.concurrent.ReleasableLock; +import org.opensearch.index.engine.LifecycleAware; +import org.opensearch.index.seqno.LocalCheckpointTracker; +import org.opensearch.index.shard.ShardId; +import org.opensearch.index.translog.listener.TranslogEventListener; + +import java.io.IOException; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.LongConsumer; +import java.util.function.LongSupplier; +import java.util.function.Supplier; +import java.util.stream.Stream; + +/** + * The {@link TranslogManager} implementation capable of orchestrating all read/write {@link Translog} operations while + * interfacing with the {@link org.opensearch.index.engine.InternalEngine} + * + * @opensearch.internal + */ +public class InternalTranslogManager implements TranslogManager { + + private final ReleasableLock readLock; + private final LifecycleAware engineLifeCycleAware; + private final ShardId shardId; + private final Translog translog; + private final AtomicBoolean pendingTranslogRecovery = new AtomicBoolean(false); + private final TranslogEventListener translogEventListener; + private static final Logger logger = LogManager.getLogger(InternalTranslogManager.class); + + public InternalTranslogManager( + TranslogConfig translogConfig, + LongSupplier primaryTermSupplier, + LongSupplier globalCheckpointSupplier, + TranslogDeletionPolicy translogDeletionPolicy, + ShardId shardId, + ReleasableLock readLock, + Supplier localCheckpointTrackerSupplier, + String translogUUID, + TranslogEventListener translogEventListener, + LifecycleAware engineLifeCycleAware + ) throws IOException { + this.shardId = shardId; + this.readLock = readLock; + this.engineLifeCycleAware = engineLifeCycleAware; + this.translogEventListener = translogEventListener; + Translog translog = openTranslog(translogConfig, primaryTermSupplier, translogDeletionPolicy, globalCheckpointSupplier, seqNo -> { + final LocalCheckpointTracker tracker = localCheckpointTrackerSupplier.get(); + assert tracker != null || getTranslog(true).isOpen() == false; + if (tracker != null) { + tracker.markSeqNoAsPersisted(seqNo); + } + }, translogUUID); + assert translog.getGeneration() != null; + this.translog = translog; + assert pendingTranslogRecovery.get() == false : "translog recovery can't be pending before we set it"; + // don't allow commits until we are done with recovering + pendingTranslogRecovery.set(true); + } + + /** + * Rolls the translog generation and cleans unneeded. + */ + @Override + public void rollTranslogGeneration() throws TranslogException { + try (ReleasableLock ignored = readLock.acquire()) { + engineLifeCycleAware.ensureOpen(); + translog.rollGeneration(); + translog.trimUnreferencedReaders(); + } catch (AlreadyClosedException e) { + translogEventListener.onTragicFailure(e); + throw e; + } catch (Exception e) { + try { + translogEventListener.onFailure("translog trimming failed", e); + } catch (Exception inner) { + e.addSuppressed(inner); + } + throw new TranslogException(shardId, "failed to roll translog", e); + } + } + + /** + * Performs recovery from the transaction log up to {@code recoverUpToSeqNo} (inclusive). + * This operation will close the engine if the recovery fails. 
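+     * Recovery replays the operations above {@code localCheckpoint} through the supplied runner
+     * and, on success, clears the pending-recovery flag so that flushes are allowed again.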
+ * @param translogRecoveryRunner the translog recovery runner + * @param recoverUpToSeqNo the upper bound, inclusive, of sequence number to be recovered + * @return the total number of operations recovered + */ + @Override + public int recoverFromTranslog(TranslogRecoveryRunner translogRecoveryRunner, long localCheckpoint, long recoverUpToSeqNo) + throws IOException { + int opsRecovered = 0; + translogEventListener.onBeginTranslogRecovery(); + try (ReleasableLock ignored = readLock.acquire()) { + engineLifeCycleAware.ensureOpen(); + if (pendingTranslogRecovery.get() == false) { + throw new IllegalStateException("Engine has already been recovered"); + } + try { + opsRecovered = recoverFromTranslogInternal(translogRecoveryRunner, localCheckpoint, recoverUpToSeqNo); + } catch (Exception e) { + try { + pendingTranslogRecovery.set(true); // just play safe and never allow commits on this see #ensureCanFlush + translogEventListener.onFailure("failed to recover from translog", e); + } catch (Exception inner) { + e.addSuppressed(inner); + } + throw e; + } + } + return opsRecovered; + } + + private int recoverFromTranslogInternal(TranslogRecoveryRunner translogRecoveryRunner, long localCheckpoint, long recoverUpToSeqNo) { + final int opsRecovered; + if (localCheckpoint < recoverUpToSeqNo) { + try (Translog.Snapshot snapshot = translog.newSnapshot(localCheckpoint + 1, recoverUpToSeqNo)) { + opsRecovered = translogRecoveryRunner.run(snapshot); + } catch (Exception e) { + throw new TranslogException(shardId, "failed to recover from translog", e); + } + } else { + opsRecovered = 0; + } + // flush if we recovered something or if we have references to older translogs + // note: if opsRecovered == 0 and we have older translogs it means they are corrupted or 0 length. + assert pendingTranslogRecovery.get() : "translogRecovery is not pending but should be"; + pendingTranslogRecovery.set(false); // we are good - now we can commit + logger.trace( + () -> new ParameterizedMessage( + "flushing post recovery from translog: ops recovered [{}], current translog generation [{}]", + opsRecovered, + translog.currentFileGeneration() + ) + ); + translogEventListener.onAfterTranslogRecovery(); + return opsRecovered; + } + + /** + * Checks if the underlying storage sync is required. + */ + @Override + public boolean isTranslogSyncNeeded() { + return getTranslog(true).syncNeeded(); + } + + /** + * Ensures that all locations in the given stream have been written to the underlying storage. + */ + @Override + public boolean ensureTranslogSynced(Stream locations) throws IOException { + final boolean synced = translog.ensureSynced(locations); + if (synced) { + translogEventListener.onAfterTranslogSync(); + } + return synced; + } + + /** + * Syncs the translog and invokes the listener + * @throws IOException the exception on sync failure + */ + @Override + public void syncTranslog() throws IOException { + translog.sync(); + translogEventListener.onAfterTranslogSync(); + } + + @Override + public TranslogStats getTranslogStats() { + return getTranslog(true).stats(); + } + + /** + * Returns the last location that the translog of this engine has written into. + */ + @Override + public Translog.Location getTranslogLastWriteLocation() { + return getTranslog(true).getLastWriteLocation(); + } + + /** + * checks and removes translog files that no longer need to be retained. 
+     * See {@link org.opensearch.index.translog.TranslogDeletionPolicy} for details
+     */
+    @Override
+    public void trimUnreferencedTranslogFiles() throws TranslogException {
+        try (ReleasableLock ignored = readLock.acquire()) {
+            engineLifeCycleAware.ensureOpen();
+            translog.trimUnreferencedReaders();
+        } catch (AlreadyClosedException e) {
+            translogEventListener.onTragicFailure(e);
+            throw e;
+        } catch (Exception e) {
+            try {
+                translogEventListener.onFailure("translog trimming failed", e);
+            } catch (Exception inner) {
+                e.addSuppressed(inner);
+            }
+            throw new TranslogException(shardId, "failed to trim translog", e);
+        }
+    }
+
+    /**
+     * Tests whether or not the translog generation should be rolled to a new generation.
+     * This test is based on the size of the current generation compared to the configured generation threshold size.
+     *
+     * @return {@code true} if the current generation should be rolled to a new generation
+     */
+    @Override
+    public boolean shouldRollTranslogGeneration() {
+        return getTranslog(true).shouldRollGeneration();
+    }
+
+    /**
+     * Trims translog for terms below belowTerm and seq# above aboveSeqNo
+     * @see Translog#trimOperations(long, long)
+     */
+    @Override
+    public void trimOperationsFromTranslog(long belowTerm, long aboveSeqNo) throws TranslogException {
+        try (ReleasableLock ignored = readLock.acquire()) {
+            engineLifeCycleAware.ensureOpen();
+            translog.trimOperations(belowTerm, aboveSeqNo);
+        } catch (AlreadyClosedException e) {
+            translogEventListener.onTragicFailure(e);
+            throw e;
+        } catch (Exception e) {
+            try {
+                translogEventListener.onFailure("translog operations trimming failed", e);
+            } catch (Exception inner) {
+                e.addSuppressed(inner);
+            }
+            throw new TranslogException(shardId, "failed to trim translog operations", e);
+        }
+    }
+
+    /**
+     * This method replays the translog to restore the Lucene index which might have been reverted previously.
+     * This ensures that all acknowledged writes are restored correctly when this engine is promoted.
+     *
+     * @return the number of translog operations that have been recovered
+     */
+    @Override
+    public int restoreLocalHistoryFromTranslog(long processedCheckpoint, TranslogRecoveryRunner translogRecoveryRunner) throws IOException {
+        try (ReleasableLock ignored = readLock.acquire()) {
+            engineLifeCycleAware.ensureOpen();
+            try (Translog.Snapshot snapshot = getTranslog(true).newSnapshot(processedCheckpoint + 1, Long.MAX_VALUE)) {
+                return translogRecoveryRunner.run(snapshot);
+            }
+        }
+    }
+
+    /**
+     * Ensures that flushes can succeed, i.e. that there is no pending translog recovery
+     */
+    @Override
+    public void ensureCanFlush() {
+        // translog recovery happens after the engine is fully constructed.
+        // If we are in this stage we have to prevent flushes from this
+        // engine otherwise we might lose documents if the flush succeeds
+        // and the translog recovery fails when we "commit" the translog on flush.
+        if (pendingTranslogRecovery.get()) {
+            throw new IllegalStateException(shardId.toString() + " flushes are disabled - pending translog recovery");
+        }
+    }
+
+    /**
+     * Do not replay translog operations, but make the engine ready.
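+     * This simply clears the pending-recovery flag so that subsequent flushes and commits are
+     * permitted without replaying any history.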
+ */ + @Override + public void skipTranslogRecovery() { + assert pendingTranslogRecovery.get() : "translogRecovery is not pending but should be"; + pendingTranslogRecovery.set(false); // we are good - now we can commit + } + + private Translog openTranslog( + TranslogConfig translogConfig, + LongSupplier primaryTermSupplier, + TranslogDeletionPolicy translogDeletionPolicy, + LongSupplier globalCheckpointSupplier, + LongConsumer persistedSequenceNumberConsumer, + String translogUUID + ) throws IOException { + + return new Translog( + translogConfig, + translogUUID, + translogDeletionPolicy, + globalCheckpointSupplier, + primaryTermSupplier, + persistedSequenceNumberConsumer + ); + } + + /** + * Returns the the translog instance + * @param ensureOpen check if the engine is open + * @return the {@link Translog} instance + */ + @Override + public Translog getTranslog(boolean ensureOpen) { + if (ensureOpen) { + this.engineLifeCycleAware.ensureOpen(); + } + return translog; + } +} diff --git a/server/src/main/java/org/opensearch/index/translog/NoOpTranslogManager.java b/server/src/main/java/org/opensearch/index/translog/NoOpTranslogManager.java new file mode 100644 index 0000000000000..07cae808ce071 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/translog/NoOpTranslogManager.java @@ -0,0 +1,110 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.translog; + +import org.opensearch.common.util.concurrent.ReleasableLock; +import org.opensearch.index.shard.ShardId; + +import java.io.IOException; +import java.util.stream.Stream; + +/** + * The no-op implementation of {@link TranslogManager} that doesn't perform any operation + * + * @opensearch.internal + */ +public class NoOpTranslogManager implements TranslogManager { + + private final Translog.Snapshot emptyTranslogSnapshot; + private final ReleasableLock readLock; + private final Runnable ensureOpen; + private final ShardId shardId; + private final TranslogStats translogStats; + + public NoOpTranslogManager( + ShardId shardId, + ReleasableLock readLock, + Runnable ensureOpen, + TranslogStats translogStats, + Translog.Snapshot emptyTranslogSnapshot + ) throws IOException { + this.emptyTranslogSnapshot = emptyTranslogSnapshot; + this.readLock = readLock; + this.shardId = shardId; + this.ensureOpen = ensureOpen; + this.translogStats = translogStats; + } + + @Override + public void rollTranslogGeneration() throws TranslogException {} + + @Override + public int recoverFromTranslog(TranslogRecoveryRunner translogRecoveryRunner, long localCheckpoint, long recoverUpToSeqNo) + throws IOException { + try (ReleasableLock lock = readLock.acquire()) { + ensureOpen.run(); + try (Translog.Snapshot snapshot = emptyTranslogSnapshot) { + translogRecoveryRunner.run(snapshot); + } catch (final Exception e) { + throw new TranslogException(shardId, "failed to recover from empty translog snapshot", e); + } + } + return emptyTranslogSnapshot.totalOperations(); + } + + @Override + public boolean isTranslogSyncNeeded() { + return false; + } + + @Override + public boolean ensureTranslogSynced(Stream locations) throws IOException { + return false; + } + + @Override + public void syncTranslog() throws IOException {} + + @Override + public TranslogStats getTranslogStats() { + return translogStats; + } + + @Override + public Translog.Location getTranslogLastWriteLocation() 
{ + return new Translog.Location(0, 0, 0); + } + + @Override + public void trimUnreferencedTranslogFiles() throws TranslogException {} + + @Override + public boolean shouldRollTranslogGeneration() { + return false; + } + + @Override + public void trimOperationsFromTranslog(long belowTerm, long aboveSeqNo) throws TranslogException {} + + @Override + public Translog getTranslog(boolean ensureOpen) { + return null; + } + + @Override + public void ensureCanFlush() {} + + @Override + public int restoreLocalHistoryFromTranslog(long processedCheckpoint, TranslogRecoveryRunner translogRecoveryRunner) throws IOException { + return 0; + } + + @Override + public void skipTranslogRecovery() {} +} diff --git a/server/src/main/java/org/opensearch/index/translog/TranslogManager.java b/server/src/main/java/org/opensearch/index/translog/TranslogManager.java new file mode 100644 index 0000000000000..988a88c5d2ae5 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/translog/TranslogManager.java @@ -0,0 +1,108 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.translog; + +import java.io.IOException; +import java.util.stream.Stream; + +/** + * The interface that orchestrates Translog operations and manages the {@link Translog} and interfaces with the Engine + * + * @opensearch.internal + */ +public interface TranslogManager { + + /** + * Rolls the translog generation and cleans unneeded. + */ + void rollTranslogGeneration() throws TranslogException; + + /** + * Performs recovery from the transaction log up to {@code recoverUpToSeqNo} (inclusive). + * This operation will close the engine if the recovery fails. + * + * @param translogRecoveryRunner the translog recovery runner + * @param recoverUpToSeqNo the upper bound, inclusive, of sequence number to be recovered + * @return ops recovered + */ + int recoverFromTranslog(TranslogRecoveryRunner translogRecoveryRunner, long localCheckpoint, long recoverUpToSeqNo) throws IOException; + + /** + * Checks if the underlying storage sync is required. + */ + boolean isTranslogSyncNeeded(); + + /** + * Ensures that all locations in the given stream have been written to the underlying storage. + */ + boolean ensureTranslogSynced(Stream locations) throws IOException; + + /** + * Syncs translog to disk + * @throws IOException the exception while performing the sync operation + */ + void syncTranslog() throws IOException; + + /** + * Translog operation stats + * @return the translog stats + */ + TranslogStats getTranslogStats(); + + /** + * Returns the last location that the translog of this engine has written into. + */ + Translog.Location getTranslogLastWriteLocation(); + + /** + * checks and removes translog files that no longer need to be retained. See + * {@link org.opensearch.index.translog.TranslogDeletionPolicy} for details + */ + void trimUnreferencedTranslogFiles() throws TranslogException; + + /** + * Tests whether or not the translog generation should be rolled to a new generation. + * This test is based on the size of the current generation compared to the configured generation threshold size. 
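+     * Callers that observe a {@code true} result are expected to follow up with {@link #rollTranslogGeneration()}.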
+ * + * @return {@code true} if the current generation should be rolled to a new generation + */ + boolean shouldRollTranslogGeneration(); + + /** + * Trims translog for terms below belowTerm and seq# above aboveSeqNo + * + * @see Translog#trimOperations(long, long) + */ + void trimOperationsFromTranslog(long belowTerm, long aboveSeqNo) throws TranslogException; + + /** + * This method replays translog to restore the Lucene index which might be reverted previously. + * This ensures that all acknowledged writes are restored correctly when this engine is promoted. + * + * @return the number of translog operations have been recovered + */ + int restoreLocalHistoryFromTranslog(long processedCheckpoint, TranslogRecoveryRunner translogRecoveryRunner) throws IOException; + + /** + * Do not replay translog operations, but make the engine be ready. + */ + void skipTranslogRecovery(); + + /** + * Returns the instance of the translog with a precondition + * @param ensureOpen check if the engine is open + * @return the translog instance + */ + Translog getTranslog(boolean ensureOpen); + + /** + * Checks if the translog has a pending recovery + */ + void ensureCanFlush(); +} diff --git a/server/src/main/java/org/opensearch/index/translog/TranslogRecoveryRunner.java b/server/src/main/java/org/opensearch/index/translog/TranslogRecoveryRunner.java new file mode 100644 index 0000000000000..91c9a95b07d58 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/translog/TranslogRecoveryRunner.java @@ -0,0 +1,28 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.translog; + +import java.io.IOException; + +/** + * The interface that defines how {@link Translog.Snapshot} will get replayed into the Engine + * + * @opensearch.internal + */ +@FunctionalInterface +public interface TranslogRecoveryRunner { + + /** + * Recovers a translog snapshot + * @param snapshot the snapshot of translog operations + * @return recoveredOps + * @throws IOException exception while recovering operations + */ + int run(Translog.Snapshot snapshot) throws IOException; +} diff --git a/server/src/main/java/org/opensearch/index/translog/WriteOnlyTranslogManager.java b/server/src/main/java/org/opensearch/index/translog/WriteOnlyTranslogManager.java new file mode 100644 index 0000000000000..09f5f38a9f6a9 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/translog/WriteOnlyTranslogManager.java @@ -0,0 +1,69 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.translog; + +import org.opensearch.common.util.concurrent.ReleasableLock; +import org.opensearch.index.engine.LifecycleAware; +import org.opensearch.index.seqno.LocalCheckpointTracker; +import org.opensearch.index.shard.ShardId; +import org.opensearch.index.translog.listener.TranslogEventListener; + +import java.io.IOException; +import java.util.function.LongSupplier; +import java.util.function.Supplier; + +/*** + * The implementation of {@link TranslogManager} that only orchestrates writes to the underlying {@link Translog} + * + * @opensearch.internal + */ +public class WriteOnlyTranslogManager extends InternalTranslogManager { + + public WriteOnlyTranslogManager( + TranslogConfig translogConfig, + LongSupplier primaryTermSupplier, + LongSupplier globalCheckpointSupplier, + TranslogDeletionPolicy translogDeletionPolicy, + ShardId shardId, + ReleasableLock readLock, + Supplier localCheckpointTrackerSupplier, + String translogUUID, + TranslogEventListener translogEventListener, + LifecycleAware engineLifecycleAware + ) throws IOException { + super( + translogConfig, + primaryTermSupplier, + globalCheckpointSupplier, + translogDeletionPolicy, + shardId, + readLock, + localCheckpointTrackerSupplier, + translogUUID, + translogEventListener, + engineLifecycleAware + ); + } + + @Override + public int restoreLocalHistoryFromTranslog(long processedCheckpoint, TranslogRecoveryRunner translogRecoveryRunner) throws IOException { + return 0; + } + + @Override + public int recoverFromTranslog(TranslogRecoveryRunner translogRecoveryRunner, long localCheckpoint, long recoverUpToSeqNo) + throws IOException { + throw new UnsupportedOperationException("Read only replicas do not have an IndexWriter and cannot recover from a translog."); + } + + @Override + public void skipTranslogRecovery() { + // Do nothing. + } +} diff --git a/server/src/main/java/org/opensearch/index/translog/listener/CompositeTranslogEventListener.java b/server/src/main/java/org/opensearch/index/translog/listener/CompositeTranslogEventListener.java new file mode 100644 index 0000000000000..731b069ab0c74 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/translog/listener/CompositeTranslogEventListener.java @@ -0,0 +1,110 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.translog.listener; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.lucene.store.AlreadyClosedException; +import org.opensearch.ExceptionsHelper; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; + +/** + * The listener that multiplexes other {@link TranslogEventListener} + * + * @opensearch.internal + */ +public final class CompositeTranslogEventListener implements TranslogEventListener { + + private final List listeners; + private final Logger logger = LogManager.getLogger(CompositeTranslogEventListener.class); + + public CompositeTranslogEventListener(Collection listeners) { + for (TranslogEventListener listener : listeners) { + if (listener == null) { + throw new IllegalArgumentException("listeners must be non-null"); + } + } + this.listeners = Collections.unmodifiableList(new ArrayList<>(listeners)); + } + + @Override + public void onAfterTranslogSync() { + List exceptionList = new ArrayList<>(listeners.size()); + for (TranslogEventListener listener : listeners) { + try { + listener.onAfterTranslogSync(); + } catch (Exception ex) { + logger.warn(() -> new ParameterizedMessage("failed to invoke onTranslogSync listener"), ex); + exceptionList.add(ex); + } + } + ExceptionsHelper.maybeThrowRuntimeAndSuppress(exceptionList); + } + + @Override + public void onAfterTranslogRecovery() { + List exceptionList = new ArrayList<>(listeners.size()); + for (TranslogEventListener listener : listeners) { + try { + listener.onAfterTranslogRecovery(); + } catch (Exception ex) { + logger.warn(() -> new ParameterizedMessage("failed to invoke onTranslogRecovery listener"), ex); + exceptionList.add(ex); + } + } + ExceptionsHelper.maybeThrowRuntimeAndSuppress(exceptionList); + } + + @Override + public void onBeginTranslogRecovery() { + List exceptionList = new ArrayList<>(listeners.size()); + for (TranslogEventListener listener : listeners) { + try { + listener.onBeginTranslogRecovery(); + } catch (Exception ex) { + logger.warn(() -> new ParameterizedMessage("failed to invoke onBeginTranslogRecovery listener"), ex); + exceptionList.add(ex); + } + } + ExceptionsHelper.maybeThrowRuntimeAndSuppress(exceptionList); + } + + @Override + public void onFailure(String reason, Exception e) { + List exceptionList = new ArrayList<>(listeners.size()); + for (TranslogEventListener listener : listeners) { + try { + listener.onFailure(reason, e); + } catch (Exception ex) { + logger.warn(() -> new ParameterizedMessage("failed to invoke onFailure listener"), ex); + exceptionList.add(ex); + } + } + ExceptionsHelper.maybeThrowRuntimeAndSuppress(exceptionList); + } + + @Override + public void onTragicFailure(AlreadyClosedException e) { + List exceptionList = new ArrayList<>(listeners.size()); + for (TranslogEventListener listener : listeners) { + try { + listener.onTragicFailure(e); + } catch (Exception ex) { + logger.warn(() -> new ParameterizedMessage("failed to invoke onTragicFailure listener"), ex); + exceptionList.add(ex); + } + } + ExceptionsHelper.maybeThrowRuntimeAndSuppress(exceptionList); + } +} diff --git a/server/src/main/java/org/opensearch/index/translog/listener/TranslogEventListener.java b/server/src/main/java/org/opensearch/index/translog/listener/TranslogEventListener.java new file mode 100644 index 0000000000000..1862b4b9a62b7 --- /dev/null +++ 
b/server/src/main/java/org/opensearch/index/translog/listener/TranslogEventListener.java @@ -0,0 +1,50 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.translog.listener; + +import org.apache.lucene.store.AlreadyClosedException; + +/** + * The listener that gets fired on events related to {@link org.opensearch.index.translog.TranslogManager} + * + * @opensearch.internal + */ +public interface TranslogEventListener { + + TranslogEventListener NOOP_TRANSLOG_EVENT_LISTENER = new TranslogEventListener() { + }; + + /** + * Invoked after translog sync operations + */ + default void onAfterTranslogSync() {} + + /** + * Invoked after recovering operations from translog + */ + default void onAfterTranslogRecovery() {} + + /** + * Invoked before recovering operations from translog + */ + default void onBeginTranslogRecovery() {} + + /** + * Invoked when translog operations run into accessing an already closed resource + * @param ex the exception thrown when accessing a closed resource + */ + default void onTragicFailure(AlreadyClosedException ex) {} + + /** + * Invoked when translog operations run into any other failure + * @param reason the failure reason + * @param ex the failure exception + */ + default void onFailure(String reason, Exception ex) {} +} diff --git a/server/src/main/java/org/opensearch/index/translog/listener/package-info.java b/server/src/main/java/org/opensearch/index/translog/listener/package-info.java new file mode 100644 index 0000000000000..bfb2415881c10 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/translog/listener/package-info.java @@ -0,0 +1,11 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +/** + * Provides mechanism to listen into translog operations + */ +package org.opensearch.index.translog.listener; diff --git a/server/src/main/java/org/opensearch/indices/IndicesModule.java b/server/src/main/java/org/opensearch/indices/IndicesModule.java index 0cb2ff958c787..29ff507ad9fcf 100644 --- a/server/src/main/java/org/opensearch/indices/IndicesModule.java +++ b/server/src/main/java/org/opensearch/indices/IndicesModule.java @@ -282,6 +282,8 @@ protected void configure() { bind(RetentionLeaseSyncer.class).asEagerSingleton(); if (FeatureFlags.isEnabled(FeatureFlags.REPLICATION_TYPE)) { bind(SegmentReplicationCheckpointPublisher.class).asEagerSingleton(); + } else { + bind(SegmentReplicationCheckpointPublisher.class).toInstance(SegmentReplicationCheckpointPublisher.EMPTY); } } diff --git a/server/src/main/java/org/opensearch/indices/RunUnderPrimaryPermit.java b/server/src/main/java/org/opensearch/indices/RunUnderPrimaryPermit.java new file mode 100644 index 0000000000000..29cac1601dc67 --- /dev/null +++ b/server/src/main/java/org/opensearch/indices/RunUnderPrimaryPermit.java @@ -0,0 +1,72 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.indices; + +import org.apache.logging.log4j.Logger; +import org.opensearch.action.ActionListener; +import org.opensearch.common.lease.Releasable; +import org.opensearch.common.util.CancellableThreads; +import org.opensearch.common.util.concurrent.FutureUtils; +import org.opensearch.index.shard.IndexShard; +import org.opensearch.index.shard.IndexShardRelocatedException; +import org.opensearch.threadpool.ThreadPool; + +import java.util.concurrent.CompletableFuture; + +/** + * Execute a Runnable after acquiring the primary's operation permit. + * + * @opensearch.internal + */ +public final class RunUnderPrimaryPermit { + + public static void run( + CancellableThreads.Interruptible runnable, + String reason, + IndexShard primary, + CancellableThreads cancellableThreads, + Logger logger + ) { + cancellableThreads.execute(() -> { + CompletableFuture permit = new CompletableFuture<>(); + final ActionListener onAcquired = new ActionListener<>() { + @Override + public void onResponse(Releasable releasable) { + if (permit.complete(releasable) == false) { + releasable.close(); + } + } + + @Override + public void onFailure(Exception e) { + permit.completeExceptionally(e); + } + }; + primary.acquirePrimaryOperationPermit(onAcquired, ThreadPool.Names.SAME, reason); + try (Releasable ignored = FutureUtils.get(permit)) { + // check that the IndexShard still has the primary authority. This needs to be checked under operation permit to prevent + // races, as IndexShard will switch its authority only when it holds all operation permits, see IndexShard.relocated() + if (primary.isRelocatedPrimary()) { + throw new IndexShardRelocatedException(primary.shardId()); + } + runnable.run(); + } finally { + // just in case we got an exception (likely interrupted) while waiting for the get + permit.whenComplete((r, e) -> { + if (r != null) { + r.close(); + } + if (e != null) { + logger.trace("suppressing exception on completion (it was already bubbled up or the operation was aborted)", e); + } + }); + } + }); + } +} diff --git a/server/src/main/java/org/opensearch/indices/recovery/FileChunkWriter.java b/server/src/main/java/org/opensearch/indices/recovery/FileChunkWriter.java new file mode 100644 index 0000000000000..cb43af3b82e09 --- /dev/null +++ b/server/src/main/java/org/opensearch/indices/recovery/FileChunkWriter.java @@ -0,0 +1,31 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.indices.recovery; + +import org.opensearch.action.ActionListener; +import org.opensearch.common.bytes.BytesReference; +import org.opensearch.index.store.StoreFileMetadata; + +/** + * Writes a partial file chunk to the target store. 
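The RunUnderPrimaryPermit helper extracted above (so both recovery and segment replication can reuse it) follows a complete-or-close pattern: the asynchronously acquired permit is surfaced through a CompletableFuture, the runnable executes only while the permit is held, and a cleanup callback releases a permit that arrives late or after a failure. A standalone sketch of that pattern in plain JDK types, with no OpenSearch classes and illustrative names:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public final class PermitPatternSketch {

    interface Releasable extends AutoCloseable {
        @Override
        void close();
    }

    static void runUnderPermit(CompletableFuture<Releasable> permit, Runnable action) {
        try (Releasable ignored = permit.join()) {      // wait for the asynchronous acquisition
            action.run();
        } finally {
            // If acquisition raced a failure, make sure a late-arriving permit is still closed.
            permit.whenComplete((releasable, error) -> {
                if (releasable != null) {
                    releasable.close();
                }
            });
        }
    }

    public static void main(String[] args) {
        AtomicBoolean released = new AtomicBoolean();
        CompletableFuture<Releasable> permit = new CompletableFuture<>();
        // Idempotent release, so the extra close in the cleanup path above is harmless.
        permit.complete(() -> released.compareAndSet(false, true));
        runUnderPermit(permit, () -> System.out.println("running under primary permit"));
        System.out.println("permit released: " + released.get());
    }
}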
+ * + * @opensearch.internal + */ +@FunctionalInterface +public interface FileChunkWriter { + + void writeFileChunk( + StoreFileMetadata fileMetadata, + long position, + BytesReference content, + boolean lastChunk, + int totalTranslogOps, + ActionListener listener + ); +} diff --git a/server/src/main/java/org/opensearch/indices/recovery/RecoverySourceHandler.java b/server/src/main/java/org/opensearch/indices/recovery/RecoverySourceHandler.java index 0870fd4ca9295..9e219db5a4c96 100644 --- a/server/src/main/java/org/opensearch/indices/recovery/RecoverySourceHandler.java +++ b/server/src/main/java/org/opensearch/indices/recovery/RecoverySourceHandler.java @@ -33,17 +33,13 @@ package org.opensearch.indices.recovery; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.lucene.index.CorruptIndexException; import org.apache.lucene.index.IndexCommit; import org.apache.lucene.index.IndexFormatTooNewException; import org.apache.lucene.index.IndexFormatTooOldException; -import org.apache.lucene.store.IOContext; -import org.apache.lucene.store.IndexInput; import org.apache.lucene.store.RateLimiter; import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.SetOnce; -import org.opensearch.ExceptionsHelper; import org.opensearch.LegacyESVersion; import org.opensearch.action.ActionListener; import org.opensearch.action.ActionRunnable; @@ -55,13 +51,10 @@ import org.opensearch.cluster.routing.ShardRouting; import org.opensearch.common.CheckedRunnable; import org.opensearch.common.StopWatch; -import org.opensearch.common.bytes.BytesArray; -import org.opensearch.common.bytes.BytesReference; import org.opensearch.common.concurrent.GatedCloseable; import org.opensearch.common.lease.Releasable; import org.opensearch.common.lease.Releasables; import org.opensearch.common.logging.Loggers; -import org.opensearch.common.lucene.store.InputStreamIndexInput; import org.opensearch.common.unit.ByteSizeValue; import org.opensearch.common.unit.TimeValue; import org.opensearch.common.util.CancellableThreads; @@ -77,26 +70,22 @@ import org.opensearch.index.seqno.SequenceNumbers; import org.opensearch.index.shard.IndexShard; import org.opensearch.index.shard.IndexShardClosedException; -import org.opensearch.index.shard.IndexShardRelocatedException; import org.opensearch.index.shard.IndexShardState; import org.opensearch.index.store.Store; import org.opensearch.index.store.StoreFileMetadata; import org.opensearch.index.translog.Translog; +import org.opensearch.indices.RunUnderPrimaryPermit; +import org.opensearch.indices.replication.SegmentFileTransferHandler; import org.opensearch.threadpool.ThreadPool; -import org.opensearch.transport.RemoteTransportException; import org.opensearch.transport.Transports; import java.io.Closeable; import java.io.IOException; import java.util.ArrayList; -import java.util.Arrays; import java.util.Collections; import java.util.Comparator; -import java.util.Deque; import java.util.List; import java.util.Locale; -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.ConcurrentLinkedDeque; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; @@ -128,13 +117,13 @@ public class RecoverySourceHandler { private final StartRecoveryRequest request; private final int chunkSizeInBytes; private final RecoveryTargetHandler recoveryTarget; - private final int maxConcurrentFileChunks; private final int 
maxConcurrentOperations; private final ThreadPool threadPool; private final CancellableThreads cancellableThreads = new CancellableThreads(); private final List resources = new CopyOnWriteArrayList<>(); private final ListenableFuture future = new ListenableFuture<>(); public static final String PEER_RECOVERY_NAME = "peer-recovery"; + private final SegmentFileTransferHandler transferHandler; public RecoverySourceHandler( IndexShard shard, @@ -145,15 +134,24 @@ public RecoverySourceHandler( int maxConcurrentFileChunks, int maxConcurrentOperations ) { + this.logger = Loggers.getLogger(RecoverySourceHandler.class, request.shardId(), "recover to " + request.targetNode().getName()); + this.transferHandler = new SegmentFileTransferHandler( + shard, + request.targetNode(), + recoveryTarget, + logger, + threadPool, + cancellableThreads, + fileChunkSizeInBytes, + maxConcurrentFileChunks + ); this.shard = shard; - this.recoveryTarget = recoveryTarget; this.threadPool = threadPool; this.request = request; + this.recoveryTarget = recoveryTarget; this.shardId = this.request.shardId().id(); - this.logger = Loggers.getLogger(getClass(), request.shardId(), "recover to " + request.targetNode().getName()); this.chunkSizeInBytes = fileChunkSizeInBytes; // if the target is on an old version, it won't be able to handle out-of-order file chunks. - this.maxConcurrentFileChunks = maxConcurrentFileChunks; this.maxConcurrentOperations = maxConcurrentOperations; } @@ -192,7 +190,7 @@ public void recoverToTarget(ActionListener listener) { final SetOnce retentionLeaseRef = new SetOnce<>(); - runUnderPrimaryPermit(() -> { + RunUnderPrimaryPermit.run(() -> { final IndexShardRoutingTable routingTable = shard.getReplicationGroup().getRoutingTable(); ShardRouting targetShardRouting = routingTable.getByAllocationId(request.targetAllocationId()); if (targetShardRouting == null) { @@ -286,7 +284,7 @@ && isTargetSameHistory() }); final StepListener deleteRetentionLeaseStep = new StepListener<>(); - runUnderPrimaryPermit(() -> { + RunUnderPrimaryPermit.run(() -> { try { // If the target previously had a copy of this shard then a file-based recovery might move its global // checkpoint backwards. We must therefore remove any existing retention lease so that we can create a @@ -332,7 +330,7 @@ && isTargetSameHistory() * make sure to do this before sampling the max sequence number in the next step, to ensure that we send * all documents up to maxSeqNo in phase2. 
*/ - runUnderPrimaryPermit( + RunUnderPrimaryPermit.run( () -> shard.initiateTracking(request.targetAllocationId()), shardId + " initiating tracking of " + request.targetAllocationId(), shard, @@ -420,50 +418,6 @@ private int countNumberOfHistoryOperations(long startingSeqNo) throws IOExceptio return shard.countNumberOfHistoryOperations(PEER_RECOVERY_NAME, startingSeqNo, Long.MAX_VALUE); } - static void runUnderPrimaryPermit( - CancellableThreads.Interruptible runnable, - String reason, - IndexShard primary, - CancellableThreads cancellableThreads, - Logger logger - ) { - cancellableThreads.execute(() -> { - CompletableFuture permit = new CompletableFuture<>(); - final ActionListener onAcquired = new ActionListener() { - @Override - public void onResponse(Releasable releasable) { - if (permit.complete(releasable) == false) { - releasable.close(); - } - } - - @Override - public void onFailure(Exception e) { - permit.completeExceptionally(e); - } - }; - primary.acquirePrimaryOperationPermit(onAcquired, ThreadPool.Names.SAME, reason); - try (Releasable ignored = FutureUtils.get(permit)) { - // check that the IndexShard still has the primary authority. This needs to be checked under operation permit to prevent - // races, as IndexShard will switch its authority only when it holds all operation permits, see IndexShard.relocated() - if (primary.isRelocatedPrimary()) { - throw new IndexShardRelocatedException(primary.shardId()); - } - runnable.run(); - } finally { - // just in case we got an exception (likely interrupted) while waiting for the get - permit.whenComplete((r, e) -> { - if (r != null) { - r.close(); - } - if (e != null) { - logger.trace("suppressing exception on completion (it was already bubbled up or the operation was aborted)", e); - } - }); - } - }); - } - /** * Increases the store reference and returns a {@link Releasable} that will decrease the store reference using the generic thread pool. * We must never release the store using an interruptible thread as we can risk invalidating the node lock. @@ -708,8 +662,19 @@ void phase1(IndexCommit snapshot, long startingSeqNo, IntSupplier translogOps, A } } + void sendFiles(Store store, StoreFileMetadata[] files, IntSupplier translogOps, ActionListener listener) { + final MultiChunkTransfer transfer = transferHandler.createTransfer( + store, + files, + translogOps, + listener + ); + resources.add(transfer); + transfer.start(); + } + void createRetentionLease(final long startingSeqNo, ActionListener listener) { - runUnderPrimaryPermit(() -> { + RunUnderPrimaryPermit.run(() -> { // Clone the peer recovery retention lease belonging to the source shard. We are retaining history between the the local // checkpoint of the safe commit we're creating and this lease's retained seqno with the retention lock, and by cloning an // existing lease we (approximately) know that all our peers are also retaining history as requested by the cloned lease. If @@ -983,7 +948,7 @@ void finalizeRecovery(long targetLocalCheckpoint, long trimAboveSeqNo, ActionLis * marking the shard as in-sync. If the relocation handoff holds all the permits then after the handoff completes and we acquire * the permit then the state of the shard will be relocated and this recovery will fail. 
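The refactored sendFiles() above now only builds a transfer via SegmentFileTransferHandler, registers it as a closeable resource, and starts it; actual chunk delivery goes through the FileChunkWriter interface introduced earlier in this patch. Because FileChunkWriter is a @FunctionalInterface, a unit test could supply the chunk sink as a plain lambda. A hypothetical sketch (the counting writer is not part of the patch):

import org.opensearch.indices.recovery.FileChunkWriter;

import java.util.concurrent.atomic.AtomicLong;

public final class CountingFileChunkWriterSketch {

    // Hypothetical test double: counts bytes instead of sending them over the transport layer.
    public static FileChunkWriter countingWriter(AtomicLong bytesWritten) {
        return (fileMetadata, position, content, lastChunk, totalTranslogOps, listener) -> {
            bytesWritten.addAndGet(content.length());
            listener.onResponse(null);      // acknowledge the chunk immediately
        };
    }
}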
*/ - runUnderPrimaryPermit( + RunUnderPrimaryPermit.run( () -> shard.markAllocationIdAsInSync(request.targetAllocationId(), targetLocalCheckpoint), shardId + " marking " + request.targetAllocationId() + " as in sync", shard, @@ -995,7 +960,7 @@ void finalizeRecovery(long targetLocalCheckpoint, long trimAboveSeqNo, ActionLis cancellableThreads.checkForCancel(); recoveryTarget.finalizeRecovery(globalCheckpoint, trimAboveSeqNo, finalizeListener); finalizeListener.whenComplete(r -> { - runUnderPrimaryPermit( + RunUnderPrimaryPermit.run( () -> shard.updateGlobalCheckpointForShard(request.targetAllocationId(), globalCheckpoint), shardId + " updating " + request.targetAllocationId() + "'s global checkpoint", shard, @@ -1056,121 +1021,6 @@ public String toString() { + '}'; } - /** - * A file chunk from the recovery source - * - * @opensearch.internal - */ - private static class FileChunk implements MultiChunkTransfer.ChunkRequest, Releasable { - final StoreFileMetadata md; - final BytesReference content; - final long position; - final boolean lastChunk; - final Releasable onClose; - - FileChunk(StoreFileMetadata md, BytesReference content, long position, boolean lastChunk, Releasable onClose) { - this.md = md; - this.content = content; - this.position = position; - this.lastChunk = lastChunk; - this.onClose = onClose; - } - - @Override - public boolean lastChunk() { - return lastChunk; - } - - @Override - public void close() { - onClose.close(); - } - } - - void sendFiles(Store store, StoreFileMetadata[] files, IntSupplier translogOps, ActionListener listener) { - ArrayUtil.timSort(files, Comparator.comparingLong(StoreFileMetadata::length)); // send smallest first - - final MultiChunkTransfer multiFileSender = new MultiChunkTransfer( - logger, - threadPool.getThreadContext(), - listener, - maxConcurrentFileChunks, - Arrays.asList(files) - ) { - - final Deque buffers = new ConcurrentLinkedDeque<>(); - InputStreamIndexInput currentInput = null; - long offset = 0; - - @Override - protected void onNewResource(StoreFileMetadata md) throws IOException { - offset = 0; - IOUtils.close(currentInput, () -> currentInput = null); - final IndexInput indexInput = store.directory().openInput(md.name(), IOContext.READONCE); - currentInput = new InputStreamIndexInput(indexInput, md.length()) { - @Override - public void close() throws IOException { - IOUtils.close(indexInput, super::close); // InputStreamIndexInput's close is a noop - } - }; - } - - private byte[] acquireBuffer() { - final byte[] buffer = buffers.pollFirst(); - if (buffer != null) { - return buffer; - } - return new byte[chunkSizeInBytes]; - } - - @Override - protected FileChunk nextChunkRequest(StoreFileMetadata md) throws IOException { - assert Transports.assertNotTransportThread("read file chunk"); - cancellableThreads.checkForCancel(); - final byte[] buffer = acquireBuffer(); - final int bytesRead = currentInput.read(buffer); - if (bytesRead == -1) { - throw new CorruptIndexException("file truncated; length=" + md.length() + " offset=" + offset, md.name()); - } - final boolean lastChunk = offset + bytesRead == md.length(); - final FileChunk chunk = new FileChunk( - md, - new BytesArray(buffer, 0, bytesRead), - offset, - lastChunk, - () -> buffers.addFirst(buffer) - ); - offset += bytesRead; - return chunk; - } - - @Override - protected void executeChunkRequest(FileChunk request, ActionListener listener) { - cancellableThreads.checkForCancel(); - recoveryTarget.writeFileChunk( - request.md, - request.position, - request.content, - 
request.lastChunk, - translogOps.getAsInt(), - ActionListener.runBefore(listener, request::close) - ); - } - - @Override - protected void handleError(StoreFileMetadata md, Exception e) throws Exception { - handleErrorOnSendFiles(store, e, new StoreFileMetadata[] { md }); - } - - @Override - public void close() throws IOException { - IOUtils.close(currentInput, () -> currentInput = null); - } - }; - resources.add(multiFileSender); - multiFileSender.start(); - } - private void cleanFiles( Store store, Store.MetadataSnapshot sourceMetadata, @@ -1194,52 +1044,9 @@ private void cleanFiles( ActionListener.delegateResponse(listener, (l, e) -> ActionListener.completeWith(l, () -> { StoreFileMetadata[] mds = StreamSupport.stream(sourceMetadata.spliterator(), false).toArray(StoreFileMetadata[]::new); ArrayUtil.timSort(mds, Comparator.comparingLong(StoreFileMetadata::length)); // check small files first - handleErrorOnSendFiles(store, e, mds); + transferHandler.handleErrorOnSendFiles(store, e, mds); throw e; })) ); } - - private void handleErrorOnSendFiles(Store store, Exception e, StoreFileMetadata[] mds) throws Exception { - final IOException corruptIndexException = ExceptionsHelper.unwrapCorruption(e); - assert Transports.assertNotTransportThread(RecoverySourceHandler.this + "[handle error on send/clean files]"); - if (corruptIndexException != null) { - Exception localException = null; - for (StoreFileMetadata md : mds) { - cancellableThreads.checkForCancel(); - logger.debug("checking integrity for file {} after remove corruption exception", md); - if (store.checkIntegrityNoException(md) == false) { // we are corrupted on the primary -- fail! - logger.warn("{} Corrupted file detected {} checksum mismatch", shardId, md); - if (localException == null) { - localException = corruptIndexException; - } - failEngine(corruptIndexException); - } - } - if (localException != null) { - throw localException; - } else { // corruption has happened on the way to replica - RemoteTransportException remoteException = new RemoteTransportException( - "File corruption occurred on recovery but checksums are ok", - null - ); - remoteException.addSuppressed(e); - logger.warn( - () -> new ParameterizedMessage( - "{} Remote file corruption on node {}, recovering {}. 
local checksum OK", - shardId, - request.targetNode(), - mds - ), - corruptIndexException - ); - throw remoteException; - } - } - throw e; - } - - protected void failEngine(IOException cause) { - shard.failShard("recovery", cause); - } } diff --git a/server/src/main/java/org/opensearch/indices/recovery/RecoveryTarget.java b/server/src/main/java/org/opensearch/indices/recovery/RecoveryTarget.java index 1735bb015c90c..426409f7a5b65 100644 --- a/server/src/main/java/org/opensearch/indices/recovery/RecoveryTarget.java +++ b/server/src/main/java/org/opensearch/indices/recovery/RecoveryTarget.java @@ -77,9 +77,7 @@ public class RecoveryTarget extends ReplicationTarget implements RecoveryTargetH private static final String RECOVERY_PREFIX = "recovery."; private final DiscoveryNode sourceNode; - private final CancellableThreads cancellableThreads; protected final MultiFileWriter multiFileWriter; - protected final Store store; // latch that can be used to blockingly wait for RecoveryTarget to be closed private final CountDownLatch closedLatch = new CountDownLatch(1); @@ -93,13 +91,10 @@ public class RecoveryTarget extends ReplicationTarget implements RecoveryTargetH */ public RecoveryTarget(IndexShard indexShard, DiscoveryNode sourceNode, ReplicationListener listener) { super("recovery_status", indexShard, indexShard.recoveryState().getIndex(), listener); - this.cancellableThreads = new CancellableThreads(); this.sourceNode = sourceNode; indexShard.recoveryStats().incCurrentAsTarget(); - this.store = indexShard.store(); final String tempFilePrefix = getPrefix() + UUIDs.randomBase64UUID() + "."; this.multiFileWriter = new MultiFileWriter(indexShard.store(), stateIndex, tempFilePrefix, logger, this::ensureRefCount); - store.incRef(); } /** @@ -132,11 +127,6 @@ public CancellableThreads cancellableThreads() { return cancellableThreads; } - public Store store() { - ensureRefCount(); - return store; - } - public String description() { return "recovery from " + source(); } @@ -258,14 +248,6 @@ protected void onDone() { indexShard.postRecovery("peer recovery done"); } - /** - * if {@link #cancellableThreads()} was used, the threads will be interrupted. 
- */ - @Override - protected void onCancel(String reason) { - cancellableThreads.cancel(reason); - } - /*** Implementation of {@link RecoveryTargetHandler } */ @Override diff --git a/server/src/main/java/org/opensearch/indices/recovery/RecoveryTargetHandler.java b/server/src/main/java/org/opensearch/indices/recovery/RecoveryTargetHandler.java index 84b6ec170d3f7..c750c0e88364b 100644 --- a/server/src/main/java/org/opensearch/indices/recovery/RecoveryTargetHandler.java +++ b/server/src/main/java/org/opensearch/indices/recovery/RecoveryTargetHandler.java @@ -32,11 +32,9 @@ package org.opensearch.indices.recovery; import org.opensearch.action.ActionListener; -import org.opensearch.common.bytes.BytesReference; import org.opensearch.index.seqno.ReplicationTracker; import org.opensearch.index.seqno.RetentionLeases; import org.opensearch.index.store.Store; -import org.opensearch.index.store.StoreFileMetadata; import org.opensearch.index.translog.Translog; import java.util.List; @@ -46,7 +44,7 @@ * * @opensearch.internal */ -public interface RecoveryTargetHandler { +public interface RecoveryTargetHandler extends FileChunkWriter { /** * Prepares the target to receive translog operations, after all file have been copied @@ -123,15 +121,5 @@ void receiveFileInfo( */ void cleanFiles(int totalTranslogOps, long globalCheckpoint, Store.MetadataSnapshot sourceMetadata, ActionListener listener); - /** writes a partial file chunk to the target store */ - void writeFileChunk( - StoreFileMetadata fileMetadata, - long position, - BytesReference content, - boolean lastChunk, - int totalTranslogOps, - ActionListener listener - ); - default void cancel() {} } diff --git a/server/src/main/java/org/opensearch/indices/recovery/RemoteRecoveryTargetHandler.java b/server/src/main/java/org/opensearch/indices/recovery/RemoteRecoveryTargetHandler.java index ab6466feb11f8..e7ae62c1bee7d 100644 --- a/server/src/main/java/org/opensearch/indices/recovery/RemoteRecoveryTargetHandler.java +++ b/server/src/main/java/org/opensearch/indices/recovery/RemoteRecoveryTargetHandler.java @@ -34,8 +34,6 @@ import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; -import org.apache.lucene.store.RateLimiter; -import org.opensearch.OpenSearchException; import org.opensearch.action.ActionListener; import org.opensearch.cluster.node.DiscoveryNode; import org.opensearch.common.bytes.BytesReference; @@ -46,12 +44,12 @@ import org.opensearch.index.store.Store; import org.opensearch.index.store.StoreFileMetadata; import org.opensearch.index.translog.Translog; +import org.opensearch.indices.replication.RemoteSegmentFileChunkWriter; import org.opensearch.transport.EmptyTransportResponseHandler; import org.opensearch.transport.TransportRequestOptions; import org.opensearch.transport.TransportResponse; import org.opensearch.transport.TransportService; -import java.io.IOException; import java.util.List; import java.util.concurrent.atomic.AtomicLong; import java.util.function.Consumer; @@ -72,13 +70,10 @@ public class RemoteRecoveryTargetHandler implements RecoveryTargetHandler { private final RecoverySettings recoverySettings; private final TransportRequestOptions translogOpsRequestOptions; - private final TransportRequestOptions fileChunkRequestOptions; - private final AtomicLong bytesSinceLastPause = new AtomicLong(); private final AtomicLong requestSeqNoGenerator = new AtomicLong(0); - - private final Consumer onSourceThrottle; private final RetryableTransportClient retryableTransportClient; + private final 
RemoteSegmentFileChunkWriter fileChunkWriter; public RemoteRecoveryTargetHandler( long recoveryId, @@ -102,15 +97,19 @@ public RemoteRecoveryTargetHandler( this.shardId = shardId; this.targetNode = targetNode; this.recoverySettings = recoverySettings; - this.onSourceThrottle = onSourceThrottle; this.translogOpsRequestOptions = TransportRequestOptions.builder() .withType(TransportRequestOptions.Type.RECOVERY) .withTimeout(recoverySettings.internalActionLongTimeout()) .build(); - this.fileChunkRequestOptions = TransportRequestOptions.builder() - .withType(TransportRequestOptions.Type.RECOVERY) - .withTimeout(recoverySettings.internalActionTimeout()) - .build(); + this.fileChunkWriter = new RemoteSegmentFileChunkWriter( + recoveryId, + recoverySettings, + retryableTransportClient, + shardId, + PeerRecoveryTargetService.Actions.FILE_CHUNK, + requestSeqNoGenerator, + onSourceThrottle + ); } public DiscoveryNode targetNode() { @@ -235,6 +234,11 @@ public void cleanFiles( retryableTransportClient.executeRetryableAction(action, request, responseListener, reader); } + @Override + public void cancel() { + retryableTransportClient.cancel(); + } + @Override public void writeFileChunk( StoreFileMetadata fileMetadata, @@ -244,57 +248,6 @@ public void writeFileChunk( int totalTranslogOps, ActionListener listener ) { - // Pause using the rate limiter, if desired, to throttle the recovery - final long throttleTimeInNanos; - // always fetch the ratelimiter - it might be updated in real-time on the recovery settings - final RateLimiter rl = recoverySettings.rateLimiter(); - if (rl != null) { - long bytes = bytesSinceLastPause.addAndGet(content.length()); - if (bytes > rl.getMinPauseCheckBytes()) { - // Time to pause - bytesSinceLastPause.addAndGet(-bytes); - try { - throttleTimeInNanos = rl.pause(bytes); - onSourceThrottle.accept(throttleTimeInNanos); - } catch (IOException e) { - throw new OpenSearchException("failed to pause recovery", e); - } - } else { - throttleTimeInNanos = 0; - } - } else { - throttleTimeInNanos = 0; - } - - final String action = PeerRecoveryTargetService.Actions.FILE_CHUNK; - final long requestSeqNo = requestSeqNoGenerator.getAndIncrement(); - /* we send estimateTotalOperations with every request since we collect stats on the target and that way we can - * see how many translog ops we accumulate while copying files across the network. A future optimization - * would be in to restart file copy again (new deltas) if we have too many translog ops are piling up. 
- */ - final FileChunkRequest request = new FileChunkRequest( - recoveryId, - requestSeqNo, - shardId, - fileMetadata, - position, - content, - lastChunk, - totalTranslogOps, - throttleTimeInNanos - ); - final Writeable.Reader reader = in -> TransportResponse.Empty.INSTANCE; - retryableTransportClient.executeRetryableAction( - action, - request, - fileChunkRequestOptions, - ActionListener.map(listener, r -> null), - reader - ); - } - - @Override - public void cancel() { - retryableTransportClient.cancel(); + fileChunkWriter.writeFileChunk(fileMetadata, position, content, lastChunk, totalTranslogOps, listener); } } diff --git a/server/src/main/java/org/opensearch/indices/recovery/RetryableTransportClient.java b/server/src/main/java/org/opensearch/indices/recovery/RetryableTransportClient.java index bc10cc80b7fdc..a7113fb3fee5c 100644 --- a/server/src/main/java/org/opensearch/indices/recovery/RetryableTransportClient.java +++ b/server/src/main/java/org/opensearch/indices/recovery/RetryableTransportClient.java @@ -74,7 +74,7 @@ public void executeRetryableAction( executeRetryableAction(action, request, options, actionListener, reader); } - void executeRetryableAction( + public void executeRetryableAction( String action, TransportRequest request, TransportRequestOptions options, diff --git a/server/src/main/java/org/opensearch/indices/replication/GetSegmentFilesRequest.java b/server/src/main/java/org/opensearch/indices/replication/GetSegmentFilesRequest.java index 21749d3fe7d8a..daad33ed93f28 100644 --- a/server/src/main/java/org/opensearch/indices/replication/GetSegmentFilesRequest.java +++ b/server/src/main/java/org/opensearch/indices/replication/GetSegmentFilesRequest.java @@ -57,4 +57,8 @@ public void writeTo(StreamOutput out) throws IOException { public ReplicationCheckpoint getCheckpoint() { return checkpoint; } + + public List getFilesToFetch() { + return filesToFetch; + } } diff --git a/server/src/main/java/org/opensearch/indices/replication/OngoingSegmentReplications.java b/server/src/main/java/org/opensearch/indices/replication/OngoingSegmentReplications.java new file mode 100644 index 0000000000000..6302d364fc6d1 --- /dev/null +++ b/server/src/main/java/org/opensearch/indices/replication/OngoingSegmentReplications.java @@ -0,0 +1,230 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.indices.replication; + +import org.opensearch.OpenSearchException; +import org.opensearch.action.ActionListener; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.common.util.concurrent.ConcurrentCollections; +import org.opensearch.index.IndexService; +import org.opensearch.index.shard.IndexShard; +import org.opensearch.index.shard.ShardId; +import org.opensearch.indices.IndicesService; +import org.opensearch.indices.recovery.FileChunkWriter; +import org.opensearch.indices.recovery.RecoverySettings; +import org.opensearch.indices.replication.checkpoint.ReplicationCheckpoint; +import org.opensearch.indices.replication.common.CopyState; + +import java.io.IOException; +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + +/** + * Manages references to ongoing segrep events on a node. + * Each replica will have a new {@link SegmentReplicationSourceHandler} created when starting replication. 
+ * CopyStates will be cached for reuse between replicas and only released when all replicas have finished copying segments. + * + * @opensearch.internal + */ +class OngoingSegmentReplications { + + private final RecoverySettings recoverySettings; + private final IndicesService indicesService; + private final Map copyStateMap; + private final Map nodesToHandlers; + + /** + * Constructor. + * + * @param indicesService {@link IndicesService} + * @param recoverySettings {@link RecoverySettings} + */ + OngoingSegmentReplications(IndicesService indicesService, RecoverySettings recoverySettings) { + this.indicesService = indicesService; + this.recoverySettings = recoverySettings; + this.copyStateMap = Collections.synchronizedMap(new HashMap<>()); + this.nodesToHandlers = ConcurrentCollections.newConcurrentMap(); + } + + /** + * Operations on the {@link #copyStateMap} member. + */ + + /** + * A synchronized method that checks {@link #copyStateMap} for the given {@link ReplicationCheckpoint} key + * and returns the cached value if one is present. If the key is not present, a {@link CopyState} + * object is constructed and stored in the map before being returned. + */ + synchronized CopyState getCachedCopyState(ReplicationCheckpoint checkpoint) throws IOException { + if (isInCopyStateMap(checkpoint)) { + final CopyState copyState = fetchFromCopyStateMap(checkpoint); + // we incref the copyState for every replica that is using this checkpoint. + // decref will happen when copy completes. + copyState.incRef(); + return copyState; + } else { + // From the checkpoint's shard ID, fetch the IndexShard + ShardId shardId = checkpoint.getShardId(); + final IndexService indexService = indicesService.indexService(shardId.getIndex()); + final IndexShard indexShard = indexService.getShard(shardId.id()); + // build the CopyState object and cache it before returning + final CopyState copyState = new CopyState(checkpoint, indexShard); + + /** + * Use the checkpoint from the request as the key in the map, rather than + * the checkpoint from the created CopyState. This maximizes cache hits + * if replication targets make a request with an older checkpoint. + * Replication targets are expected to fetch the checkpoint in the response + * CopyState to bring themselves up to date. + */ + addToCopyStateMap(checkpoint, copyState); + return copyState; + } + } + + /** + * Start sending files to the replica. + * + * @param request {@link GetSegmentFilesRequest} + * @param listener {@link ActionListener} that resolves when sending files is complete. + */ + void startSegmentCopy(GetSegmentFilesRequest request, ActionListener listener) { + final DiscoveryNode node = request.getTargetNode(); + final SegmentReplicationSourceHandler handler = nodesToHandlers.get(node); + if (handler != null) { + if (handler.isReplicating()) { + throw new OpenSearchException( + "Replication to shard {}, on node {} has already started", + request.getCheckpoint().getShardId(), + request.getTargetNode() + ); + } + // update the given listener to release the CopyState before it resolves. 
+ final ActionListener wrappedListener = ActionListener.runBefore(listener, () -> { + final SegmentReplicationSourceHandler sourceHandler = nodesToHandlers.remove(node); + if (sourceHandler != null) { + removeCopyState(sourceHandler.getCopyState()); + } + }); + handler.sendFiles(request, wrappedListener); + } else { + listener.onResponse(new GetSegmentFilesResponse(Collections.emptyList())); + } + } + + /** + * Cancel any ongoing replications for a given {@link DiscoveryNode} + * + * @param node {@link DiscoveryNode} node for which to cancel replication events. + */ + void cancelReplication(DiscoveryNode node) { + final SegmentReplicationSourceHandler handler = nodesToHandlers.remove(node); + if (handler != null) { + handler.cancel("Cancel on node left"); + removeCopyState(handler.getCopyState()); + } + } + + /** + * Prepare for a Replication event. This method constructs a {@link CopyState} holding files to be sent off of the current + * nodes's store. This state is intended to be sent back to Replicas before copy is initiated so the replica can perform a diff against its + * local store. It will then build a handler to orchestrate the segment copy that will be stored locally and started on a subsequent request from replicas + * with the list of required files. + * + * @param request {@link CheckpointInfoRequest} + * @param fileChunkWriter {@link FileChunkWriter} writer to handle sending files over the transport layer. + * @return {@link CopyState} the built CopyState for this replication event. + * @throws IOException - When there is an IO error building CopyState. + */ + CopyState prepareForReplication(CheckpointInfoRequest request, FileChunkWriter fileChunkWriter) throws IOException { + final CopyState copyState = getCachedCopyState(request.getCheckpoint()); + if (nodesToHandlers.putIfAbsent( + request.getTargetNode(), + createTargetHandler(request.getTargetNode(), copyState, fileChunkWriter) + ) != null) { + throw new OpenSearchException( + "Shard copy {} on node {} already replicating", + request.getCheckpoint().getShardId(), + request.getTargetNode() + ); + } + return copyState; + } + + /** + * Cancel all Replication events for the given shard, intended to be called when the current primary is shutting down. + * + * @param shard {@link IndexShard} + * @param reason {@link String} - Reason for the cancel + */ + synchronized void cancel(IndexShard shard, String reason) { + for (SegmentReplicationSourceHandler entry : nodesToHandlers.values()) { + if (entry.getCopyState().getShard().equals(shard)) { + entry.cancel(reason); + } + } + copyStateMap.clear(); + } + + /** + * Checks if the {@link #copyStateMap} has the input {@link ReplicationCheckpoint} + * as a key by invoking {@link Map#containsKey(Object)}. + */ + boolean isInCopyStateMap(ReplicationCheckpoint replicationCheckpoint) { + return copyStateMap.containsKey(replicationCheckpoint); + } + + int size() { + return nodesToHandlers.size(); + } + + int cachedCopyStateSize() { + return copyStateMap.size(); + } + + private SegmentReplicationSourceHandler createTargetHandler(DiscoveryNode node, CopyState copyState, FileChunkWriter fileChunkWriter) { + return new SegmentReplicationSourceHandler( + node, + fileChunkWriter, + copyState.getShard().getThreadPool(), + copyState, + Math.toIntExact(recoverySettings.getChunkSize().getBytes()), + recoverySettings.getMaxConcurrentFileChunks() + ); + } + + /** + * Adds the input {@link CopyState} object to {@link #copyStateMap}. + * The key is the CopyState's {@link ReplicationCheckpoint} object. 
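The copyStateMap above is a reference-counted cache: a cache hit increments the CopyState reference count, a miss builds and stores a new entry, and the entry is dropped only when the last replica releases its reference, as removeCopyState further below shows. A standalone sketch of that scheme in plain JDK types (illustrative names, not the OpenSearch API):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public final class RefCountedCacheSketch<K, V> {

    private static final class Entry<V> {
        final V value;
        int refCount = 1;          // the first requester holds the initial reference
        Entry(V value) { this.value = value; }
    }

    private final Map<K, Entry<V>> cache = new HashMap<>();

    public synchronized V acquire(K key, Function<K, V> factory) {
        Entry<V> entry = cache.get(key);
        if (entry == null) {
            entry = new Entry<>(factory.apply(key));
            cache.put(key, entry);
        } else {
            entry.refCount++;       // another replica reuses the same cached state
        }
        return entry.value;
    }

    public synchronized void release(K key) {
        Entry<V> entry = cache.get(key);
        if (entry != null && --entry.refCount == 0) {
            cache.remove(key);      // last user released: drop the cached state
        }
    }
}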
+ */ + private void addToCopyStateMap(ReplicationCheckpoint checkpoint, CopyState copyState) { + copyStateMap.putIfAbsent(checkpoint, copyState); + } + + /** + * Given a {@link ReplicationCheckpoint}, return the corresponding + * {@link CopyState} object, if any, from {@link #copyStateMap}. + */ + private CopyState fetchFromCopyStateMap(ReplicationCheckpoint replicationCheckpoint) { + return copyStateMap.get(replicationCheckpoint); + } + + /** + * Remove a CopyState. Intended to be called after a replication event completes. + * This method will remove a copyState from the copyStateMap only if its refCount hits 0. + * + * @param copyState {@link CopyState} + */ + private synchronized void removeCopyState(CopyState copyState) { + if (copyState.decRef() == true) { + copyStateMap.remove(copyState.getRequestedReplicationCheckpoint()); + } + } +} diff --git a/server/src/main/java/org/opensearch/indices/replication/RemoteSegmentFileChunkWriter.java b/server/src/main/java/org/opensearch/indices/replication/RemoteSegmentFileChunkWriter.java new file mode 100644 index 0000000000000..05f1c9d757e5c --- /dev/null +++ b/server/src/main/java/org/opensearch/indices/replication/RemoteSegmentFileChunkWriter.java @@ -0,0 +1,125 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.indices.replication; + +import org.apache.lucene.store.RateLimiter; +import org.opensearch.OpenSearchException; +import org.opensearch.action.ActionListener; +import org.opensearch.common.bytes.BytesReference; +import org.opensearch.common.io.stream.Writeable; +import org.opensearch.index.shard.ShardId; +import org.opensearch.index.store.StoreFileMetadata; +import org.opensearch.indices.recovery.FileChunkRequest; +import org.opensearch.indices.recovery.RecoverySettings; +import org.opensearch.indices.recovery.RetryableTransportClient; +import org.opensearch.indices.recovery.FileChunkWriter; +import org.opensearch.transport.TransportRequestOptions; +import org.opensearch.transport.TransportResponse; + +import java.io.IOException; +import java.util.concurrent.atomic.AtomicLong; +import java.util.function.Consumer; + +/** + * This class handles sending file chunks over the transport layer to a target shard. 
+ * + * @opensearch.internal + */ +public final class RemoteSegmentFileChunkWriter implements FileChunkWriter { + + private final AtomicLong requestSeqNoGenerator; + private final RetryableTransportClient retryableTransportClient; + private final ShardId shardId; + private final RecoverySettings recoverySettings; + private final long replicationId; + private final AtomicLong bytesSinceLastPause = new AtomicLong(); + private final TransportRequestOptions fileChunkRequestOptions; + private final Consumer onSourceThrottle; + private final String action; + + public RemoteSegmentFileChunkWriter( + long replicationId, + RecoverySettings recoverySettings, + RetryableTransportClient retryableTransportClient, + ShardId shardId, + String action, + AtomicLong requestSeqNoGenerator, + Consumer onSourceThrottle + ) { + this.replicationId = replicationId; + this.recoverySettings = recoverySettings; + this.retryableTransportClient = retryableTransportClient; + this.shardId = shardId; + this.requestSeqNoGenerator = requestSeqNoGenerator; + this.onSourceThrottle = onSourceThrottle; + this.fileChunkRequestOptions = TransportRequestOptions.builder() + .withType(TransportRequestOptions.Type.RECOVERY) + .withTimeout(recoverySettings.internalActionTimeout()) + .build(); + + this.action = action; + } + + @Override + public void writeFileChunk( + StoreFileMetadata fileMetadata, + long position, + BytesReference content, + boolean lastChunk, + int totalTranslogOps, + ActionListener listener + ) { + // Pause using the rate limiter, if desired, to throttle the recovery + final long throttleTimeInNanos; + // always fetch the ratelimiter - it might be updated in real-time on the recovery settings + final RateLimiter rl = recoverySettings.rateLimiter(); + if (rl != null) { + long bytes = bytesSinceLastPause.addAndGet(content.length()); + if (bytes > rl.getMinPauseCheckBytes()) { + // Time to pause + bytesSinceLastPause.addAndGet(-bytes); + try { + throttleTimeInNanos = rl.pause(bytes); + onSourceThrottle.accept(throttleTimeInNanos); + } catch (IOException e) { + throw new OpenSearchException("failed to pause recovery", e); + } + } else { + throttleTimeInNanos = 0; + } + } else { + throttleTimeInNanos = 0; + } + + final long requestSeqNo = requestSeqNoGenerator.getAndIncrement(); + /* we send estimateTotalOperations with every request since we collect stats on the target and that way we can + * see how many translog ops we accumulate while copying files across the network. A future optimization + * would be in to restart file copy again (new deltas) if we have too many translog ops are piling up. 
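The throttling block above accumulates bytes since the last pause and only consults the rate limiter once enough traffic has built up, keeping per-chunk overhead low. A standalone sketch of the same accounting using plain JDK types instead of Lucene's RateLimiter (the names and the sleep-based pause are illustrative):

import java.util.concurrent.atomic.AtomicLong;

public final class ThrottleSketch {

    private final AtomicLong bytesSinceLastPause = new AtomicLong();
    private final long minPauseCheckBytes;
    private final double bytesPerSecond;

    public ThrottleSketch(long minPauseCheckBytes, double bytesPerSecond) {
        this.minPauseCheckBytes = minPauseCheckBytes;
        this.bytesPerSecond = bytesPerSecond;
    }

    /** Returns the nanoseconds spent pausing for this chunk (0 if no pause was needed). */
    public long maybePause(int chunkLength) throws InterruptedException {
        long bytes = bytesSinceLastPause.addAndGet(chunkLength);
        if (bytes <= minPauseCheckBytes) {
            return 0;               // too little traffic since the last pause to be worth checking
        }
        bytesSinceLastPause.addAndGet(-bytes);          // reset the running count
        long pauseNanos = (long) (bytes / bytesPerSecond * 1_000_000_000L);
        Thread.sleep(pauseNanos / 1_000_000, (int) (pauseNanos % 1_000_000));
        return pauseNanos;
    }
}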
+ */ + final FileChunkRequest request = new FileChunkRequest( + replicationId, + requestSeqNo, + shardId, + fileMetadata, + position, + content, + lastChunk, + totalTranslogOps, + throttleTimeInNanos + ); + final Writeable.Reader reader = in -> TransportResponse.Empty.INSTANCE; + retryableTransportClient.executeRetryableAction( + action, + request, + fileChunkRequestOptions, + ActionListener.map(listener, r -> null), + reader + ); + } +} diff --git a/server/src/main/java/org/opensearch/indices/replication/SegmentFileTransferHandler.java b/server/src/main/java/org/opensearch/indices/replication/SegmentFileTransferHandler.java new file mode 100644 index 0000000000000..e95c2c6470b4b --- /dev/null +++ b/server/src/main/java/org/opensearch/indices/replication/SegmentFileTransferHandler.java @@ -0,0 +1,239 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.indices.replication; + +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.lucene.index.CorruptIndexException; +import org.apache.lucene.store.IOContext; +import org.apache.lucene.store.IndexInput; +import org.apache.lucene.util.ArrayUtil; +import org.opensearch.ExceptionsHelper; +import org.opensearch.action.ActionListener; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.common.bytes.BytesArray; +import org.opensearch.common.bytes.BytesReference; +import org.opensearch.common.lease.Releasable; +import org.opensearch.common.lucene.store.InputStreamIndexInput; +import org.opensearch.common.util.CancellableThreads; +import org.opensearch.core.internal.io.IOUtils; +import org.opensearch.index.shard.IndexShard; +import org.opensearch.index.store.Store; +import org.opensearch.index.store.StoreFileMetadata; +import org.opensearch.indices.recovery.FileChunkWriter; +import org.opensearch.indices.recovery.MultiChunkTransfer; +import org.opensearch.threadpool.ThreadPool; +import org.opensearch.transport.RemoteTransportException; +import org.opensearch.transport.Transports; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Comparator; +import java.util.Deque; +import java.util.concurrent.ConcurrentLinkedDeque; +import java.util.function.IntSupplier; + +/** + * SegmentFileSender handles building and starting a {@link MultiChunkTransfer} to orchestrate sending chunks to a given targetNode. + * This class delegates to a {@link FileChunkWriter} to handle the transport of chunks. + * + * @opensearch.internal + * // TODO: make this package-private after combining recovery and replication into single package. 
+ */ +public final class SegmentFileTransferHandler { + + private final Logger logger; + private final IndexShard shard; + private final FileChunkWriter chunkWriter; + private final ThreadPool threadPool; + private final int chunkSizeInBytes; + private final int maxConcurrentFileChunks; + private final DiscoveryNode targetNode; + private final CancellableThreads cancellableThreads; + + public SegmentFileTransferHandler( + IndexShard shard, + DiscoveryNode targetNode, + FileChunkWriter chunkWriter, + Logger logger, + ThreadPool threadPool, + CancellableThreads cancellableThreads, + int fileChunkSizeInBytes, + int maxConcurrentFileChunks + ) { + this.shard = shard; + this.targetNode = targetNode; + this.chunkWriter = chunkWriter; + this.logger = logger; + this.threadPool = threadPool; + this.cancellableThreads = cancellableThreads; + this.chunkSizeInBytes = fileChunkSizeInBytes; + // if the target is on an old version, it won't be able to handle out-of-order file chunks. + this.maxConcurrentFileChunks = maxConcurrentFileChunks; + } + + /** + * Returns a closeable {@link MultiChunkTransfer} to initiate sending a list of files. + * Callers are responsible for starting the transfer and closing the resource. + * @param store {@link Store} + * @param files {@link StoreFileMetadata[]} + * @param translogOps {@link IntSupplier} + * @param listener {@link ActionListener} + * @return {@link MultiChunkTransfer} + */ + public MultiChunkTransfer createTransfer( + Store store, + StoreFileMetadata[] files, + IntSupplier translogOps, + ActionListener listener + ) { + ArrayUtil.timSort(files, Comparator.comparingLong(StoreFileMetadata::length)); // send smallest first + return new MultiChunkTransfer<>(logger, threadPool.getThreadContext(), listener, maxConcurrentFileChunks, Arrays.asList(files)) { + + final Deque buffers = new ConcurrentLinkedDeque<>(); + InputStreamIndexInput currentInput = null; + long offset = 0; + + @Override + protected void onNewResource(StoreFileMetadata md) throws IOException { + offset = 0; + IOUtils.close(currentInput, () -> currentInput = null); + final IndexInput indexInput = store.directory().openInput(md.name(), IOContext.READONCE); + currentInput = new InputStreamIndexInput(indexInput, md.length()) { + @Override + public void close() throws IOException { + IOUtils.close(indexInput, super::close); // InputStreamIndexInput's close is a noop + } + }; + } + + private byte[] acquireBuffer() { + final byte[] buffer = buffers.pollFirst(); + if (buffer != null) { + return buffer; + } + return new byte[chunkSizeInBytes]; + } + + @Override + protected FileChunk nextChunkRequest(StoreFileMetadata md) throws IOException { + assert Transports.assertNotTransportThread("read file chunk"); + cancellableThreads.checkForCancel(); + final byte[] buffer = acquireBuffer(); + final int bytesRead = currentInput.read(buffer); + if (bytesRead == -1) { + throw new CorruptIndexException("file truncated; length=" + md.length() + " offset=" + offset, md.name()); + } + final boolean lastChunk = offset + bytesRead == md.length(); + final FileChunk chunk = new FileChunk( + md, + new BytesArray(buffer, 0, bytesRead), + offset, + lastChunk, + () -> buffers.addFirst(buffer) + ); + offset += bytesRead; + return chunk; + } + + @Override + protected void executeChunkRequest(FileChunk request, ActionListener listener1) { + cancellableThreads.checkForCancel(); + chunkWriter.writeFileChunk( + request.md, + request.position, + request.content, + request.lastChunk, + translogOps.getAsInt(), + 
ActionListener.runBefore(listener1, request::close) + ); + } + + @Override + protected void handleError(StoreFileMetadata md, Exception e) throws Exception { + handleErrorOnSendFiles(store, e, new StoreFileMetadata[] { md }); + } + + @Override + public void close() throws IOException { + IOUtils.close(currentInput, () -> currentInput = null); + } + }; + } + + public void handleErrorOnSendFiles(Store store, Exception e, StoreFileMetadata[] mds) throws Exception { + final IOException corruptIndexException = ExceptionsHelper.unwrapCorruption(e); + assert Transports.assertNotTransportThread(this + "[handle error on send/clean files]"); + if (corruptIndexException != null) { + Exception localException = null; + for (StoreFileMetadata md : mds) { + cancellableThreads.checkForCancel(); + logger.debug("checking integrity for file {} after remove corruption exception", md); + if (store.checkIntegrityNoException(md) == false) { // we are corrupted on the primary -- fail! + logger.warn("{} Corrupted file detected {} checksum mismatch", shard.shardId(), md); + if (localException == null) { + localException = corruptIndexException; + } + shard.failShard("error sending files", corruptIndexException); + } + } + if (localException != null) { + throw localException; + } else { // corruption has happened on the way to replica + RemoteTransportException remoteException = new RemoteTransportException( + "File corruption occurred on recovery but checksums are ok", + null + ); + remoteException.addSuppressed(e); + logger.warn( + () -> new ParameterizedMessage( + "{} Remote file corruption on node {}, recovering {}. local checksum OK", + shard.shardId(), + targetNode, + mds + ), + corruptIndexException + ); + throw remoteException; + } + } + throw e; + } + + /** + * A file chunk from the recovery source + * + * @opensearch.internal + */ + public static final class FileChunk implements MultiChunkTransfer.ChunkRequest, Releasable { + final StoreFileMetadata md; + final BytesReference content; + final long position; + final boolean lastChunk; + final Releasable onClose; + + FileChunk(StoreFileMetadata md, BytesReference content, long position, boolean lastChunk, Releasable onClose) { + this.md = md; + this.content = content; + this.position = position; + this.lastChunk = lastChunk; + this.onClose = onClose; + } + + @Override + public boolean lastChunk() { + return lastChunk; + } + + @Override + public void close() { + onClose.close(); + } + } +} diff --git a/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationSourceHandler.java b/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationSourceHandler.java new file mode 100644 index 0000000000000..fdabd48c62929 --- /dev/null +++ b/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationSourceHandler.java @@ -0,0 +1,170 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
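SegmentFileTransferHandler's nextChunkRequest() above reads each file in fixed-size chunks, tracks the running offset, flags the chunk that reaches the file length as the last one, and treats an early end-of-stream as truncation. A standalone sketch of that chunking loop over a plain InputStream (illustrative only, no Lucene or OpenSearch types):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public final class ChunkingSketch {

    public static void sendInChunks(InputStream in, long fileLength, int chunkSize) throws IOException {
        byte[] buffer = new byte[chunkSize];
        long offset = 0;
        int bytesRead;
        while ((bytesRead = in.read(buffer)) != -1) {
            // The chunk that reaches the known file length is marked as the last one.
            boolean lastChunk = offset + bytesRead == fileLength;
            System.out.printf("chunk at offset=%d length=%d lastChunk=%s%n", offset, bytesRead, lastChunk);
            offset += bytesRead;
        }
        if (offset != fileLength) {
            throw new IOException("file truncated; expected " + fileLength + " bytes but read " + offset);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10_000];
        sendInChunks(new ByteArrayInputStream(data), data.length, 4_096);
    }
}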
+ */ + +package org.opensearch.indices.replication; + +import org.apache.logging.log4j.Logger; +import org.opensearch.OpenSearchException; +import org.opensearch.action.ActionListener; +import org.opensearch.action.StepListener; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.routing.IndexShardRoutingTable; +import org.opensearch.cluster.routing.ShardRouting; +import org.opensearch.common.logging.Loggers; +import org.opensearch.common.util.CancellableThreads; +import org.opensearch.common.util.concurrent.ListenableFuture; +import org.opensearch.common.util.concurrent.OpenSearchExecutors; +import org.opensearch.core.internal.io.IOUtils; +import org.opensearch.index.shard.IndexShard; +import org.opensearch.index.store.StoreFileMetadata; +import org.opensearch.indices.RunUnderPrimaryPermit; +import org.opensearch.indices.recovery.DelayRecoveryException; +import org.opensearch.indices.recovery.FileChunkWriter; +import org.opensearch.indices.recovery.MultiChunkTransfer; +import org.opensearch.indices.replication.common.CopyState; +import org.opensearch.threadpool.ThreadPool; +import org.opensearch.transport.Transports; + +import java.io.Closeable; +import java.util.Arrays; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.concurrent.CopyOnWriteArrayList; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.Consumer; + +/** + * Orchestrates sending requested segment files to a target shard. + * + * @opensearch.internal + */ +class SegmentReplicationSourceHandler { + + private final IndexShard shard; + private final CopyState copyState; + private final SegmentFileTransferHandler segmentFileTransferHandler; + private final CancellableThreads cancellableThreads = new CancellableThreads(); + private final ListenableFuture future = new ListenableFuture<>(); + private final List resources = new CopyOnWriteArrayList<>(); + private final Logger logger; + private final AtomicBoolean isReplicating = new AtomicBoolean(); + + /** + * Constructor. + * + * @param targetNode - {@link DiscoveryNode} target node where files should be sent. + * @param writer {@link FileChunkWriter} implementation that sends file chunks over the transport layer. + * @param threadPool {@link ThreadPool} Thread pool. + * @param copyState {@link CopyState} CopyState holding segment file metadata. + * @param fileChunkSizeInBytes {@link Integer} + * @param maxConcurrentFileChunks {@link Integer} + */ + SegmentReplicationSourceHandler( + DiscoveryNode targetNode, + FileChunkWriter writer, + ThreadPool threadPool, + CopyState copyState, + int fileChunkSizeInBytes, + int maxConcurrentFileChunks + ) { + this.shard = copyState.getShard(); + this.logger = Loggers.getLogger( + SegmentReplicationSourceHandler.class, + copyState.getShard().shardId(), + "sending segments to " + targetNode.getName() + ); + this.segmentFileTransferHandler = new SegmentFileTransferHandler( + copyState.getShard(), + targetNode, + writer, + logger, + threadPool, + cancellableThreads, + fileChunkSizeInBytes, + maxConcurrentFileChunks + ); + this.copyState = copyState; + } + + /** + * Sends Segment files from the local node to the given target. + * + * @param request {@link GetSegmentFilesRequest} request object containing list of files to be sent. + * @param listener {@link ActionListener} that completes with the list of files sent. 
+ */ + public synchronized void sendFiles(GetSegmentFilesRequest request, ActionListener listener) { + if (isReplicating.compareAndSet(false, true) == false) { + throw new OpenSearchException("Replication to {} is already running.", shard.shardId()); + } + future.addListener(listener, OpenSearchExecutors.newDirectExecutorService()); + final Closeable releaseResources = () -> IOUtils.close(resources); + try { + + final Consumer onFailure = e -> { + assert Transports.assertNotTransportThread(SegmentReplicationSourceHandler.this + "[onFailure]"); + IOUtils.closeWhileHandlingException(releaseResources, () -> future.onFailure(e)); + }; + + RunUnderPrimaryPermit.run(() -> { + final IndexShardRoutingTable routingTable = shard.getReplicationGroup().getRoutingTable(); + ShardRouting targetShardRouting = routingTable.getByAllocationId(request.getTargetAllocationId()); + if (targetShardRouting == null) { + logger.debug( + "delaying replication of {} as it is not listed as assigned to target node {}", + shard.shardId(), + request.getTargetNode() + ); + throw new DelayRecoveryException("source node does not have the shard listed in its state as allocated on the node"); + } + }, + shard.shardId() + " validating recovery target [" + request.getTargetAllocationId() + "] registered ", + shard, + cancellableThreads, + logger + ); + + final StepListener sendFileStep = new StepListener<>(); + Set storeFiles = new HashSet<>(Arrays.asList(shard.store().directory().listAll())); + final StoreFileMetadata[] storeFileMetadata = request.getFilesToFetch() + .stream() + .filter(file -> storeFiles.contains(file.name())) + .toArray(StoreFileMetadata[]::new); + + final MultiChunkTransfer transfer = segmentFileTransferHandler + .createTransfer(shard.store(), storeFileMetadata, () -> 0, sendFileStep); + resources.add(transfer); + transfer.start(); + + sendFileStep.whenComplete(r -> { + try { + future.onResponse(new GetSegmentFilesResponse(List.of(storeFileMetadata))); + } finally { + IOUtils.close(resources); + } + }, onFailure); + } catch (Exception e) { + IOUtils.closeWhileHandlingException(releaseResources, () -> future.onFailure(e)); + } + } + + /** + * Cancels the recovery and interrupts all eligible threads. 
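sendFiles() above guards against concurrent use with isReplicating.compareAndSet(false, true) and completes a shared future exactly once, releasing resources on both the success and failure paths. A minimal standalone sketch of that single-use guard (illustrative names, plain JDK types):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public final class SingleFlightSketch {

    private final AtomicBoolean started = new AtomicBoolean();
    private final CompletableFuture<String> result = new CompletableFuture<>();

    public CompletableFuture<String> start(Runnable sendFiles) {
        if (started.compareAndSet(false, true) == false) {
            throw new IllegalStateException("copy already running");    // fail fast on a second call
        }
        try {
            sendFiles.run();
            result.complete("done");
        } catch (Exception e) {
            result.completeExceptionally(e);
        }
        return result;
    }

    public static void main(String[] args) {
        SingleFlightSketch handler = new SingleFlightSketch();
        System.out.println(handler.start(() -> System.out.println("sending segment files")).join());
    }
}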
+ */ + public void cancel(String reason) { + cancellableThreads.cancel(reason); + } + + CopyState getCopyState() { + return copyState; + } + + public boolean isReplicating() { + return isReplicating.get(); + } +} diff --git a/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationSourceService.java b/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationSourceService.java index 9f70120dedd6c..d428459884f97 100644 --- a/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationSourceService.java +++ b/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationSourceService.java @@ -10,11 +10,20 @@ import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; -import org.opensearch.index.IndexService; +import org.opensearch.action.support.ChannelActionListener; +import org.opensearch.cluster.ClusterChangedEvent; +import org.opensearch.cluster.ClusterStateListener; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.service.ClusterService; +import org.opensearch.common.Nullable; +import org.opensearch.common.component.AbstractLifecycleComponent; +import org.opensearch.common.settings.Settings; +import org.opensearch.index.shard.IndexEventListener; import org.opensearch.index.shard.IndexShard; import org.opensearch.index.shard.ShardId; import org.opensearch.indices.IndicesService; -import org.opensearch.indices.replication.checkpoint.ReplicationCheckpoint; +import org.opensearch.indices.recovery.RecoverySettings; +import org.opensearch.indices.recovery.RetryableTransportClient; import org.opensearch.indices.replication.common.CopyState; import org.opensearch.tasks.Task; import org.opensearch.threadpool.ThreadPool; @@ -23,9 +32,7 @@ import org.opensearch.transport.TransportService; import java.io.IOException; -import java.util.Collections; -import java.util.HashMap; -import java.util.Map; +import java.util.concurrent.atomic.AtomicLong; /** * Service class that handles segment replication requests from replica shards. 
@@ -33,9 +40,12 @@ * * @opensearch.internal */ -public class SegmentReplicationSourceService { +public final class SegmentReplicationSourceService extends AbstractLifecycleComponent implements ClusterStateListener, IndexEventListener { private static final Logger logger = LogManager.getLogger(SegmentReplicationSourceService.class); + private final RecoverySettings recoverySettings; + private final TransportService transportService; + private final IndicesService indicesService; /** * Internal actions used by the segment replication source service on the primary shard @@ -43,20 +53,21 @@ public class SegmentReplicationSourceService { * @opensearch.internal */ public static class Actions { + public static final String GET_CHECKPOINT_INFO = "internal:index/shard/replication/get_checkpoint_info"; public static final String GET_SEGMENT_FILES = "internal:index/shard/replication/get_segment_files"; } - private final Map copyStateMap; - private final TransportService transportService; - private final IndicesService indicesService; + private final OngoingSegmentReplications ongoingSegmentReplications; - // TODO mark this as injected and bind in Node - public SegmentReplicationSourceService(TransportService transportService, IndicesService indicesService) { - copyStateMap = Collections.synchronizedMap(new HashMap<>()); + public SegmentReplicationSourceService( + IndicesService indicesService, + TransportService transportService, + RecoverySettings recoverySettings + ) { this.transportService = transportService; this.indicesService = indicesService; - + this.recoverySettings = recoverySettings; transportService.registerRequestHandler( Actions.GET_CHECKPOINT_INFO, ThreadPool.Names.GENERIC, @@ -69,14 +80,27 @@ public SegmentReplicationSourceService(TransportService transportService, Indice GetSegmentFilesRequest::new, new GetSegmentFilesRequestHandler() ); + this.ongoingSegmentReplications = new OngoingSegmentReplications(indicesService, recoverySettings); } private class CheckpointInfoRequestHandler implements TransportRequestHandler { @Override public void messageReceived(CheckpointInfoRequest request, TransportChannel channel, Task task) throws Exception { - final ReplicationCheckpoint checkpoint = request.getCheckpoint(); - logger.trace("Received request for checkpoint {}", checkpoint); - final CopyState copyState = getCachedCopyState(checkpoint); + final RemoteSegmentFileChunkWriter segmentSegmentFileChunkWriter = new RemoteSegmentFileChunkWriter( + request.getReplicationId(), + recoverySettings, + new RetryableTransportClient( + transportService, + request.getTargetNode(), + recoverySettings.internalActionRetryTimeout(), + logger + ), + request.getCheckpoint().getShardId(), + SegmentReplicationTargetService.Actions.FILE_CHUNK, + new AtomicLong(0), + (throttleTime) -> {} + ); + final CopyState copyState = ongoingSegmentReplications.prepareForReplication(request, segmentSegmentFileChunkWriter); channel.sendResponse( new CheckpointInfoResponse( copyState.getCheckpoint(), @@ -88,73 +112,47 @@ public void messageReceived(CheckpointInfoRequest request, TransportChannel chan } } - class GetSegmentFilesRequestHandler implements TransportRequestHandler { + private class GetSegmentFilesRequestHandler implements TransportRequestHandler { @Override public void messageReceived(GetSegmentFilesRequest request, TransportChannel channel, Task task) throws Exception { - if (isInCopyStateMap(request.getCheckpoint())) { - // TODO send files - } else { - // Return an empty list of files - channel.sendResponse(new 
GetSegmentFilesResponse(Collections.emptyList())); - } + ongoingSegmentReplications.startSegmentCopy(request, new ChannelActionListener<>(channel, Actions.GET_SEGMENT_FILES, request)); } } - /** - * Operations on the {@link #copyStateMap} member. - */ + @Override + public void clusterChanged(ClusterChangedEvent event) { + if (event.nodesRemoved()) { + for (DiscoveryNode removedNode : event.nodesDelta().removedNodes()) { + ongoingSegmentReplications.cancelReplication(removedNode); + } + } + } - /** - * A synchronized method that checks {@link #copyStateMap} for the given {@link ReplicationCheckpoint} key - * and returns the cached value if one is present. If the key is not present, a {@link CopyState} - * object is constructed and stored in the map before being returned. - */ - private synchronized CopyState getCachedCopyState(ReplicationCheckpoint checkpoint) throws IOException { - if (isInCopyStateMap(checkpoint)) { - final CopyState copyState = fetchFromCopyStateMap(checkpoint); - copyState.incRef(); - return copyState; - } else { - // From the checkpoint's shard ID, fetch the IndexShard - ShardId shardId = checkpoint.getShardId(); - final IndexService indexService = indicesService.indexService(shardId.getIndex()); - final IndexShard indexShard = indexService.getShard(shardId.id()); - // build the CopyState object and cache it before returning - final CopyState copyState = new CopyState(indexShard); - - /** - * Use the checkpoint from the request as the key in the map, rather than - * the checkpoint from the created CopyState. This maximizes cache hits - * if replication targets make a request with an older checkpoint. - * Replication targets are expected to fetch the checkpoint in the response - * CopyState to bring themselves up to date. - */ - addToCopyStateMap(checkpoint, copyState); - return copyState; + @Override + protected void doStart() { + final ClusterService clusterService = indicesService.clusterService(); + if (DiscoveryNode.isDataNode(clusterService.getSettings())) { + clusterService.addListener(this); } } - /** - * Adds the input {@link CopyState} object to {@link #copyStateMap}. - * The key is the CopyState's {@link ReplicationCheckpoint} object. - */ - private void addToCopyStateMap(ReplicationCheckpoint checkpoint, CopyState copyState) { - copyStateMap.putIfAbsent(checkpoint, copyState); + @Override + protected void doStop() { + final ClusterService clusterService = indicesService.clusterService(); + if (DiscoveryNode.isDataNode(clusterService.getSettings())) { + indicesService.clusterService().removeListener(this); + } } - /** - * Given a {@link ReplicationCheckpoint}, return the corresponding - * {@link CopyState} object, if any, from {@link #copyStateMap}. - */ - private CopyState fetchFromCopyStateMap(ReplicationCheckpoint replicationCheckpoint) { - return copyStateMap.get(replicationCheckpoint); + @Override + protected void doClose() throws IOException { + } - /** - * Checks if the {@link #copyStateMap} has the input {@link ReplicationCheckpoint} - * as a key by invoking {@link Map#containsKey(Object)}. 
- */ - private boolean isInCopyStateMap(ReplicationCheckpoint replicationCheckpoint) { - return copyStateMap.containsKey(replicationCheckpoint); + @Override + public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard, Settings indexSettings) { + if (indexShard != null) { + ongoingSegmentReplications.cancel(indexShard, "shard is closed"); + } } } diff --git a/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationState.java b/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationState.java index b01016d2a1e62..838c06a4785ef 100644 --- a/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationState.java +++ b/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationState.java @@ -27,7 +27,9 @@ public class SegmentReplicationState implements ReplicationState { public enum Stage { DONE((byte) 0), - INIT((byte) 1); + INIT((byte) 1), + + REPLICATING((byte) 2); private static final Stage[] STAGES = new Stage[Stage.values().length]; @@ -56,29 +58,58 @@ public static Stage fromId(byte id) { } } - public SegmentReplicationState() { - this.stage = Stage.INIT; - } - private Stage stage; + private final ReplicationLuceneIndex index; + private final ReplicationTimer timer; + + public SegmentReplicationState(ReplicationLuceneIndex index) { + stage = Stage.INIT; + this.index = index; + timer = new ReplicationTimer(); + timer.start(); + } @Override public ReplicationLuceneIndex getIndex() { - // TODO - return null; + return index; } @Override public ReplicationTimer getTimer() { - // TODO - return null; + return timer; } public Stage getStage() { return stage; } + protected void validateAndSetStage(Stage expected, Stage next) { + if (stage != expected) { + assert false : "can't move replication to stage [" + next + "]. current stage: [" + stage + "] (expected [" + expected + "])"; + throw new IllegalStateException( + "can't move replication to stage [" + next + "]. 
current stage: [" + stage + "] (expected [" + expected + "])" + ); + } + stage = next; + } + public void setStage(Stage stage) { - this.stage = stage; + switch (stage) { + case INIT: + this.stage = Stage.INIT; + getIndex().reset(); + break; + case REPLICATING: + validateAndSetStage(Stage.INIT, stage); + getIndex().start(); + break; + case DONE: + validateAndSetStage(Stage.REPLICATING, stage); + getIndex().stop(); + getTimer().stop(); + break; + default: + throw new IllegalArgumentException("unknown SegmentReplicationState.Stage [" + stage + "]"); + } } } diff --git a/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationTarget.java b/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationTarget.java index 7933ea5f0344b..fb68e59f3b2ef 100644 --- a/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationTarget.java +++ b/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationTarget.java @@ -8,18 +8,40 @@ package org.opensearch.indices.replication; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.lucene.index.CorruptIndexException; +import org.apache.lucene.index.IndexFormatTooNewException; +import org.apache.lucene.index.IndexFormatTooOldException; +import org.apache.lucene.index.SegmentInfos; +import org.apache.lucene.store.BufferedChecksumIndexInput; +import org.apache.lucene.store.ByteBuffersDataInput; +import org.apache.lucene.store.ByteBuffersIndexInput; +import org.apache.lucene.store.ChecksumIndexInput; import org.opensearch.OpenSearchException; import org.opensearch.action.ActionListener; +import org.opensearch.action.StepListener; +import org.opensearch.common.UUIDs; import org.opensearch.common.bytes.BytesReference; +import org.opensearch.common.lucene.Lucene; import org.opensearch.common.util.CancellableThreads; import org.opensearch.index.shard.IndexShard; +import org.opensearch.index.store.Store; import org.opensearch.index.store.StoreFileMetadata; +import org.opensearch.indices.recovery.MultiFileWriter; import org.opensearch.indices.replication.checkpoint.ReplicationCheckpoint; +import org.opensearch.indices.replication.common.ReplicationFailedException; +import org.opensearch.indices.replication.common.ReplicationListener; import org.opensearch.indices.replication.common.ReplicationLuceneIndex; -import org.opensearch.indices.replication.common.ReplicationState; import org.opensearch.indices.replication.common.ReplicationTarget; import java.io.IOException; +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.stream.Collectors; /** * Represents the target of a replication event. 
@@ -31,55 +53,52 @@ public class SegmentReplicationTarget extends ReplicationTarget { private final ReplicationCheckpoint checkpoint; private final SegmentReplicationSource source; private final SegmentReplicationState state; + protected final MultiFileWriter multiFileWriter; public SegmentReplicationTarget( ReplicationCheckpoint checkpoint, IndexShard indexShard, SegmentReplicationSource source, - SegmentReplicationTargetService.SegmentReplicationListener listener + ReplicationListener listener ) { super("replication_target", indexShard, new ReplicationLuceneIndex(), listener); this.checkpoint = checkpoint; this.source = source; - this.state = new SegmentReplicationState(); + this.state = new SegmentReplicationState(stateIndex); + this.multiFileWriter = new MultiFileWriter(indexShard.store(), stateIndex, getPrefix(), logger, this::ensureRefCount); } @Override protected void closeInternal() { - // TODO + try { + multiFileWriter.close(); + } finally { + super.closeInternal(); + } } @Override protected String getPrefix() { - // TODO - return null; + return "replication." + UUIDs.randomBase64UUID() + "."; } @Override protected void onDone() { - this.state.setStage(SegmentReplicationState.Stage.DONE); + state.setStage(SegmentReplicationState.Stage.DONE); } @Override - protected void onCancel(String reason) { - // TODO - } - - @Override - public ReplicationState state() { + public SegmentReplicationState state() { return state; } - @Override - public ReplicationTarget retryCopy() { - // TODO - return null; + public SegmentReplicationTarget retryCopy() { + return new SegmentReplicationTarget(checkpoint, indexShard, source, listener); } @Override public String description() { - // TODO - return null; + return "Segment replication from " + source.toString(); } @Override @@ -102,7 +121,12 @@ public void writeFileChunk( int totalTranslogOps, ActionListener<Void> listener ) { - // TODO + try { + multiFileWriter.writeFileChunk(metadata, position, content, lastChunk); + listener.onResponse(null); + } catch (Exception e) { + listener.onFailure(e); + } } /** @@ -110,6 +134,127 @@ public void writeFileChunk( * @param listener {@link ActionListener} listener. */ public void startReplication(ActionListener<Void> listener) { - // TODO + state.setStage(SegmentReplicationState.Stage.REPLICATING); + final StepListener<CheckpointInfoResponse> checkpointInfoListener = new StepListener<>(); + final StepListener<GetSegmentFilesResponse> getFilesListener = new StepListener<>(); + final StepListener<Void> finalizeListener = new StepListener<>(); + + // Get list of files to copy from this checkpoint. + source.getCheckpointMetadata(getId(), checkpoint, checkpointInfoListener); + + checkpointInfoListener.whenComplete(checkpointInfo -> getFiles(checkpointInfo, getFilesListener), listener::onFailure); + getFilesListener.whenComplete( + response -> finalizeReplication(checkpointInfoListener.result(), finalizeListener), + listener::onFailure + ); + finalizeListener.whenComplete(r -> listener.onResponse(null), listener::onFailure); + } + + private void getFiles(CheckpointInfoResponse checkpointInfo, StepListener<GetSegmentFilesResponse> getFilesListener) + throws IOException { + final Store.MetadataSnapshot snapshot = checkpointInfo.getSnapshot(); + Store.MetadataSnapshot localMetadata = getMetadataSnapshot(); + final Store.RecoveryDiff diff = snapshot.recoveryDiff(localMetadata); + logger.debug("Replication diff {}", diff); + // Segments are immutable. 
So if the replica has any segments with the same name that differ from the one in the incoming snapshot + // from + // source that means the local copy of the segment has been corrupted/changed in some way and we throw an IllegalStateException to + // fail the shard + if (diff.different.isEmpty() == false) { + getFilesListener.onFailure( + new IllegalStateException( + new ParameterizedMessage("Shard {} has local copies of segments that differ from the primary", indexShard.shardId()) + .getFormattedMessage() + ) + ); + } + final List<StoreFileMetadata> filesToFetch = new ArrayList<>(diff.missing); + + Set<String> storeFiles = new HashSet<>(Arrays.asList(store.directory().listAll())); + final Set<StoreFileMetadata> pendingDeleteFiles = checkpointInfo.getPendingDeleteFiles() + .stream() + .filter(f -> storeFiles.contains(f.name()) == false) + .collect(Collectors.toSet()); + + filesToFetch.addAll(pendingDeleteFiles); + + for (StoreFileMetadata file : filesToFetch) { + state.getIndex().addFileDetail(file.name(), file.length(), false); + } + if (filesToFetch.isEmpty()) { + getFilesListener.onResponse(new GetSegmentFilesResponse(filesToFetch)); + } else { + source.getSegmentFiles(getId(), checkpointInfo.getCheckpoint(), filesToFetch, store, getFilesListener); + } + } + + private void finalizeReplication(CheckpointInfoResponse checkpointInfoResponse, ActionListener<Void> listener) { + ActionListener.completeWith(listener, () -> { + multiFileWriter.renameAllTempFiles(); + final Store store = store(); + store.incRef(); + try { + // Deserialize the new SegmentInfos object sent from the primary. + final ReplicationCheckpoint responseCheckpoint = checkpointInfoResponse.getCheckpoint(); + SegmentInfos infos = SegmentInfos.readCommit( + store.directory(), + toIndexInput(checkpointInfoResponse.getInfosBytes()), + responseCheckpoint.getSegmentsGen() + ); + indexShard.finalizeReplication(infos, responseCheckpoint.getSeqNo()); + store.cleanupAndPreserveLatestCommitPoint("finalize - clean with in memory infos", store.getMetadata(infos)); + } catch (CorruptIndexException | IndexFormatTooNewException | IndexFormatTooOldException ex) { + // this is a fatal exception at this stage. + // this means we transferred files from the remote that have not been checksummed and they are + // broken. We have to clean up this shard entirely, remove all files and bubble it up to the + // source shard since this index might be broken there as well? The Source can handle this and checks + // its content on disk if possible. 
+ try { + try { + store.removeCorruptionMarker(); + } finally { + Lucene.cleanLuceneIndex(store.directory()); // clean up and delete all files + } + } catch (Exception e) { + logger.debug("Failed to clean lucene index", e); + ex.addSuppressed(e); + } + ReplicationFailedException rfe = new ReplicationFailedException( + indexShard.shardId(), + "failed to clean after replication", + ex + ); + fail(rfe, true); + throw rfe; + } catch (Exception ex) { + ReplicationFailedException rfe = new ReplicationFailedException( + indexShard.shardId(), + "failed to clean after replication", + ex + ); + fail(rfe, true); + throw rfe; + } finally { + store.decRef(); + } + return null; + }); + } + + /** + * This method formats our byte[] containing the primary's SegmentInfos into lucene's {@link ChecksumIndexInput} that can be + * passed to SegmentInfos.readCommit + */ + private ChecksumIndexInput toIndexInput(byte[] input) { + return new BufferedChecksumIndexInput( + new ByteBuffersIndexInput(new ByteBuffersDataInput(Arrays.asList(ByteBuffer.wrap(input))), "SegmentInfos") + ); + } + + Store.MetadataSnapshot getMetadataSnapshot() throws IOException { + if (indexShard.getSegmentInfosSnapshot() == null) { + return Store.MetadataSnapshot.EMPTY; + } + return store.getMetadata(indexShard.getSegmentInfosSnapshot().get()); } } diff --git a/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationTargetService.java b/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationTargetService.java index 1c6053a72a4c5..c44b27911bb7a 100644 --- a/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationTargetService.java +++ b/server/src/main/java/org/opensearch/indices/replication/SegmentReplicationTargetService.java @@ -38,7 +38,7 @@ * * @opensearch.internal */ -public final class SegmentReplicationTargetService implements IndexEventListener { +public class SegmentReplicationTargetService implements IndexEventListener { private static final Logger logger = LogManager.getLogger(SegmentReplicationTargetService.class); @@ -84,6 +84,39 @@ public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexSh } } + /** + * Invoked when a new checkpoint is received from a primary shard. + * It checks if a new checkpoint should be processed or not and starts replication if needed. 
+ * @param receivedCheckpoint received checkpoint that is checked for processing + * @param replicaShard replica shard on which checkpoint is received + */ + public synchronized void onNewCheckpoint(final ReplicationCheckpoint receivedCheckpoint, final IndexShard replicaShard) { + if (onGoingReplications.isShardReplicating(replicaShard.shardId())) { + logger.trace( + () -> new ParameterizedMessage( + "Ignoring new replication checkpoint - shard is currently replicating to checkpoint {}", + replicaShard.getLatestReplicationCheckpoint() + ) + ); + return; + } + if (replicaShard.shouldProcessCheckpoint(receivedCheckpoint)) { + startReplication(receivedCheckpoint, replicaShard, new SegmentReplicationListener() { + @Override + public void onReplicationDone(SegmentReplicationState state) {} + + @Override + public void onReplicationFailure(SegmentReplicationState state, OpenSearchException e, boolean sendShardFailure) { + if (sendShardFailure == true) { + logger.error("replication failure", e); + replicaShard.failShard("replication failure", e); + } + } + }); + + } + } + public void startReplication( final ReplicationCheckpoint checkpoint, final IndexShard indexShard, diff --git a/server/src/main/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointAction.java b/server/src/main/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointAction.java index b74a69971ebd5..8093b6aee88f9 100644 --- a/server/src/main/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointAction.java +++ b/server/src/main/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointAction.java @@ -28,6 +28,7 @@ import org.opensearch.index.shard.IndexShard; import org.opensearch.index.shard.IndexShardClosedException; import org.opensearch.indices.IndicesService; +import org.opensearch.indices.replication.SegmentReplicationTargetService; import org.opensearch.node.NodeClosedException; import org.opensearch.tasks.Task; import org.opensearch.threadpool.ThreadPool; @@ -52,6 +53,8 @@ public class PublishCheckpointAction extends TransportReplicationAction< public static final String ACTION_NAME = "indices:admin/publishCheckpoint"; protected static Logger logger = LogManager.getLogger(PublishCheckpointAction.class); + private final SegmentReplicationTargetService replicationService; + @Inject public PublishCheckpointAction( Settings settings, @@ -60,7 +63,8 @@ public PublishCheckpointAction( IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction, - ActionFilters actionFilters + ActionFilters actionFilters, + SegmentReplicationTargetService targetService ) { super( settings, @@ -75,6 +79,7 @@ public PublishCheckpointAction( PublishCheckpointRequest::new, ThreadPool.Names.REFRESH ); + this.replicationService = targetService; } @Override @@ -165,7 +170,7 @@ protected void shardOperationOnReplica(PublishCheckpointRequest request, IndexSh ActionListener.completeWith(listener, () -> { logger.trace("Checkpoint received on replica {}", request); if (request.getCheckpoint().getShardId().equals(replica.shardId())) { - replica.onNewCheckpoint(request); + replicationService.onNewCheckpoint(request.getCheckpoint(), replica); } return new ReplicaResult(); }); diff --git a/server/src/main/java/org/opensearch/indices/replication/checkpoint/ReplicationCheckpoint.java b/server/src/main/java/org/opensearch/indices/replication/checkpoint/ReplicationCheckpoint.java index 98ab9cc4c1708..f84a65206190b 100644 --- 
a/server/src/main/java/org/opensearch/indices/replication/checkpoint/ReplicationCheckpoint.java +++ b/server/src/main/java/org/opensearch/indices/replication/checkpoint/ReplicationCheckpoint.java @@ -115,7 +115,7 @@ public int hashCode() { * Checks if other is ahead of current replication point by comparing segmentInfosVersion. Returns true for null */ public boolean isAheadOf(@Nullable ReplicationCheckpoint other) { - return other == null || segmentInfosVersion > other.getSegmentInfosVersion(); + return other == null || segmentInfosVersion > other.getSegmentInfosVersion() || primaryTerm > other.getPrimaryTerm(); } @Override diff --git a/server/src/main/java/org/opensearch/indices/replication/checkpoint/SegmentReplicationCheckpointPublisher.java b/server/src/main/java/org/opensearch/indices/replication/checkpoint/SegmentReplicationCheckpointPublisher.java index 2b09901a947fe..6be524cea140e 100644 --- a/server/src/main/java/org/opensearch/indices/replication/checkpoint/SegmentReplicationCheckpointPublisher.java +++ b/server/src/main/java/org/opensearch/indices/replication/checkpoint/SegmentReplicationCheckpointPublisher.java @@ -22,6 +22,7 @@ public class SegmentReplicationCheckpointPublisher { private final PublishAction publishAction; + // This Component is behind feature flag so we are manually binding this in IndicesModule. @Inject public SegmentReplicationCheckpointPublisher(PublishCheckpointAction publishAction) { this(publishAction::publish); diff --git a/server/src/main/java/org/opensearch/indices/replication/common/CopyState.java b/server/src/main/java/org/opensearch/indices/replication/common/CopyState.java index 250df3481435a..c0e0b4dee2b3f 100644 --- a/server/src/main/java/org/opensearch/indices/replication/common/CopyState.java +++ b/server/src/main/java/org/opensearch/indices/replication/common/CopyState.java @@ -33,14 +33,20 @@ public class CopyState extends AbstractRefCounted { private final GatedCloseable<SegmentInfos> segmentInfosRef; + /** ReplicationCheckpoint requested */ + private final ReplicationCheckpoint requestedReplicationCheckpoint; + /** Actual ReplicationCheckpoint returned by the shard */ private final ReplicationCheckpoint replicationCheckpoint; private final Store.MetadataSnapshot metadataSnapshot; private final HashSet<StoreFileMetadata> pendingDeleteFiles; private final byte[] infosBytes; private GatedCloseable<IndexCommit> commitRef; + private final IndexShard shard; - public CopyState(IndexShard shard) throws IOException { + public CopyState(ReplicationCheckpoint requestedReplicationCheckpoint, IndexShard shard) throws IOException { + super("CopyState-" + shard.shardId()); + this.requestedReplicationCheckpoint = requestedReplicationCheckpoint; + this.shard = shard; this.segmentInfosRef = shard.getSegmentInfosSnapshot(); SegmentInfos segmentInfos = this.segmentInfosRef.get(); this.metadataSnapshot = shard.store().getMetadata(segmentInfos); @@ -100,4 +106,12 @@ public byte[] getInfosBytes() { public Set<StoreFileMetadata> getPendingDeleteFiles() { return pendingDeleteFiles; } + + public IndexShard getShard() { + return shard; + } + + public ReplicationCheckpoint getRequestedReplicationCheckpoint() { + return requestedReplicationCheckpoint; + } } diff --git a/server/src/main/java/org/opensearch/indices/replication/common/ReplicationCollection.java b/server/src/main/java/org/opensearch/indices/replication/common/ReplicationCollection.java index b8295f0685a7f..d648ca6041ff8 100644 --- a/server/src/main/java/org/opensearch/indices/replication/common/ReplicationCollection.java +++ 
b/server/src/main/java/org/opensearch/indices/replication/common/ReplicationCollection.java @@ -235,6 +235,16 @@ public boolean cancelForShard(ShardId shardId, String reason) { return cancelled; } + /** + * check if a shard is currently replicating + * + * @param shardId shardId for which to check if replicating + * @return true if shard is currently replicating + */ + public boolean isShardReplicating(ShardId shardId) { + return onGoingTargetEvents.values().stream().anyMatch(t -> t.indexShard.shardId().equals(shardId)); + } + /** * a reference to {@link ReplicationTarget}, which implements {@link AutoCloseable}. closing the reference * causes {@link ReplicationTarget#decRef()} to be called. This makes sure that the underlying resources diff --git a/server/src/main/java/org/opensearch/indices/replication/common/ReplicationFailedException.java b/server/src/main/java/org/opensearch/indices/replication/common/ReplicationFailedException.java new file mode 100644 index 0000000000000..afdd0ce466f9b --- /dev/null +++ b/server/src/main/java/org/opensearch/indices/replication/common/ReplicationFailedException.java @@ -0,0 +1,41 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.indices.replication.common; + +import org.opensearch.OpenSearchException; +import org.opensearch.common.Nullable; +import org.opensearch.common.io.stream.StreamInput; +import org.opensearch.index.shard.IndexShard; +import org.opensearch.index.shard.ShardId; + +import java.io.IOException; + +/** + * Exception thrown if replication fails + * + * @opensearch.internal + */ +public class ReplicationFailedException extends OpenSearchException { + + public ReplicationFailedException(IndexShard shard, Throwable cause) { + this(shard, null, cause); + } + + public ReplicationFailedException(IndexShard shard, @Nullable String extraInfo, Throwable cause) { + this(shard.shardId(), extraInfo, cause); + } + + public ReplicationFailedException(ShardId shardId, @Nullable String extraInfo, Throwable cause) { + super(shardId + ": Replication failed on " + (extraInfo == null ? 
"" : " (" + extraInfo + ")"), cause); + } + + public ReplicationFailedException(StreamInput in) throws IOException { + super(in); + } +} diff --git a/server/src/main/java/org/opensearch/indices/replication/common/ReplicationTarget.java b/server/src/main/java/org/opensearch/indices/replication/common/ReplicationTarget.java index f8dc07f122c02..27e23ceafb15e 100644 --- a/server/src/main/java/org/opensearch/indices/replication/common/ReplicationTarget.java +++ b/server/src/main/java/org/opensearch/indices/replication/common/ReplicationTarget.java @@ -23,6 +23,7 @@ import org.opensearch.index.seqno.SequenceNumbers; import org.opensearch.index.shard.IndexShard; import org.opensearch.index.shard.ShardId; +import org.opensearch.index.store.Store; import org.opensearch.index.store.StoreFileMetadata; import org.opensearch.indices.recovery.FileChunkRequest; import org.opensearch.indices.recovery.RecoveryTransportRequest; @@ -50,6 +51,7 @@ public abstract class ReplicationTarget extends AbstractRefCounted { protected final AtomicBoolean finished = new AtomicBoolean(); private final ShardId shardId; protected final IndexShard indexShard; + protected final Store store; protected final ReplicationListener listener; protected final Logger logger; protected final CancellableThreads cancellableThreads; @@ -59,7 +61,9 @@ public abstract class ReplicationTarget extends AbstractRefCounted { protected abstract void onDone(); - protected abstract void onCancel(String reason); + protected void onCancel(String reason) { + cancellableThreads.cancel(reason); + } public abstract ReplicationState state(); @@ -84,9 +88,11 @@ public ReplicationTarget(String name, IndexShard indexShard, ReplicationLuceneIn this.id = ID_GENERATOR.incrementAndGet(); this.stateIndex = stateIndex; this.indexShard = indexShard; + this.store = indexShard.store(); this.shardId = indexShard.shardId(); // make sure the store is not released until we are done. 
this.cancellableThreads = new CancellableThreads(); + store.incRef(); } public long getId() { @@ -119,6 +125,11 @@ public IndexShard indexShard() { return indexShard; } + public Store store() { + ensureRefCount(); + return store; + } + public ShardId shardId() { return shardId; } @@ -266,4 +277,8 @@ public abstract void writeFileChunk( int totalTranslogOps, ActionListener listener ); + + protected void closeInternal() { + store.decRef(); + } } diff --git a/server/src/main/java/org/opensearch/indices/replication/common/SegmentReplicationTransportRequest.java b/server/src/main/java/org/opensearch/indices/replication/common/SegmentReplicationTransportRequest.java index db8206d131c13..09b14fb1b5333 100644 --- a/server/src/main/java/org/opensearch/indices/replication/common/SegmentReplicationTransportRequest.java +++ b/server/src/main/java/org/opensearch/indices/replication/common/SegmentReplicationTransportRequest.java @@ -46,4 +46,29 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(this.targetAllocationId); targetNode.writeTo(out); } + + public long getReplicationId() { + return replicationId; + } + + public String getTargetAllocationId() { + return targetAllocationId; + } + + public DiscoveryNode getTargetNode() { + return targetNode; + } + + @Override + public String toString() { + return "SegmentReplicationTransportRequest{" + + "replicationId=" + + replicationId + + ", targetAllocationId='" + + targetAllocationId + + '\'' + + ", targetNode=" + + targetNode + + '}'; + } } diff --git a/server/src/main/java/org/opensearch/ingest/IngestService.java b/server/src/main/java/org/opensearch/ingest/IngestService.java index ac740c304d1f9..b8256fe896da4 100644 --- a/server/src/main/java/org/opensearch/ingest/IngestService.java +++ b/server/src/main/java/org/opensearch/ingest/IngestService.java @@ -44,7 +44,7 @@ import org.opensearch.action.index.IndexRequest; import org.opensearch.action.ingest.DeletePipelineRequest; import org.opensearch.action.ingest.PutPipelineRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.cluster.AckedClusterStateUpdateTask; import org.opensearch.cluster.ClusterChangedEvent; diff --git a/server/src/main/java/org/opensearch/node/Node.java b/server/src/main/java/org/opensearch/node/Node.java index 7e205b88e9eb1..4b4fdc974f8cb 100644 --- a/server/src/main/java/org/opensearch/node/Node.java +++ b/server/src/main/java/org/opensearch/node/Node.java @@ -36,7 +36,11 @@ import org.apache.logging.log4j.Logger; import org.apache.lucene.util.Constants; import org.apache.lucene.util.SetOnce; +import org.opensearch.common.util.FeatureFlags; import org.opensearch.index.IndexingPressureService; +import org.opensearch.indices.replication.SegmentReplicationSourceFactory; +import org.opensearch.indices.replication.SegmentReplicationTargetService; +import org.opensearch.indices.replication.SegmentReplicationSourceService; import org.opensearch.watcher.ResourceWatcherService; import org.opensearch.Assertions; import org.opensearch.Build; @@ -219,6 +223,7 @@ import java.util.stream.Stream; import static java.util.stream.Collectors.toList; +import static org.opensearch.common.util.FeatureFlags.REPLICATION_TYPE; import static org.opensearch.index.ShardIndexingPressureSettings.SHARD_INDEXING_PRESSURE_ENABLED_ATTRIBUTE_KEY; /** @@ -932,6 +937,19 @@ protected Node( .toInstance(new 
PeerRecoverySourceService(transportService, indicesService, recoverySettings)); b.bind(PeerRecoveryTargetService.class) .toInstance(new PeerRecoveryTargetService(threadPool, transportService, recoverySettings, clusterService)); + if (FeatureFlags.isEnabled(REPLICATION_TYPE)) { + b.bind(SegmentReplicationTargetService.class) + .toInstance( + new SegmentReplicationTargetService( + threadPool, + recoverySettings, + transportService, + new SegmentReplicationSourceFactory(transportService, recoverySettings, clusterService) + ) + ); + b.bind(SegmentReplicationSourceService.class) + .toInstance(new SegmentReplicationSourceService(indicesService, transportService, recoverySettings)); + } } b.bind(HttpServerTransport.class).toInstance(httpServerTransport); pluginComponents.stream().forEach(p -> b.bind((Class) p.getClass()).toInstance(p)); diff --git a/server/src/main/java/org/opensearch/persistent/CompletionPersistentTaskAction.java b/server/src/main/java/org/opensearch/persistent/CompletionPersistentTaskAction.java index 1dda269a68491..d036457ccae89 100644 --- a/server/src/main/java/org/opensearch/persistent/CompletionPersistentTaskAction.java +++ b/server/src/main/java/org/opensearch/persistent/CompletionPersistentTaskAction.java @@ -193,7 +193,7 @@ protected ClusterBlockException checkBlock(Request request, ClusterState state) } @Override - protected final void masterOperation( + protected final void clusterManagerOperation( final Request request, ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/persistent/RemovePersistentTaskAction.java b/server/src/main/java/org/opensearch/persistent/RemovePersistentTaskAction.java index 56436d0e1aa0c..d07dcc23056d8 100644 --- a/server/src/main/java/org/opensearch/persistent/RemovePersistentTaskAction.java +++ b/server/src/main/java/org/opensearch/persistent/RemovePersistentTaskAction.java @@ -181,7 +181,7 @@ protected ClusterBlockException checkBlock(Request request, ClusterState state) } @Override - protected final void masterOperation( + protected final void clusterManagerOperation( final Request request, ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/persistent/StartPersistentTaskAction.java b/server/src/main/java/org/opensearch/persistent/StartPersistentTaskAction.java index 6f2f3a565427b..4864cf3c23b50 100644 --- a/server/src/main/java/org/opensearch/persistent/StartPersistentTaskAction.java +++ b/server/src/main/java/org/opensearch/persistent/StartPersistentTaskAction.java @@ -254,7 +254,7 @@ protected ClusterBlockException checkBlock(Request request, ClusterState state) } @Override - protected final void masterOperation( + protected final void clusterManagerOperation( final Request request, ClusterState state, final ActionListener listener diff --git a/server/src/main/java/org/opensearch/persistent/UpdatePersistentTaskStatusAction.java b/server/src/main/java/org/opensearch/persistent/UpdatePersistentTaskStatusAction.java index aee79ac1ba3ea..acbb62373ab60 100644 --- a/server/src/main/java/org/opensearch/persistent/UpdatePersistentTaskStatusAction.java +++ b/server/src/main/java/org/opensearch/persistent/UpdatePersistentTaskStatusAction.java @@ -213,7 +213,7 @@ protected ClusterBlockException checkBlock(Request request, ClusterState state) } @Override - protected final void masterOperation( + protected final void clusterManagerOperation( final Request request, final ClusterState state, final ActionListener listener diff --git 
a/server/src/main/java/org/opensearch/rest/BaseRestHandler.java b/server/src/main/java/org/opensearch/rest/BaseRestHandler.java index d42350b34b1c0..1f1ee91db4722 100644 --- a/server/src/main/java/org/opensearch/rest/BaseRestHandler.java +++ b/server/src/main/java/org/opensearch/rest/BaseRestHandler.java @@ -229,7 +229,7 @@ public static void parseDeprecatedMasterTimeoutParameter( if (request.hasParam("cluster_manager_timeout")) { throw new OpenSearchParseException(DUPLICATE_PARAMETER_ERROR_MESSAGE); } - mnr.masterNodeTimeout(request.paramAsTime("master_timeout", mnr.masterNodeTimeout())); + mnr.clusterManagerNodeTimeout(request.paramAsTime("master_timeout", mnr.clusterManagerNodeTimeout())); } } diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCleanupRepositoryAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCleanupRepositoryAction.java index a5be2fcf0f2b7..83887d7b2c1b6 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCleanupRepositoryAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCleanupRepositoryAction.java @@ -69,8 +69,8 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { CleanupRepositoryRequest cleanupRepositoryRequest = cleanupRepositoryRequest(request.param("repository")); cleanupRepositoryRequest.timeout(request.paramAsTime("timeout", cleanupRepositoryRequest.timeout())); - cleanupRepositoryRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", cleanupRepositoryRequest.masterNodeTimeout()) + cleanupRepositoryRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", cleanupRepositoryRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(cleanupRepositoryRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().cleanupRepository(cleanupRepositoryRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCloneSnapshotAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCloneSnapshotAction.java index 905b8bf780575..e0384602066cf 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCloneSnapshotAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCloneSnapshotAction.java @@ -76,7 +76,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC request.param("target_snapshot"), XContentMapValues.nodeStringArrayValue(source.getOrDefault("indices", Collections.emptyList())) ); - cloneSnapshotRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", cloneSnapshotRequest.masterNodeTimeout())); + cloneSnapshotRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", cloneSnapshotRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(cloneSnapshotRequest, request, deprecationLogger, getName()); cloneSnapshotRequest.indicesOptions(IndicesOptions.fromMap(source, cloneSnapshotRequest.indicesOptions())); return channel -> client.admin().cluster().cloneSnapshot(cloneSnapshotRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterGetSettingsAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterGetSettingsAction.java index 
9abb3cd42cea1..6933895e98f35 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterGetSettingsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterGetSettingsAction.java @@ -92,7 +92,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC ClusterStateRequest clusterStateRequest = Requests.clusterStateRequest().routingTable(false).nodes(false); final boolean renderDefaults = request.paramAsBoolean("include_defaults", false); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().state(clusterStateRequest, new RestBuilderListener(channel) { @Override diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterHealthAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterHealthAction.java index efaad5a10e348..cf85e5aa4e902 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterHealthAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterHealthAction.java @@ -89,7 +89,9 @@ public static ClusterHealthRequest fromRequest(final RestRequest request) { final ClusterHealthRequest clusterHealthRequest = clusterHealthRequest(Strings.splitStringByCommaToArray(request.param("index"))); clusterHealthRequest.indicesOptions(IndicesOptions.fromRequest(request, clusterHealthRequest.indicesOptions())); clusterHealthRequest.local(request.paramAsBoolean("local", clusterHealthRequest.local())); - clusterHealthRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterHealthRequest.masterNodeTimeout())); + clusterHealthRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterHealthRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterHealthRequest, request, deprecationLogger, "cluster_health"); clusterHealthRequest.timeout(request.paramAsTime("timeout", clusterHealthRequest.timeout())); String waitForStatus = request.param("wait_for_status"); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterRerouteAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterRerouteAction.java index 6a7731019867e..db2d3856e77e5 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterRerouteAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterRerouteAction.java @@ -148,7 +148,9 @@ public static ClusterRerouteRequest createRequest(RestRequest request) throws IO clusterRerouteRequest.explain(request.paramAsBoolean("explain", clusterRerouteRequest.explain())); clusterRerouteRequest.timeout(request.paramAsTime("timeout", clusterRerouteRequest.timeout())); clusterRerouteRequest.setRetryFailed(request.paramAsBoolean("retry_failed", clusterRerouteRequest.isRetryFailed())); - clusterRerouteRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterRerouteRequest.masterNodeTimeout())); + clusterRerouteRequest.clusterManagerNodeTimeout( 
+ request.paramAsTime("cluster_manager_timeout", clusterRerouteRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterRerouteRequest, request, deprecationLogger, "cluster_reroute"); request.applyContentParser(parser -> PARSER.parse(parser, clusterRerouteRequest, null)); return clusterRerouteRequest; diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStateAction.java index 57208d2d048f6..4e8f4e32cfd26 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStateAction.java @@ -109,7 +109,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC final ClusterStateRequest clusterStateRequest = Requests.clusterStateRequest(); clusterStateRequest.indicesOptions(IndicesOptions.fromRequest(request, clusterStateRequest.indicesOptions())); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, deprecationLogger, getName()); if (request.hasParam("wait_for_metadata_version")) { clusterStateRequest.waitForMetadataVersion(request.paramAsLong("wait_for_metadata_version", 0)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterUpdateSettingsAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterUpdateSettingsAction.java index 2918238a1963f..80ca662a0abf2 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterUpdateSettingsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterUpdateSettingsAction.java @@ -76,8 +76,8 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final ClusterUpdateSettingsRequest clusterUpdateSettingsRequest = Requests.clusterUpdateSettingsRequest(); clusterUpdateSettingsRequest.timeout(request.paramAsTime("timeout", clusterUpdateSettingsRequest.timeout())); - clusterUpdateSettingsRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", clusterUpdateSettingsRequest.masterNodeTimeout()) + clusterUpdateSettingsRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterUpdateSettingsRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(clusterUpdateSettingsRequest, request, deprecationLogger, getName()); Map source; diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCreateSnapshotAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCreateSnapshotAction.java index 11147f219f362..4a5fbd2404fd9 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCreateSnapshotAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestCreateSnapshotAction.java @@ -73,7 +73,9 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { 
CreateSnapshotRequest createSnapshotRequest = createSnapshotRequest(request.param("repository"), request.param("snapshot")); request.applyContentParser(p -> createSnapshotRequest.source(p.mapOrdered())); - createSnapshotRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", createSnapshotRequest.masterNodeTimeout())); + createSnapshotRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", createSnapshotRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(createSnapshotRequest, request, deprecationLogger, getName()); createSnapshotRequest.waitForCompletion(request.paramAsBoolean("wait_for_completion", false)); return channel -> client.admin().cluster().createSnapshot(createSnapshotRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteRepositoryAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteRepositoryAction.java index 7e04a5e62d7e4..3497761cdadea 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteRepositoryAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteRepositoryAction.java @@ -69,8 +69,8 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { DeleteRepositoryRequest deleteRepositoryRequest = deleteRepositoryRequest(request.param("repository")); deleteRepositoryRequest.timeout(request.paramAsTime("timeout", deleteRepositoryRequest.timeout())); - deleteRepositoryRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", deleteRepositoryRequest.masterNodeTimeout()) + deleteRepositoryRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", deleteRepositoryRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(deleteRepositoryRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().deleteRepository(deleteRepositoryRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteSnapshotAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteSnapshotAction.java index 03aa9cf0d4967..4b58e5fa86e46 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteSnapshotAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteSnapshotAction.java @@ -72,7 +72,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC request.param("repository"), Strings.splitStringByCommaToArray(request.param("snapshot")) ); - deleteSnapshotRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", deleteSnapshotRequest.masterNodeTimeout())); + deleteSnapshotRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", deleteSnapshotRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(deleteSnapshotRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().deleteSnapshot(deleteSnapshotRequest, new RestToXContentListener<>(channel)); } diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteStoredScriptAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteStoredScriptAction.java index 03fae6d3b55fc..061ad214ec7e4 100644 --- 
a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteStoredScriptAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestDeleteStoredScriptAction.java @@ -68,8 +68,8 @@ public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client String id = request.param("id"); DeleteStoredScriptRequest deleteStoredScriptRequest = new DeleteStoredScriptRequest(id); deleteStoredScriptRequest.timeout(request.paramAsTime("timeout", deleteStoredScriptRequest.timeout())); - deleteStoredScriptRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", deleteStoredScriptRequest.masterNodeTimeout()) + deleteStoredScriptRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", deleteStoredScriptRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(deleteStoredScriptRequest, request, deprecationLogger, getName()); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetRepositoriesAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetRepositoriesAction.java index 0eab9c1aaa6b2..c23b01fc89c15 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetRepositoriesAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetRepositoriesAction.java @@ -80,8 +80,8 @@ public List routes() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final String[] repositories = request.paramAsStringArray("repository", Strings.EMPTY_ARRAY); GetRepositoriesRequest getRepositoriesRequest = getRepositoryRequest(repositories); - getRepositoriesRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", getRepositoriesRequest.masterNodeTimeout()) + getRepositoriesRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", getRepositoriesRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(getRepositoriesRequest, request, deprecationLogger, getName()); getRepositoriesRequest.local(request.paramAsBoolean("local", getRepositoriesRequest.local())); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetSnapshotsAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetSnapshotsAction.java index dfaecddcf1d13..1c93ee5473939 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetSnapshotsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetSnapshotsAction.java @@ -74,7 +74,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC GetSnapshotsRequest getSnapshotsRequest = getSnapshotsRequest(repository).snapshots(snapshots); getSnapshotsRequest.ignoreUnavailable(request.paramAsBoolean("ignore_unavailable", getSnapshotsRequest.ignoreUnavailable())); getSnapshotsRequest.verbose(request.paramAsBoolean("verbose", getSnapshotsRequest.verbose())); - getSnapshotsRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", getSnapshotsRequest.masterNodeTimeout())); + getSnapshotsRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", getSnapshotsRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(getSnapshotsRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().getSnapshots(getSnapshotsRequest, new RestToXContentListener<>(channel)); } diff --git 
a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetStoredScriptAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetStoredScriptAction.java index 60302255bb516..8c42513a91fa2 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetStoredScriptAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestGetStoredScriptAction.java @@ -67,7 +67,7 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, NodeClient client) throws IOException { String id = request.param("id"); GetStoredScriptRequest getRequest = new GetStoredScriptRequest(id); - getRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", getRequest.masterNodeTimeout())); + getRequest.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", getRequest.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(getRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().getStoredScript(getRequest, new RestStatusToXContentListener<>(channel)); } diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPendingClusterTasksAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPendingClusterTasksAction.java index 1158058c187c9..6d106d625a2dd 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPendingClusterTasksAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPendingClusterTasksAction.java @@ -67,8 +67,8 @@ public String getName() { @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { PendingClusterTasksRequest pendingClusterTasksRequest = new PendingClusterTasksRequest(); - pendingClusterTasksRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", pendingClusterTasksRequest.masterNodeTimeout()) + pendingClusterTasksRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", pendingClusterTasksRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(pendingClusterTasksRequest, request, deprecationLogger, getName()); pendingClusterTasksRequest.local(request.paramAsBoolean("local", pendingClusterTasksRequest.local())); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPutRepositoryAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPutRepositoryAction.java index e64850c61f941..6f37760bdd1b6 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPutRepositoryAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPutRepositoryAction.java @@ -75,7 +75,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC putRepositoryRequest.source(parser.mapOrdered()); } putRepositoryRequest.verify(request.paramAsBoolean("verify", true)); - putRepositoryRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", putRepositoryRequest.masterNodeTimeout())); + putRepositoryRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", putRepositoryRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(putRepositoryRequest, request, deprecationLogger, getName()); putRepositoryRequest.timeout(request.paramAsTime("timeout", putRepositoryRequest.timeout())); return channel -> 
client.admin().cluster().putRepository(putRepositoryRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPutStoredScriptAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPutStoredScriptAction.java index f88cc6ede50b7..f17ac0f48e750 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPutStoredScriptAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestPutStoredScriptAction.java @@ -84,7 +84,7 @@ public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client StoredScriptSource source = StoredScriptSource.parse(content, xContentType); PutStoredScriptRequest putRequest = new PutStoredScriptRequest(id, context, content, request.getXContentType(), source); - putRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", putRequest.masterNodeTimeout())); + putRequest.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", putRequest.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(putRequest, request, deprecationLogger, getName()); putRequest.timeout(request.paramAsTime("timeout", putRequest.timeout())); return channel -> client.admin().cluster().putStoredScript(putRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestRestoreSnapshotAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestRestoreSnapshotAction.java index c56fc2993683d..334edb5f83095 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestRestoreSnapshotAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestRestoreSnapshotAction.java @@ -68,8 +68,8 @@ public String getName() { @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { RestoreSnapshotRequest restoreSnapshotRequest = restoreSnapshotRequest(request.param("repository"), request.param("snapshot")); - restoreSnapshotRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", restoreSnapshotRequest.masterNodeTimeout()) + restoreSnapshotRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", restoreSnapshotRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(restoreSnapshotRequest, request, deprecationLogger, getName()); restoreSnapshotRequest.waitForCompletion(request.paramAsBoolean("wait_for_completion", false)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestSnapshotsStatusAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestSnapshotsStatusAction.java index d10703792dcf3..f02e477c51d4d 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestSnapshotsStatusAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestSnapshotsStatusAction.java @@ -83,8 +83,8 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC SnapshotsStatusRequest snapshotsStatusRequest = snapshotsStatusRequest(repository).snapshots(snapshots); snapshotsStatusRequest.ignoreUnavailable(request.paramAsBoolean("ignore_unavailable", snapshotsStatusRequest.ignoreUnavailable())); - snapshotsStatusRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", snapshotsStatusRequest.masterNodeTimeout()) + snapshotsStatusRequest.clusterManagerNodeTimeout( + 
request.paramAsTime("cluster_manager_timeout", snapshotsStatusRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(snapshotsStatusRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().snapshotsStatus(snapshotsStatusRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestVerifyRepositoryAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestVerifyRepositoryAction.java index c759b5b8b7f13..bf7572168926d 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestVerifyRepositoryAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestVerifyRepositoryAction.java @@ -68,8 +68,8 @@ public String getName() { @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { VerifyRepositoryRequest verifyRepositoryRequest = verifyRepositoryRequest(request.param("repository")); - verifyRepositoryRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", verifyRepositoryRequest.masterNodeTimeout()) + verifyRepositoryRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", verifyRepositoryRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(verifyRepositoryRequest, request, deprecationLogger, getName()); verifyRepositoryRequest.timeout(request.paramAsTime("timeout", verifyRepositoryRequest.timeout())); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/dangling/RestDeleteDanglingIndexAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/dangling/RestDeleteDanglingIndexAction.java index c4de2a6d12d98..0cf0b76a25e23 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/dangling/RestDeleteDanglingIndexAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/dangling/RestDeleteDanglingIndexAction.java @@ -33,7 +33,7 @@ package org.opensearch.rest.action.admin.cluster.dangling; import org.opensearch.action.admin.indices.dangling.delete.DeleteDanglingIndexRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.node.NodeClient; import org.opensearch.common.logging.DeprecationLogger; import org.opensearch.rest.BaseRestHandler; @@ -75,7 +75,7 @@ public RestChannelConsumer prepareRequest(final RestRequest request, NodeClient ); deleteRequest.timeout(request.paramAsTime("timeout", deleteRequest.timeout())); - deleteRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", deleteRequest.masterNodeTimeout())); + deleteRequest.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", deleteRequest.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(deleteRequest, request, deprecationLogger, getName()); return channel -> client.admin() diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/dangling/RestImportDanglingIndexAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/dangling/RestImportDanglingIndexAction.java index 4e27354928e97..f2405afdab834 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/dangling/RestImportDanglingIndexAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/dangling/RestImportDanglingIndexAction.java @@ -40,7 
+40,7 @@ import java.util.List; import org.opensearch.action.admin.indices.dangling.import_index.ImportDanglingIndexRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.node.NodeClient; import org.opensearch.common.logging.DeprecationLogger; import org.opensearch.rest.BaseRestHandler; @@ -74,7 +74,7 @@ public RestChannelConsumer prepareRequest(final RestRequest request, NodeClient ); importRequest.timeout(request.paramAsTime("timeout", importRequest.timeout())); - importRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", importRequest.masterNodeTimeout())); + importRequest.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", importRequest.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(importRequest, request, deprecationLogger, getName()); return channel -> client.admin() diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestAddIndexBlockAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestAddIndexBlockAction.java index 3247832a2fa6b..dc8e82321908b 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestAddIndexBlockAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestAddIndexBlockAction.java @@ -73,7 +73,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC IndexMetadata.APIBlock.fromName(request.param("block")), Strings.splitStringByCommaToArray(request.param("index")) ); - addIndexBlockRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", addIndexBlockRequest.masterNodeTimeout())); + addIndexBlockRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", addIndexBlockRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(addIndexBlockRequest, request, deprecationLogger, getName()); addIndexBlockRequest.timeout(request.paramAsTime("timeout", addIndexBlockRequest.timeout())); addIndexBlockRequest.indicesOptions(IndicesOptions.fromRequest(request, addIndexBlockRequest.indicesOptions())); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestCloseIndexAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestCloseIndexAction.java index ceb6e929844e6..93ccbcd4a6f98 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestCloseIndexAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestCloseIndexAction.java @@ -71,7 +71,9 @@ public String getName() { @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { CloseIndexRequest closeIndexRequest = new CloseIndexRequest(Strings.splitStringByCommaToArray(request.param("index"))); - closeIndexRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", closeIndexRequest.masterNodeTimeout())); + closeIndexRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", closeIndexRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(closeIndexRequest, request, deprecationLogger, getName()); closeIndexRequest.timeout(request.paramAsTime("timeout", closeIndexRequest.timeout())); closeIndexRequest.indicesOptions(IndicesOptions.fromRequest(request, closeIndexRequest.indicesOptions())); diff --git 
a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestCreateIndexAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestCreateIndexAction.java index 8d78148776597..e7335eee89c5a 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestCreateIndexAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestCreateIndexAction.java @@ -82,7 +82,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC } createIndexRequest.timeout(request.paramAsTime("timeout", createIndexRequest.timeout())); - createIndexRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", createIndexRequest.masterNodeTimeout())); + createIndexRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", createIndexRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(createIndexRequest, request, deprecationLogger, getName()); createIndexRequest.waitForActiveShards(ActiveShardCount.parseString(request.param("wait_for_active_shards"))); return channel -> client.admin().indices().create(createIndexRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteComponentTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteComponentTemplateAction.java index d59dbff968161..f6c7cf88ed5a7 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteComponentTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteComponentTemplateAction.java @@ -68,7 +68,7 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { DeleteComponentTemplateAction.Request deleteReq = new DeleteComponentTemplateAction.Request(request.param("name")); - deleteReq.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", deleteReq.masterNodeTimeout())); + deleteReq.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", deleteReq.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(deleteReq, request, deprecationLogger, getName()); return channel -> client.execute(DeleteComponentTemplateAction.INSTANCE, deleteReq, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteComposableIndexTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteComposableIndexTemplateAction.java index a1a5dd30f3a64..58ae5604ee325 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteComposableIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteComposableIndexTemplateAction.java @@ -68,7 +68,7 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { DeleteComposableIndexTemplateAction.Request deleteReq = new DeleteComposableIndexTemplateAction.Request(request.param("name")); - deleteReq.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", deleteReq.masterNodeTimeout())); + deleteReq.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", deleteReq.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(deleteReq, request, deprecationLogger, getName()); return channel -> 
client.execute(DeleteComposableIndexTemplateAction.INSTANCE, deleteReq, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteIndexAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteIndexAction.java index 418226f84ba8a..0b14422b5bea8 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteIndexAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteIndexAction.java @@ -71,7 +71,9 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(Strings.splitStringByCommaToArray(request.param("index"))); deleteIndexRequest.timeout(request.paramAsTime("timeout", deleteIndexRequest.timeout())); - deleteIndexRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", deleteIndexRequest.masterNodeTimeout())); + deleteIndexRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", deleteIndexRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(deleteIndexRequest, request, deprecationLogger, getName()); deleteIndexRequest.indicesOptions(IndicesOptions.fromRequest(request, deleteIndexRequest.indicesOptions())); return channel -> client.admin().indices().delete(deleteIndexRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteIndexTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteIndexTemplateAction.java index 9e8d15b261436..7eeb581dcfd20 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestDeleteIndexTemplateAction.java @@ -66,8 +66,8 @@ public String getName() { @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { DeleteIndexTemplateRequest deleteIndexTemplateRequest = new DeleteIndexTemplateRequest(request.param("name")); - deleteIndexTemplateRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", deleteIndexTemplateRequest.masterNodeTimeout()) + deleteIndexTemplateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", deleteIndexTemplateRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(deleteIndexTemplateRequest, request, deprecationLogger, getName()); return channel -> client.admin().indices().deleteTemplate(deleteIndexTemplateRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetComponentTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetComponentTemplateAction.java index 34cb595c09d6f..cc3192e143397 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetComponentTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetComponentTemplateAction.java @@ -80,7 +80,7 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC final GetComponentTemplateAction.Request getRequest = new GetComponentTemplateAction.Request(request.param("name")); getRequest.local(request.paramAsBoolean("local", getRequest.local())); - 
getRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", getRequest.masterNodeTimeout())); + getRequest.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", getRequest.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(getRequest, request, deprecationLogger, getName()); final boolean implicitAll = getRequest.name() == null; diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetComposableIndexTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetComposableIndexTemplateAction.java index c232d4b713a4c..c4725324b44d1 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetComposableIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetComposableIndexTemplateAction.java @@ -79,7 +79,7 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC final GetComposableIndexTemplateAction.Request getRequest = new GetComposableIndexTemplateAction.Request(request.param("name")); getRequest.local(request.paramAsBoolean("local", getRequest.local())); - getRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", getRequest.masterNodeTimeout())); + getRequest.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", getRequest.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(getRequest, request, deprecationLogger, getName()); final boolean implicitAll = getRequest.name() == null; diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetIndexTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetIndexTemplateAction.java index e938fe78a5b0c..17f6d2e7dd560 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetIndexTemplateAction.java @@ -81,8 +81,8 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC final GetIndexTemplatesRequest getIndexTemplatesRequest = new GetIndexTemplatesRequest(names); getIndexTemplatesRequest.local(request.paramAsBoolean("local", getIndexTemplatesRequest.local())); - getIndexTemplatesRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", getIndexTemplatesRequest.masterNodeTimeout()) + getIndexTemplatesRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", getIndexTemplatesRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(getIndexTemplatesRequest, request, deprecationLogger, getName()); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetIndicesAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetIndicesAction.java index ec87ff43bf65b..d0355516655c9 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetIndicesAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetIndicesAction.java @@ -77,7 +77,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC getIndexRequest.indices(indices); getIndexRequest.indicesOptions(IndicesOptions.fromRequest(request, getIndexRequest.indicesOptions())); getIndexRequest.local(request.paramAsBoolean("local", getIndexRequest.local())); - getIndexRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", getIndexRequest.masterNodeTimeout())); 
+ getIndexRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", getIndexRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(getIndexRequest, request, deprecationLogger, getName()); getIndexRequest.humanReadable(request.paramAsBoolean("human", false)); getIndexRequest.includeDefaults(request.paramAsBoolean("include_defaults", false)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetMappingAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetMappingAction.java index 14e7c52f4b10c..51a104fadead4 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetMappingAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetMappingAction.java @@ -102,17 +102,17 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC final GetMappingsRequest getMappingsRequest = new GetMappingsRequest(); getMappingsRequest.indices(indices); getMappingsRequest.indicesOptions(IndicesOptions.fromRequest(request, getMappingsRequest.indicesOptions())); - TimeValue clusterManagerTimeout = request.paramAsTime("cluster_manager_timeout", getMappingsRequest.masterNodeTimeout()); + TimeValue clusterManagerTimeout = request.paramAsTime("cluster_manager_timeout", getMappingsRequest.clusterManagerNodeTimeout()); // TODO: Remove the if condition and statements inside after removing MASTER_ROLE. if (request.hasParam("master_timeout")) { deprecationLogger.deprecate("get_mapping_master_timeout_parameter", MASTER_TIMEOUT_DEPRECATED_MESSAGE); if (request.hasParam("cluster_manager_timeout")) { throw new OpenSearchParseException(DUPLICATE_PARAMETER_ERROR_MESSAGE); } - clusterManagerTimeout = request.paramAsTime("master_timeout", getMappingsRequest.masterNodeTimeout()); + clusterManagerTimeout = request.paramAsTime("master_timeout", getMappingsRequest.clusterManagerNodeTimeout()); } final TimeValue timeout = clusterManagerTimeout; - getMappingsRequest.masterNodeTimeout(timeout); + getMappingsRequest.clusterManagerNodeTimeout(timeout); getMappingsRequest.local(request.paramAsBoolean("local", getMappingsRequest.local())); return channel -> client.admin().indices().getMappings(getMappingsRequest, new RestActionListener(channel) { diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetSettingsAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetSettingsAction.java index ded098ab27f25..c4316c66588b6 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetSettingsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestGetSettingsAction.java @@ -87,7 +87,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC .includeDefaults(renderDefaults) .names(names); getSettingsRequest.local(request.paramAsBoolean("local", getSettingsRequest.local())); - getSettingsRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", getSettingsRequest.masterNodeTimeout())); + getSettingsRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", getSettingsRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(getSettingsRequest, request, deprecationLogger, getName()); return channel -> client.admin().indices().getSettings(getSettingsRequest, new RestToXContentListener<>(channel)); } diff --git 
a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexDeleteAliasesAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexDeleteAliasesAction.java index e8c30895ae89c..a41dfc67a868c 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexDeleteAliasesAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexDeleteAliasesAction.java @@ -73,7 +73,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC IndicesAliasesRequest indicesAliasesRequest = new IndicesAliasesRequest(); indicesAliasesRequest.timeout(request.paramAsTime("timeout", indicesAliasesRequest.timeout())); indicesAliasesRequest.addAliasAction(AliasActions.remove().indices(indices).aliases(aliases)); - indicesAliasesRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", indicesAliasesRequest.masterNodeTimeout())); + indicesAliasesRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", indicesAliasesRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(indicesAliasesRequest, request, deprecationLogger, getName()); return channel -> client.admin().indices().aliases(indicesAliasesRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexPutAliasAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexPutAliasAction.java index 2c6b3d36048e4..e0d7f4ebe5157 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexPutAliasAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexPutAliasAction.java @@ -132,7 +132,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC IndicesAliasesRequest indicesAliasesRequest = new IndicesAliasesRequest(); indicesAliasesRequest.timeout(request.paramAsTime("timeout", indicesAliasesRequest.timeout())); - indicesAliasesRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", indicesAliasesRequest.masterNodeTimeout())); + indicesAliasesRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", indicesAliasesRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(indicesAliasesRequest, request, deprecationLogger, getName()); IndicesAliasesRequest.AliasActions aliasAction = AliasActions.add().indices(indices).alias(alias); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndicesAliasesAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndicesAliasesAction.java index 691fb316871f4..87ec43b2ca54c 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndicesAliasesAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndicesAliasesAction.java @@ -68,7 +68,9 @@ public List routes() { @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { IndicesAliasesRequest indicesAliasesRequest = new IndicesAliasesRequest(); - indicesAliasesRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", indicesAliasesRequest.masterNodeTimeout())); + indicesAliasesRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", indicesAliasesRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(indicesAliasesRequest, request, 
deprecationLogger, getName()); indicesAliasesRequest.timeout(request.paramAsTime("timeout", indicesAliasesRequest.timeout())); try (XContentParser parser = request.contentParser()) { diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestOpenIndexAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestOpenIndexAction.java index eda632ed09892..e95d6764c5b98 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestOpenIndexAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestOpenIndexAction.java @@ -72,7 +72,9 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { OpenIndexRequest openIndexRequest = new OpenIndexRequest(Strings.splitStringByCommaToArray(request.param("index"))); openIndexRequest.timeout(request.paramAsTime("timeout", openIndexRequest.timeout())); - openIndexRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", openIndexRequest.masterNodeTimeout())); + openIndexRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", openIndexRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(openIndexRequest, request, deprecationLogger, getName()); openIndexRequest.indicesOptions(IndicesOptions.fromRequest(request, openIndexRequest.indicesOptions())); String waitForActiveShards = request.param("wait_for_active_shards"); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutComponentTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutComponentTemplateAction.java index 12bf30c48d6a0..a26f1d01f416f 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutComponentTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutComponentTemplateAction.java @@ -70,7 +70,7 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { PutComponentTemplateAction.Request putRequest = new PutComponentTemplateAction.Request(request.param("name")); - putRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", putRequest.masterNodeTimeout())); + putRequest.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", putRequest.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(putRequest, request, deprecationLogger, getName()); putRequest.create(request.paramAsBoolean("create", false)); putRequest.cause(request.param("cause", "api")); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutComposableIndexTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutComposableIndexTemplateAction.java index 6c44a2c2eb68d..e66b9acc62a85 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutComposableIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutComposableIndexTemplateAction.java @@ -70,7 +70,7 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { PutComposableIndexTemplateAction.Request putRequest = new PutComposableIndexTemplateAction.Request(request.param("name")); - putRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", putRequest.masterNodeTimeout())); + 
putRequest.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", putRequest.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(putRequest, request, deprecationLogger, getName()); putRequest.create(request.paramAsBoolean("create", false)); putRequest.cause(request.param("cause", "api")); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutIndexTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutIndexTemplateAction.java index a4184cc1dbd0e..6c499a933a6e2 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutIndexTemplateAction.java @@ -83,7 +83,7 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC putRequest.patterns(Arrays.asList(request.paramAsStringArray("index_patterns", Strings.EMPTY_ARRAY))); } putRequest.order(request.paramAsInt("order", putRequest.order())); - putRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", putRequest.masterNodeTimeout())); + putRequest.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", putRequest.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(putRequest, request, deprecationLogger, getName()); putRequest.create(request.paramAsBoolean("create", false)); putRequest.cause(request.param("cause", "")); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutMappingAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutMappingAction.java index d1c08b977ccf5..8d64e41fc766a 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutMappingAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestPutMappingAction.java @@ -91,7 +91,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC putMappingRequest.source(sourceAsMap); putMappingRequest.timeout(request.paramAsTime("timeout", putMappingRequest.timeout())); - putMappingRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", putMappingRequest.masterNodeTimeout())); + putMappingRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", putMappingRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(putMappingRequest, request, deprecationLogger, getName()); putMappingRequest.indicesOptions(IndicesOptions.fromRequest(request, putMappingRequest.indicesOptions())); putMappingRequest.writeIndexOnly(request.paramAsBoolean("write_index_only", false)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestResizeHandler.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestResizeHandler.java index e2348aa04f07a..1024891d9d44a 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestResizeHandler.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestResizeHandler.java @@ -96,7 +96,7 @@ public final RestChannelConsumer prepareRequest(final RestRequest request, final resizeRequest.setCopySettings(copySettings); request.applyContentParser(resizeRequest::fromXContent); resizeRequest.timeout(request.paramAsTime("timeout", resizeRequest.timeout())); - resizeRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", resizeRequest.masterNodeTimeout())); + 
resizeRequest.clusterManagerNodeTimeout(request.paramAsTime("cluster_manager_timeout", resizeRequest.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(resizeRequest, request, deprecationLogger, getName()); resizeRequest.setWaitForActiveShards(ActiveShardCount.parseString(request.param("wait_for_active_shards"))); return channel -> client.admin().indices().resizeIndex(resizeRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestRolloverIndexAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestRolloverIndexAction.java index 911e6654fd9b6..617b9a52be1b3 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestRolloverIndexAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestRolloverIndexAction.java @@ -77,7 +77,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC request.applyContentParser(parser -> rolloverIndexRequest.fromXContent(parser)); rolloverIndexRequest.dryRun(request.paramAsBoolean("dry_run", false)); rolloverIndexRequest.timeout(request.paramAsTime("timeout", rolloverIndexRequest.timeout())); - rolloverIndexRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", rolloverIndexRequest.masterNodeTimeout())); + rolloverIndexRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", rolloverIndexRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(rolloverIndexRequest, request, deprecationLogger, getName()); rolloverIndexRequest.getCreateIndexRequest() .waitForActiveShards(ActiveShardCount.parseString(request.param("wait_for_active_shards"))); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestSimulateIndexTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestSimulateIndexTemplateAction.java index 5861a452d7fcf..aa5e29fbf46cd 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestSimulateIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestSimulateIndexTemplateAction.java @@ -69,8 +69,8 @@ public String getName() { @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { SimulateIndexTemplateRequest simulateIndexTemplateRequest = new SimulateIndexTemplateRequest(request.param("name")); - simulateIndexTemplateRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", simulateIndexTemplateRequest.masterNodeTimeout()) + simulateIndexTemplateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", simulateIndexTemplateRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(simulateIndexTemplateRequest, request, deprecationLogger, getName()); if (request.hasContent()) { diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestSimulateTemplateAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestSimulateTemplateAction.java index 3ebdc91033851..0cccfbbcf38d7 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestSimulateTemplateAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestSimulateTemplateAction.java @@ -79,7 +79,9 @@ public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client simulateRequest.indexTemplateRequest(indexTemplateRequest); 
} - simulateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", simulateRequest.masterNodeTimeout())); + simulateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", simulateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(simulateRequest, request, deprecationLogger, getName()); return channel -> client.execute(SimulateTemplateAction.INSTANCE, simulateRequest, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestUpdateSettingsAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestUpdateSettingsAction.java index a47a0943fa53f..e70732ff69717 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestUpdateSettingsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestUpdateSettingsAction.java @@ -75,7 +75,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC UpdateSettingsRequest updateSettingsRequest = updateSettingsRequest(Strings.splitStringByCommaToArray(request.param("index"))); updateSettingsRequest.timeout(request.paramAsTime("timeout", updateSettingsRequest.timeout())); updateSettingsRequest.setPreserveExisting(request.paramAsBoolean("preserve_existing", updateSettingsRequest.isPreserveExisting())); - updateSettingsRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", updateSettingsRequest.masterNodeTimeout())); + updateSettingsRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", updateSettingsRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(updateSettingsRequest, request, deprecationLogger, getName()); updateSettingsRequest.indicesOptions(IndicesOptions.fromRequest(request, updateSettingsRequest.indicesOptions())); updateSettingsRequest.fromXContent(request.contentParser()); diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestAllocationAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestAllocationAction.java index b9c414e320adc..60cbb0f366f37 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestAllocationAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestAllocationAction.java @@ -87,7 +87,9 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().routingTable(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestIndicesAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestIndicesAction.java index bdcfb5b577547..a8cdff5775478 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestIndicesAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestIndicesAction.java @@ -78,7 +78,7 @@ import static java.util.Arrays.asList; import 
static java.util.Collections.unmodifiableList; -import static org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest.DEFAULT_MASTER_NODE_TIMEOUT; +import static org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest.DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT; import static org.opensearch.rest.RestRequest.Method.GET; /** @@ -121,14 +121,14 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); final IndicesOptions indicesOptions = IndicesOptions.fromRequest(request, IndicesOptions.strictExpand()); final boolean local = request.paramAsBoolean("local", false); - TimeValue clusterManagerTimeout = request.paramAsTime("cluster_manager_timeout", DEFAULT_MASTER_NODE_TIMEOUT); + TimeValue clusterManagerTimeout = request.paramAsTime("cluster_manager_timeout", DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT); // Remove the if condition and statements inside after removing MASTER_ROLE. if (request.hasParam("master_timeout")) { deprecationLogger.deprecate("cat_indices_master_timeout_parameter", MASTER_TIMEOUT_DEPRECATED_MESSAGE); if (request.hasParam("cluster_manager_timeout")) { throw new OpenSearchParseException(DUPLICATE_PARAMETER_ERROR_MESSAGE); } - clusterManagerTimeout = request.paramAsTime("master_timeout", DEFAULT_MASTER_NODE_TIMEOUT); + clusterManagerTimeout = request.paramAsTime("master_timeout", DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT); } final TimeValue clusterManagerNodeTimeout = clusterManagerTimeout; final boolean includeUnloadedSegments = request.paramAsBoolean("include_unloaded_segments", false); @@ -224,7 +224,7 @@ private void sendGetSettingsRequest( request.indices(indices); request.indicesOptions(indicesOptions); request.local(local); - request.masterNodeTimeout(clusterManagerNodeTimeout); + request.clusterManagerNodeTimeout(clusterManagerNodeTimeout); request.names(IndexSettings.INDEX_SEARCH_THROTTLED.getKey()); client.admin().indices().getSettings(request, listener); @@ -243,7 +243,7 @@ private void sendClusterStateRequest( request.indices(indices); request.indicesOptions(indicesOptions); request.local(local); - request.masterNodeTimeout(clusterManagerNodeTimeout); + request.clusterManagerNodeTimeout(clusterManagerNodeTimeout); client.admin().cluster().state(request, listener); } @@ -261,7 +261,7 @@ private void sendClusterHealthRequest( request.indices(indices); request.indicesOptions(indicesOptions); request.local(local); - request.masterNodeTimeout(clusterManagerNodeTimeout); + request.clusterManagerNodeTimeout(clusterManagerNodeTimeout); client.admin().cluster().health(request, listener); } diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestMasterAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestMasterAction.java index 671777d643ce3..c43e8c6098618 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestMasterAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestMasterAction.java @@ -78,7 +78,9 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().nodes(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + 
request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().state(clusterStateRequest, new RestResponseListener(channel) { diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestNodeAttrsAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestNodeAttrsAction.java index e856c15843620..7b84b3f655522 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestNodeAttrsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestNodeAttrsAction.java @@ -84,7 +84,9 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().nodes(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java index aaa0413dc4c5f..e0cc1e6af6467 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java @@ -115,7 +115,9 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli deprecationLogger.deprecate("cat_nodes_local_parameter", LOCAL_DEPRECATED_MESSAGE); } clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, deprecationLogger, getName()); final boolean fullId = request.paramAsBoolean("full_id", false); return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestPendingClusterTasksAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestPendingClusterTasksAction.java index 8ab324f8e3708..472d04e5a1679 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestPendingClusterTasksAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestPendingClusterTasksAction.java @@ -74,8 +74,8 @@ protected void documentation(StringBuilder sb) { @Override public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { PendingClusterTasksRequest pendingClusterTasksRequest = new PendingClusterTasksRequest(); - pendingClusterTasksRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", pendingClusterTasksRequest.masterNodeTimeout()) + pendingClusterTasksRequest.clusterManagerNodeTimeout( + 
request.paramAsTime("cluster_manager_timeout", pendingClusterTasksRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(pendingClusterTasksRequest, request, deprecationLogger, getName()); pendingClusterTasksRequest.local(request.paramAsBoolean("local", pendingClusterTasksRequest.local())); diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestPluginsAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestPluginsAction.java index 0b2536a188c8c..2f3794cd1b9f9 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestPluginsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestPluginsAction.java @@ -83,7 +83,9 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().nodes(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestRepositoriesAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestRepositoriesAction.java index fb6ec9a035cd2..19079cd9975ba 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestRepositoriesAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestRepositoriesAction.java @@ -65,8 +65,8 @@ public List routes() { public RestChannelConsumer doCatRequest(RestRequest request, NodeClient client) { GetRepositoriesRequest getRepositoriesRequest = new GetRepositoriesRequest(); getRepositoriesRequest.local(request.paramAsBoolean("local", getRepositoriesRequest.local())); - getRepositoriesRequest.masterNodeTimeout( - request.paramAsTime("cluster_manager_timeout", getRepositoriesRequest.masterNodeTimeout()) + getRepositoriesRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", getRepositoriesRequest.clusterManagerNodeTimeout()) ); parseDeprecatedMasterTimeoutParameter(getRepositoriesRequest, request, deprecationLogger, getName()); diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestSegmentsAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestSegmentsAction.java index 72b159cb554ee..1892a4f06bbf9 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestSegmentsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestSegmentsAction.java @@ -87,7 +87,9 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, 
deprecationLogger, getName()); clusterStateRequest.clear().nodes(true).routingTable(true).indices(indices); diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestShardsAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestShardsAction.java index d20b2fb0401e5..6bf24951fe6c9 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestShardsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestShardsAction.java @@ -109,7 +109,9 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, deprecationLogger, getName()); clusterStateRequest.clear().nodes(true).routingTable(true).indices(indices); return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestSnapshotAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestSnapshotAction.java index 86bb8d983b55f..f05a84d9b2aa2 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestSnapshotAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestSnapshotAction.java @@ -80,7 +80,9 @@ public RestChannelConsumer doCatRequest(final RestRequest request, NodeClient cl getSnapshotsRequest.ignoreUnavailable(request.paramAsBoolean("ignore_unavailable", getSnapshotsRequest.ignoreUnavailable())); - getSnapshotsRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", getSnapshotsRequest.masterNodeTimeout())); + getSnapshotsRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", getSnapshotsRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(getSnapshotsRequest, request, deprecationLogger, getName()); return channel -> client.admin() diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestTemplatesAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestTemplatesAction.java index 5c7310df1b6e9..ec454304b6964 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestTemplatesAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestTemplatesAction.java @@ -83,7 +83,9 @@ public RestChannelConsumer doCatRequest(final RestRequest request, NodeClient cl final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().metadata(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().state(clusterStateRequest, new 
RestResponseListener(channel) { diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestThreadPoolAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestThreadPoolAction.java index bf95be3889a60..652bc448144e2 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestThreadPoolAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestThreadPoolAction.java @@ -97,7 +97,9 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli final ClusterStateRequest clusterStateRequest = new ClusterStateRequest(); clusterStateRequest.clear().nodes(true); clusterStateRequest.local(request.paramAsBoolean("local", clusterStateRequest.local())); - clusterStateRequest.masterNodeTimeout(request.paramAsTime("cluster_manager_timeout", clusterStateRequest.masterNodeTimeout())); + clusterStateRequest.clusterManagerNodeTimeout( + request.paramAsTime("cluster_manager_timeout", clusterStateRequest.clusterManagerNodeTimeout()) + ); parseDeprecatedMasterTimeoutParameter(clusterStateRequest, request, deprecationLogger, getName()); return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener(channel) { diff --git a/server/src/main/java/org/opensearch/rest/action/ingest/RestDeletePipelineAction.java b/server/src/main/java/org/opensearch/rest/action/ingest/RestDeletePipelineAction.java index 0248110c4676d..7e9d0dae99427 100644 --- a/server/src/main/java/org/opensearch/rest/action/ingest/RestDeletePipelineAction.java +++ b/server/src/main/java/org/opensearch/rest/action/ingest/RestDeletePipelineAction.java @@ -66,7 +66,7 @@ public String getName() { @Override public RestChannelConsumer prepareRequest(RestRequest restRequest, NodeClient client) throws IOException { DeletePipelineRequest request = new DeletePipelineRequest(restRequest.param("id")); - request.masterNodeTimeout(restRequest.paramAsTime("cluster_manager_timeout", request.masterNodeTimeout())); + request.clusterManagerNodeTimeout(restRequest.paramAsTime("cluster_manager_timeout", request.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(request, restRequest, deprecationLogger, getName()); request.timeout(restRequest.paramAsTime("timeout", request.timeout())); return channel -> client.admin().cluster().deletePipeline(request, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/rest/action/ingest/RestGetPipelineAction.java b/server/src/main/java/org/opensearch/rest/action/ingest/RestGetPipelineAction.java index e9335f4ce36d1..239f2c07e5610 100644 --- a/server/src/main/java/org/opensearch/rest/action/ingest/RestGetPipelineAction.java +++ b/server/src/main/java/org/opensearch/rest/action/ingest/RestGetPipelineAction.java @@ -69,7 +69,7 @@ public String getName() { @Override public RestChannelConsumer prepareRequest(RestRequest restRequest, NodeClient client) throws IOException { GetPipelineRequest request = new GetPipelineRequest(Strings.splitStringByCommaToArray(restRequest.param("id"))); - request.masterNodeTimeout(restRequest.paramAsTime("cluster_manager_timeout", request.masterNodeTimeout())); + request.clusterManagerNodeTimeout(restRequest.paramAsTime("cluster_manager_timeout", request.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(request, restRequest, deprecationLogger, getName()); return channel -> client.admin().cluster().getPipeline(request, new RestStatusToXContentListener<>(channel)); } diff --git 
a/server/src/main/java/org/opensearch/rest/action/ingest/RestPutPipelineAction.java b/server/src/main/java/org/opensearch/rest/action/ingest/RestPutPipelineAction.java index a94dd18dbf8d9..40b2db4bafc45 100644 --- a/server/src/main/java/org/opensearch/rest/action/ingest/RestPutPipelineAction.java +++ b/server/src/main/java/org/opensearch/rest/action/ingest/RestPutPipelineAction.java @@ -71,7 +71,7 @@ public String getName() { public RestChannelConsumer prepareRequest(RestRequest restRequest, NodeClient client) throws IOException { Tuple sourceTuple = restRequest.contentOrSourceParam(); PutPipelineRequest request = new PutPipelineRequest(restRequest.param("id"), sourceTuple.v2(), sourceTuple.v1()); - request.masterNodeTimeout(restRequest.paramAsTime("cluster_manager_timeout", request.masterNodeTimeout())); + request.clusterManagerNodeTimeout(restRequest.paramAsTime("cluster_manager_timeout", request.clusterManagerNodeTimeout())); parseDeprecatedMasterTimeoutParameter(request, restRequest, deprecationLogger, getName()); request.timeout(restRequest.paramAsTime("timeout", request.timeout())); return channel -> client.admin().cluster().putPipeline(request, new RestToXContentListener<>(channel)); diff --git a/server/src/main/java/org/opensearch/script/ScriptService.java b/server/src/main/java/org/opensearch/script/ScriptService.java index a643a31ed4123..303fc5ccbcf88 100644 --- a/server/src/main/java/org/opensearch/script/ScriptService.java +++ b/server/src/main/java/org/opensearch/script/ScriptService.java @@ -39,7 +39,7 @@ import org.opensearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequest; import org.opensearch.action.admin.cluster.storedscripts.GetStoredScriptRequest; import org.opensearch.action.admin.cluster.storedscripts.PutStoredScriptRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.AckedClusterStateUpdateTask; import org.opensearch.cluster.ClusterChangedEvent; import org.opensearch.cluster.ClusterState; diff --git a/server/src/main/java/org/opensearch/snapshots/RestoreService.java b/server/src/main/java/org/opensearch/snapshots/RestoreService.java index cc4b8e556a3c7..15cf0654ad67e 100644 --- a/server/src/main/java/org/opensearch/snapshots/RestoreService.java +++ b/server/src/main/java/org/opensearch/snapshots/RestoreService.java @@ -708,7 +708,7 @@ public void onFailure(String source, Exception e) { @Override public TimeValue timeout() { - return request.masterNodeTimeout(); + return request.clusterManagerNodeTimeout(); } @Override diff --git a/server/src/main/java/org/opensearch/snapshots/SnapshotsService.java b/server/src/main/java/org/opensearch/snapshots/SnapshotsService.java index 37e9c6d51abd0..771945a8246d3 100644 --- a/server/src/main/java/org/opensearch/snapshots/SnapshotsService.java +++ b/server/src/main/java/org/opensearch/snapshots/SnapshotsService.java @@ -377,7 +377,7 @@ public void onFailure(final Exception e) { @Override public TimeValue timeout() { - return request.masterNodeTimeout(); + return request.clusterManagerNodeTimeout(); } }); } @@ -541,7 +541,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, final Cl @Override public TimeValue timeout() { - return request.masterNodeTimeout(); + return request.clusterManagerNodeTimeout(); } }, "create_snapshot [" + snapshotName + ']', listener::onFailure); } @@ -663,7 +663,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, final 
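// --- Illustrative only: the shape of the cluster state update submission that RestoreService and
// --- SnapshotsService bound by the request's cluster-manager timeout in the hunks above. The task
// --- body is a placeholder; only the timeout() override matters for this change.
import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest;
import org.opensearch.cluster.ClusterState;
import org.opensearch.cluster.ClusterStateUpdateTask;
import org.opensearch.cluster.service.ClusterService;
import org.opensearch.common.unit.TimeValue;

final class TimeoutBoundStateUpdate {
    static void submit(ClusterService clusterService, ClusterManagerNodeRequest<?> request, String source) {
        clusterService.submitStateUpdateTask(source, new ClusterStateUpdateTask() {
            @Override
            public ClusterState execute(ClusterState currentState) {
                return currentState; // the real services build the new snapshot/restore state here
            }

            @Override
            public void onFailure(String source, Exception e) {
                // the real services fail the caller's listener here
            }

            @Override
            public TimeValue timeout() {
                // formerly request.masterNodeTimeout(); same value, renamed accessor
                return request.clusterManagerNodeTimeout();
            }
        });
    }
}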
Cl @Override public TimeValue timeout() { - return request.masterNodeTimeout(); + return request.clusterManagerNodeTimeout(); } }, "clone_snapshot [" + request.source() + "][" + snapshotName + ']', listener::onFailure); } @@ -2321,7 +2321,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS @Override public TimeValue timeout() { - return request.masterNodeTimeout(); + return request.clusterManagerNodeTimeout(); } }, "delete snapshot", listener::onFailure); } @@ -3654,7 +3654,7 @@ protected UpdateIndexShardSnapshotStatusResponse read(StreamInput in) throws IOE } @Override - protected void masterOperation( + protected void clusterManagerOperation( UpdateIndexShardSnapshotStatusRequest request, ClusterState state, ActionListener listener diff --git a/server/src/main/java/org/opensearch/snapshots/UpdateIndexShardSnapshotStatusRequest.java b/server/src/main/java/org/opensearch/snapshots/UpdateIndexShardSnapshotStatusRequest.java index db7dcf3cc5c75..90120076c967b 100644 --- a/server/src/main/java/org/opensearch/snapshots/UpdateIndexShardSnapshotStatusRequest.java +++ b/server/src/main/java/org/opensearch/snapshots/UpdateIndexShardSnapshotStatusRequest.java @@ -64,7 +64,7 @@ public UpdateIndexShardSnapshotStatusRequest(Snapshot snapshot, ShardId shardId, this.shardId = shardId; this.status = status; // By default, we keep trying to post snapshot status messages to avoid snapshot processes getting stuck. - this.masterNodeTimeout = TimeValue.timeValueNanos(Long.MAX_VALUE); + this.clusterManagerNodeTimeout = TimeValue.timeValueNanos(Long.MAX_VALUE); } @Override diff --git a/server/src/test/java/org/opensearch/ExceptionSerializationTests.java b/server/src/test/java/org/opensearch/ExceptionSerializationTests.java index 888e855176fe6..2046834fa9585 100644 --- a/server/src/test/java/org/opensearch/ExceptionSerializationTests.java +++ b/server/src/test/java/org/opensearch/ExceptionSerializationTests.java @@ -84,6 +84,7 @@ import org.opensearch.indices.InvalidIndexTemplateException; import org.opensearch.indices.recovery.PeerRecoveryNotFound; import org.opensearch.indices.recovery.RecoverFilesRecoveryException; +import org.opensearch.indices.replication.common.ReplicationFailedException; import org.opensearch.ingest.IngestProcessorException; import org.opensearch.cluster.coordination.NodeHealthCheckFailureException; import org.opensearch.repositories.RepositoryException; @@ -849,6 +850,7 @@ public void testIds() { ids.put(158, PeerRecoveryNotFound.class); ids.put(159, NodeHealthCheckFailureException.class); ids.put(160, NoSeedNodeLeftException.class); + ids.put(161, ReplicationFailedException.class); Map, Integer> reverse = new HashMap<>(); for (Map.Entry> entry : ids.entrySet()) { diff --git a/server/src/test/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequestTests.java b/server/src/test/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequestTests.java index 5dc1adf4e1352..2d474ab8c4d16 100644 --- a/server/src/test/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequestTests.java +++ b/server/src/test/java/org/opensearch/action/admin/cluster/reroute/ClusterRerouteRequestTests.java @@ -32,7 +32,7 @@ package org.opensearch.action.admin.cluster.reroute; -import org.opensearch.action.support.clustermanager.AcknowledgedRequest; +import org.opensearch.action.support.master.AcknowledgedRequest; import org.opensearch.action.support.clustermanager.ClusterManagerNodeRequest; import 
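// --- Sketch of what wire id 161 above guards (assuming ReplicationFailedException keeps the usual
// --- StreamInput constructor): the exception must survive a round trip through the generic
// --- exception serialization machinery, which resolves the registered id on both sides.
import org.opensearch.OpenSearchException;
import org.opensearch.common.io.stream.BytesStreamOutput;
import org.opensearch.common.io.stream.StreamInput;
import org.opensearch.indices.replication.common.ReplicationFailedException;

final class ReplicationFailedExceptionRoundTrip {
    static OpenSearchException roundTrip(ReplicationFailedException original) throws Exception {
        try (BytesStreamOutput out = new BytesStreamOutput()) {
            out.writeException(original);
            try (StreamInput in = out.bytes().streamInput()) {
                return in.readException(); // looks the concrete class up by its registered id
            }
        }
    }
}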
org.opensearch.cluster.routing.allocation.command.AllocateEmptyPrimaryAllocationCommand; import org.opensearch.cluster.routing.allocation.command.AllocateReplicaAllocationCommand; @@ -132,7 +132,7 @@ public void testEqualsAndHashCode() { request.getCommands().commands().toArray(new AllocationCommand[0]) ); copy.dryRun(request.dryRun()).explain(request.explain()).timeout(request.timeout()).setRetryFailed(request.isRetryFailed()); - copy.masterNodeTimeout(request.masterNodeTimeout()); + copy.clusterManagerNodeTimeout(request.clusterManagerNodeTimeout()); assertEquals(request, copy); assertEquals(copy, request); // Commutative assertEquals(request.hashCode(), copy.hashCode()); @@ -162,10 +162,10 @@ public void testEqualsAndHashCode() { assertEquals(request.hashCode(), copy.hashCode()); // Changing clusterManagerNodeTimeout makes requests not equal - copy.masterNodeTimeout(timeValueMillis(request.masterNodeTimeout().millis() + 1)); + copy.clusterManagerNodeTimeout(timeValueMillis(request.clusterManagerNodeTimeout().millis() + 1)); assertNotEquals(request, copy); assertNotEquals(request.hashCode(), copy.hashCode()); - copy.masterNodeTimeout(request.masterNodeTimeout()); + copy.clusterManagerNodeTimeout(request.clusterManagerNodeTimeout()); assertEquals(request, copy); assertEquals(request.hashCode(), copy.hashCode()); @@ -231,8 +231,9 @@ private RestRequest toRestRequest(ClusterRerouteRequest original) throws IOExcep if (original.isRetryFailed() || randomBoolean()) { params.put("retry_failed", Boolean.toString(original.isRetryFailed())); } - if (false == original.masterNodeTimeout().equals(ClusterManagerNodeRequest.DEFAULT_MASTER_NODE_TIMEOUT) || randomBoolean()) { - params.put("cluster_manager_timeout", original.masterNodeTimeout().toString()); + if (false == original.clusterManagerNodeTimeout().equals(ClusterManagerNodeRequest.DEFAULT_CLUSTER_MANAGER_NODE_TIMEOUT) + || randomBoolean()) { + params.put("cluster_manager_timeout", original.clusterManagerNodeTimeout().toString()); } if (original.getCommands() != null) { hasBody = true; diff --git a/server/src/test/java/org/opensearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestTests.java b/server/src/test/java/org/opensearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestTests.java index a629f49c1f791..b2aa4c29841de 100644 --- a/server/src/test/java/org/opensearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestTests.java +++ b/server/src/test/java/org/opensearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestTests.java @@ -120,7 +120,7 @@ public void testToXContent() throws IOException { } if (randomBoolean()) { - original.masterNodeTimeout("60s"); + original.clusterManagerNodeTimeout("60s"); } XContentBuilder builder = original.toXContent(XContentFactory.jsonBuilder(), new MapParams(Collections.emptyMap())); @@ -129,7 +129,7 @@ public void testToXContent() throws IOException { Map map = parser.mapOrdered(); CreateSnapshotRequest processed = new CreateSnapshotRequest((String) map.get("repository"), (String) map.get("snapshot")); processed.waitForCompletion(original.waitForCompletion()); - processed.masterNodeTimeout(original.masterNodeTimeout()); + processed.clusterManagerNodeTimeout(original.clusterManagerNodeTimeout()); processed.source(map); assertEquals(original, processed); diff --git a/server/src/test/java/org/opensearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestTests.java 
b/server/src/test/java/org/opensearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestTests.java index 65c0bc5ae0639..a292dbd915adf 100644 --- a/server/src/test/java/org/opensearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestTests.java +++ b/server/src/test/java/org/opensearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestTests.java @@ -105,7 +105,7 @@ private RestoreSnapshotRequest randomState(RestoreSnapshotRequest instance) { instance.waitForCompletion(randomBoolean()); if (randomBoolean()) { - instance.masterNodeTimeout(randomTimeValue()); + instance.clusterManagerNodeTimeout(randomTimeValue()); } if (randomBoolean()) { @@ -145,7 +145,7 @@ public void testSource() throws IOException { // properties are restored from the original (in the actual REST action this is restored from the // REST path and request parameters). RestoreSnapshotRequest processed = new RestoreSnapshotRequest(original.repository(), original.snapshot()); - processed.masterNodeTimeout(original.masterNodeTimeout()); + processed.clusterManagerNodeTimeout(original.clusterManagerNodeTimeout()); processed.waitForCompletion(original.waitForCompletion()); processed.source(map); diff --git a/server/src/test/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequestTests.java b/server/src/test/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequestTests.java index 26ba4dde946b3..485358ff21d1c 100644 --- a/server/src/test/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequestTests.java +++ b/server/src/test/java/org/opensearch/action/admin/indices/alias/IndicesAliasesRequestTests.java @@ -73,7 +73,7 @@ private IndicesAliasesRequest createTestInstance() { } if (randomBoolean()) { - request.masterNodeTimeout(randomTimeValue()); + request.clusterManagerNodeTimeout(randomTimeValue()); } for (int i = 0; i < numItems; i++) { request.addAliasAction(randomAliasAction()); diff --git a/server/src/test/java/org/opensearch/action/admin/indices/close/CloseIndexRequestTests.java b/server/src/test/java/org/opensearch/action/admin/indices/close/CloseIndexRequestTests.java index fee026fcdcc9e..4a90a23cbd2f0 100644 --- a/server/src/test/java/org/opensearch/action/admin/indices/close/CloseIndexRequestTests.java +++ b/server/src/test/java/org/opensearch/action/admin/indices/close/CloseIndexRequestTests.java @@ -54,7 +54,7 @@ public void testSerialization() throws Exception { deserializedRequest = new CloseIndexRequest(in); } assertEquals(request.timeout(), deserializedRequest.timeout()); - assertEquals(request.masterNodeTimeout(), deserializedRequest.masterNodeTimeout()); + assertEquals(request.clusterManagerNodeTimeout(), deserializedRequest.clusterManagerNodeTimeout()); assertEquals(request.indicesOptions(), deserializedRequest.indicesOptions()); assertEquals(request.getParentTask(), deserializedRequest.getParentTask()); assertEquals(request.waitForActiveShards(), deserializedRequest.waitForActiveShards()); @@ -72,7 +72,7 @@ public void testBwcSerialization() throws Exception { try (StreamInput in = out.bytes().streamInput()) { in.setVersion(out.getVersion()); assertEquals(request.getParentTask(), TaskId.readFromStream(in)); - assertEquals(request.masterNodeTimeout(), in.readTimeValue()); + assertEquals(request.clusterManagerNodeTimeout(), in.readTimeValue()); assertEquals(request.timeout(), in.readTimeValue()); assertArrayEquals(request.indices(), in.readStringArray()); // indices options are not equivalent when sent to an older version and re-read due @@ -96,7 
+96,7 @@ public void testBwcSerialization() throws Exception { try (BytesStreamOutput out = new BytesStreamOutput()) { out.setVersion(version); sample.getParentTask().writeTo(out); - out.writeTimeValue(sample.masterNodeTimeout()); + out.writeTimeValue(sample.clusterManagerNodeTimeout()); out.writeTimeValue(sample.timeout()); out.writeStringArray(sample.indices()); sample.indicesOptions().writeIndicesOptions(out); @@ -110,7 +110,7 @@ public void testBwcSerialization() throws Exception { deserializedRequest = new CloseIndexRequest(in); } assertEquals(sample.getParentTask(), deserializedRequest.getParentTask()); - assertEquals(sample.masterNodeTimeout(), deserializedRequest.masterNodeTimeout()); + assertEquals(sample.clusterManagerNodeTimeout(), deserializedRequest.clusterManagerNodeTimeout()); assertEquals(sample.timeout(), deserializedRequest.timeout()); assertArrayEquals(sample.indices(), deserializedRequest.indices()); // indices options are not equivalent when sent to an older version and re-read due @@ -140,7 +140,7 @@ private CloseIndexRequest randomRequest() { request.timeout(randomPositiveTimeValue()); } if (randomBoolean()) { - request.masterNodeTimeout(randomPositiveTimeValue()); + request.clusterManagerNodeTimeout(randomPositiveTimeValue()); } if (randomBoolean()) { request.setParentTask(randomAlphaOfLength(5), randomNonNegativeLong()); diff --git a/server/src/test/java/org/opensearch/action/admin/indices/get/GetIndexActionTests.java b/server/src/test/java/org/opensearch/action/admin/indices/get/GetIndexActionTests.java index 5648f14fa69dd..001efb32c2988 100644 --- a/server/src/test/java/org/opensearch/action/admin/indices/get/GetIndexActionTests.java +++ b/server/src/test/java/org/opensearch/action/admin/indices/get/GetIndexActionTests.java @@ -145,14 +145,14 @@ class TestTransportGetIndexAction extends TransportGetIndexAction { } @Override - protected void doMasterOperation( + protected void doClusterManagerOperation( GetIndexRequest request, String[] concreteIndices, ClusterState state, ActionListener listener ) { ClusterState stateWithIndex = ClusterStateCreationUtils.state(indexName, 1, 1); - super.doMasterOperation(request, concreteIndices, stateWithIndex, listener); + super.doClusterManagerOperation(request, concreteIndices, stateWithIndex, listener); } } diff --git a/server/src/test/java/org/opensearch/action/admin/indices/rollover/TransportRolloverActionTests.java b/server/src/test/java/org/opensearch/action/admin/indices/rollover/TransportRolloverActionTests.java index 8ef8de81041e3..b206c2e19a65b 100644 --- a/server/src/test/java/org/opensearch/action/admin/indices/rollover/TransportRolloverActionTests.java +++ b/server/src/test/java/org/opensearch/action/admin/indices/rollover/TransportRolloverActionTests.java @@ -298,7 +298,7 @@ public void testConditionEvaluationWhenAliasToWriteAndReadIndicesConsidersOnlyPr RolloverRequest rolloverRequest = new RolloverRequest("logs-alias", "logs-index-000003"); rolloverRequest.addMaxIndexDocsCondition(500L); rolloverRequest.dryRun(true); - transportRolloverAction.masterOperation(mock(Task.class), rolloverRequest, stateBefore, future); + transportRolloverAction.clusterManagerOperation(mock(Task.class), rolloverRequest, stateBefore, future); RolloverResponse response = future.actionGet(); assertThat(response.getOldIndex(), equalTo("logs-index-000002")); @@ -314,7 +314,7 @@ public void testConditionEvaluationWhenAliasToWriteAndReadIndicesConsidersOnlyPr rolloverRequest = new RolloverRequest("logs-alias", "logs-index-000003"); 
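// --- Sketch mirroring the writer half of testBwcSerialization above: the rename only swaps the
// --- getter used on the sending side; the bytes an older node reads are unchanged.
import org.opensearch.action.admin.indices.close.CloseIndexRequest;
import org.opensearch.common.io.stream.BytesStreamOutput;

final class CloseIndexBwcWirePrefix {
    static void write(CloseIndexRequest request, BytesStreamOutput out) throws Exception {
        request.getParentTask().writeTo(out);                     // 1. parent task id
        out.writeTimeValue(request.clusterManagerNodeTimeout());  // 2. cluster-manager node timeout
        out.writeTimeValue(request.timeout());                    // 3. ack timeout
        out.writeStringArray(request.indices());                  // 4. indices
        request.indicesOptions().writeIndicesOptions(out);        // 5. indices options
    }
}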
rolloverRequest.addMaxIndexDocsCondition(300L); rolloverRequest.dryRun(true); - transportRolloverAction.masterOperation(mock(Task.class), rolloverRequest, stateBefore, future); + transportRolloverAction.clusterManagerOperation(mock(Task.class), rolloverRequest, stateBefore, future); response = future.actionGet(); assertThat(response.getOldIndex(), equalTo("logs-index-000002")); diff --git a/server/src/test/java/org/opensearch/action/admin/indices/settings/get/GetSettingsActionTests.java b/server/src/test/java/org/opensearch/action/admin/indices/settings/get/GetSettingsActionTests.java index cefb59c40949c..dfbc1dccb9d6a 100644 --- a/server/src/test/java/org/opensearch/action/admin/indices/settings/get/GetSettingsActionTests.java +++ b/server/src/test/java/org/opensearch/action/admin/indices/settings/get/GetSettingsActionTests.java @@ -84,9 +84,13 @@ class TestTransportGetSettingsAction extends TransportGetSettingsAction { } @Override - protected void masterOperation(GetSettingsRequest request, ClusterState state, ActionListener listener) { + protected void clusterManagerOperation( + GetSettingsRequest request, + ClusterState state, + ActionListener listener + ) { ClusterState stateWithIndex = ClusterStateCreationUtils.state(indexName, 1, 1); - super.masterOperation(request, stateWithIndex, listener); + super.clusterManagerOperation(request, stateWithIndex, listener); } } diff --git a/server/src/test/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequestSerializationTests.java b/server/src/test/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequestSerializationTests.java index 5b7f5b82bfa6c..1072c183e164a 100644 --- a/server/src/test/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequestSerializationTests.java +++ b/server/src/test/java/org/opensearch/action/admin/indices/settings/put/UpdateSettingsRequestSerializationTests.java @@ -56,7 +56,9 @@ protected UpdateSettingsRequest mutateInstance(UpdateSettingsRequest request) { UpdateSettingsRequest mutation = copyRequest(request); List mutators = new ArrayList<>(); Supplier timeValueSupplier = () -> TimeValue.parseTimeValue(OpenSearchTestCase.randomTimeValue(), "_setting"); - mutators.add(() -> mutation.masterNodeTimeout(randomValueOtherThan(request.masterNodeTimeout(), timeValueSupplier))); + mutators.add( + () -> mutation.clusterManagerNodeTimeout(randomValueOtherThan(request.clusterManagerNodeTimeout(), timeValueSupplier)) + ); mutators.add(() -> mutation.timeout(randomValueOtherThan(request.timeout(), timeValueSupplier))); mutators.add(() -> mutation.settings(mutateSettings(request.settings()))); mutators.add(() -> mutation.indices(mutateIndices(request.indices()))); @@ -87,7 +89,7 @@ public static UpdateSettingsRequest createTestItem() { UpdateSettingsRequest request = randomBoolean() ? 
new UpdateSettingsRequest(randomSettings(0, 2)) : new UpdateSettingsRequest(randomSettings(0, 2), randomIndicesNames(0, 2)); - request.masterNodeTimeout(randomTimeValue()); + request.clusterManagerNodeTimeout(randomTimeValue()); request.timeout(randomTimeValue()); request.indicesOptions(IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean())); request.setPreserveExisting(randomBoolean()); @@ -96,7 +98,7 @@ public static UpdateSettingsRequest createTestItem() { private static UpdateSettingsRequest copyRequest(UpdateSettingsRequest request) { UpdateSettingsRequest result = new UpdateSettingsRequest(request.settings(), request.indices()); - result.masterNodeTimeout(request.masterNodeTimeout()); + result.clusterManagerNodeTimeout(request.clusterManagerNodeTimeout()); result.timeout(request.timeout()); result.indicesOptions(request.indicesOptions()); result.setPreserveExisting(request.isPreserveExisting()); diff --git a/server/src/test/java/org/opensearch/action/search/AbstractSearchAsyncActionTests.java b/server/src/test/java/org/opensearch/action/search/AbstractSearchAsyncActionTests.java index f4b45b9c36f96..b44b59b8a4ad5 100644 --- a/server/src/test/java/org/opensearch/action/search/AbstractSearchAsyncActionTests.java +++ b/server/src/test/java/org/opensearch/action/search/AbstractSearchAsyncActionTests.java @@ -32,6 +32,8 @@ package org.opensearch.action.search; +import org.junit.After; +import org.junit.Before; import org.opensearch.action.ActionListener; import org.opensearch.action.OriginalIndices; import org.opensearch.action.support.IndicesOptions; @@ -43,25 +45,34 @@ import org.opensearch.index.Index; import org.opensearch.index.query.MatchAllQueryBuilder; import org.opensearch.index.shard.ShardId; +import org.opensearch.index.shard.ShardNotFoundException; import org.opensearch.search.SearchPhaseResult; import org.opensearch.search.SearchShardTarget; import org.opensearch.search.internal.AliasFilter; import org.opensearch.search.internal.InternalSearchResponse; import org.opensearch.search.internal.ShardSearchContextId; import org.opensearch.search.internal.ShardSearchRequest; +import org.opensearch.search.query.QuerySearchResult; import org.opensearch.test.OpenSearchTestCase; import org.opensearch.transport.Transport; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collections; import java.util.HashSet; import java.util.List; import java.util.Set; +import java.util.UUID; import java.util.concurrent.CopyOnWriteArraySet; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; import java.util.function.BiFunction; +import java.util.stream.IntStream; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThanOrEqualTo; @@ -71,6 +82,22 @@ public class AbstractSearchAsyncActionTests extends OpenSearchTestCase { private final List> resolvedNodes = new ArrayList<>(); private final Set releasedContexts = new CopyOnWriteArraySet<>(); + private ExecutorService executor; + + @Before + @Override + public void setUp() throws Exception { + super.setUp(); + executor = Executors.newFixedThreadPool(1); + } + + @After + @Override + public void tearDown() throws Exception { + super.tearDown(); + executor.shutdown(); + 
assertTrue(executor.awaitTermination(1, TimeUnit.SECONDS)); + } private AbstractSearchAsyncAction createAction( SearchRequest request, @@ -78,6 +105,26 @@ private AbstractSearchAsyncAction createAction( ActionListener listener, final boolean controlled, final AtomicLong expected + ) { + return createAction( + request, + results, + listener, + controlled, + false, + expected, + new SearchShardIterator(null, null, Collections.emptyList(), null) + ); + } + + private AbstractSearchAsyncAction createAction( + SearchRequest request, + ArraySearchPhaseResults results, + ActionListener listener, + final boolean controlled, + final boolean failExecutePhaseOnShard, + final AtomicLong expected, + final SearchShardIterator... shards ) { final Runnable runnable; final TransportSearchAction.SearchTimeProvider timeProvider; @@ -105,10 +152,10 @@ private AbstractSearchAsyncAction createAction( Collections.singletonMap("foo", new AliasFilter(new MatchAllQueryBuilder())), Collections.singletonMap("foo", 2.0f), Collections.singletonMap("name", Sets.newHashSet("bar", "baz")), - null, + executor, request, listener, - new GroupShardsIterator<>(Collections.singletonList(new SearchShardIterator(null, null, Collections.emptyList(), null))), + new GroupShardsIterator<>(Arrays.asList(shards)), timeProvider, ClusterState.EMPTY_STATE, null, @@ -126,7 +173,13 @@ protected void executePhaseOnShard( final SearchShardIterator shardIt, final SearchShardTarget shard, final SearchActionListener listener - ) {} + ) { + if (failExecutePhaseOnShard) { + listener.onFailure(new ShardNotFoundException(shardIt.shardId())); + } else { + listener.onResponse(new QuerySearchResult()); + } + } @Override long buildTookInMillis() { @@ -328,6 +381,102 @@ private static ArraySearchPhaseResults phaseResults( return phaseResults; } + public void testOnShardFailurePhaseDoneFailure() throws InterruptedException { + final Index index = new Index("test", UUID.randomUUID().toString()); + final CountDownLatch latch = new CountDownLatch(1); + final AtomicBoolean fail = new AtomicBoolean(true); + + final SearchShardIterator[] shards = IntStream.range(0, 5 + randomInt(10)) + .mapToObj(i -> new SearchShardIterator(null, new ShardId(index, i), List.of("n1", "n2", "n3"), null, null, null)) + .toArray(SearchShardIterator[]::new); + + SearchRequest searchRequest = new SearchRequest().allowPartialSearchResults(true); + searchRequest.setMaxConcurrentShardRequests(1); + + final ArraySearchPhaseResults queryResult = new ArraySearchPhaseResults<>(shards.length); + AbstractSearchAsyncAction action = createAction( + searchRequest, + queryResult, + new ActionListener() { + @Override + public void onResponse(SearchResponse response) { + + } + + @Override + public void onFailure(Exception e) { + if (fail.compareAndExchange(true, false)) { + try { + throw new RuntimeException("Simulated exception"); + } finally { + executor.submit(() -> latch.countDown()); + } + } + } + }, + false, + true, + new AtomicLong(), + shards + ); + action.run(); + assertTrue(latch.await(1, TimeUnit.SECONDS)); + + InternalSearchResponse internalSearchResponse = InternalSearchResponse.empty(); + SearchResponse searchResponse = action.buildSearchResponse(internalSearchResponse, action.buildShardFailures(), null, null); + assertSame(searchResponse.getAggregations(), internalSearchResponse.aggregations()); + assertSame(searchResponse.getSuggest(), internalSearchResponse.suggest()); + assertSame(searchResponse.getProfileResults(), internalSearchResponse.profile()); + 
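// --- The new tests above rely on a small fail-exactly-once idiom; isolated here because
// --- AtomicBoolean.compareAndExchange returns the *previous* value, which is easy to misread.
import java.util.concurrent.atomic.AtomicBoolean;

final class FailOnce {
    private final AtomicBoolean fail = new AtomicBoolean(true);

    void onCallback() {
        // true is returned only on the first call, so the simulated failure fires exactly once
        if (fail.compareAndExchange(true, false)) {
            throw new RuntimeException("Simulated exception");
        }
    }
}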
assertSame(searchResponse.getHits(), internalSearchResponse.hits()); + assertThat(searchResponse.getSuccessfulShards(), equalTo(0)); + } + + public void testOnShardSuccessPhaseDoneFailure() throws InterruptedException { + final Index index = new Index("test", UUID.randomUUID().toString()); + final CountDownLatch latch = new CountDownLatch(1); + final AtomicBoolean fail = new AtomicBoolean(true); + + final SearchShardIterator[] shards = IntStream.range(0, 5 + randomInt(10)) + .mapToObj(i -> new SearchShardIterator(null, new ShardId(index, i), List.of("n1", "n2", "n3"), null, null, null)) + .toArray(SearchShardIterator[]::new); + + SearchRequest searchRequest = new SearchRequest().allowPartialSearchResults(true); + searchRequest.setMaxConcurrentShardRequests(1); + + final ArraySearchPhaseResults queryResult = new ArraySearchPhaseResults<>(shards.length); + AbstractSearchAsyncAction action = createAction( + searchRequest, + queryResult, + new ActionListener() { + @Override + public void onResponse(SearchResponse response) { + if (fail.compareAndExchange(true, false)) { + throw new RuntimeException("Simulated exception"); + } + } + + @Override + public void onFailure(Exception e) { + executor.submit(() -> latch.countDown()); + } + }, + false, + false, + new AtomicLong(), + shards + ); + action.run(); + assertTrue(latch.await(1, TimeUnit.SECONDS)); + + InternalSearchResponse internalSearchResponse = InternalSearchResponse.empty(); + SearchResponse searchResponse = action.buildSearchResponse(internalSearchResponse, action.buildShardFailures(), null, null); + assertSame(searchResponse.getAggregations(), internalSearchResponse.aggregations()); + assertSame(searchResponse.getSuggest(), internalSearchResponse.suggest()); + assertSame(searchResponse.getProfileResults(), internalSearchResponse.profile()); + assertSame(searchResponse.getHits(), internalSearchResponse.hits()); + assertThat(searchResponse.getSuccessfulShards(), equalTo(shards.length)); + } + private static final class PhaseResult extends SearchPhaseResult { PhaseResult(ShardSearchContextId contextId) { this.contextId = contextId; diff --git a/server/src/test/java/org/opensearch/action/support/clustermanager/ShardsAcknowledgedResponseTests.java b/server/src/test/java/org/opensearch/action/support/clustermanager/ShardsAcknowledgedResponseTests.java index 9adef2732083d..6bf854182d5d6 100644 --- a/server/src/test/java/org/opensearch/action/support/clustermanager/ShardsAcknowledgedResponseTests.java +++ b/server/src/test/java/org/opensearch/action/support/clustermanager/ShardsAcknowledgedResponseTests.java @@ -32,6 +32,7 @@ package org.opensearch.action.support.clustermanager; import org.opensearch.Version; +import org.opensearch.action.support.master.ShardsAcknowledgedResponse; import org.opensearch.common.io.stream.NamedWriteableRegistry; import org.opensearch.common.io.stream.StreamInput; import org.opensearch.common.io.stream.StreamOutput; diff --git a/server/src/test/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeActionTests.java b/server/src/test/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeActionTests.java index 879a2f8ea953f..e9b53c68cb26f 100644 --- a/server/src/test/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeActionTests.java +++ b/server/src/test/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeActionTests.java @@ -229,7 +229,7 @@ protected Response read(StreamInput in) throws IOException { } @Override - 
protected void masterOperation(Request request, ClusterState state, ActionListener listener) throws Exception { + protected void clusterManagerOperation(Request request, ClusterState state, ActionListener listener) throws Exception { listener.onResponse(new Response()); // default implementation, overridden in specific tests } @@ -252,7 +252,7 @@ public void testLocalOperationWithoutBlocks() throws ExecutionException, Interru new Action("internal:testAction", transportService, clusterService, threadPool) { @Override - protected void masterOperation(Task task, Request request, ClusterState state, ActionListener listener) { + protected void clusterManagerOperation(Task task, Request request, ClusterState state, ActionListener listener) { if (clusterManagerOperationFailure) { listener.onFailure(exception); } else { @@ -278,7 +278,7 @@ public void testLocalOperationWithBlocks() throws ExecutionException, Interrupte final boolean retryableBlock = randomBoolean(); final boolean unblockBeforeTimeout = randomBoolean(); - Request request = new Request().masterNodeTimeout(TimeValue.timeValueSeconds(unblockBeforeTimeout ? 60 : 0)); + Request request = new Request().clusterManagerNodeTimeout(TimeValue.timeValueSeconds(unblockBeforeTimeout ? 60 : 0)); PlainActionFuture listener = new PlainActionFuture<>(); ClusterBlock block = new ClusterBlock(1, "", retryableBlock, true, false, randomFrom(RestStatus.values()), ClusterBlockLevel.ALL); @@ -324,7 +324,7 @@ protected ClusterBlockException checkBlock(Request request, ClusterState state) public void testCheckBlockThrowsException() throws InterruptedException { boolean throwExceptionOnRetry = randomBoolean(); - Request request = new Request().masterNodeTimeout(TimeValue.timeValueSeconds(60)); + Request request = new Request().clusterManagerNodeTimeout(TimeValue.timeValueSeconds(60)); PlainActionFuture listener = new PlainActionFuture<>(); ClusterBlock block = new ClusterBlock(1, "", true, true, false, randomFrom(RestStatus.values()), ClusterBlockLevel.ALL); @@ -377,7 +377,7 @@ protected boolean localExecute(Request request) { } public void testClusterManagerNotAvailable() throws ExecutionException, InterruptedException { - Request request = new Request().masterNodeTimeout(TimeValue.timeValueSeconds(0)); + Request request = new Request().clusterManagerNodeTimeout(TimeValue.timeValueSeconds(0)); setState(clusterService, ClusterStateCreationUtils.state(localNode, null, allNodes)); PlainActionFuture listener = new PlainActionFuture<>(); new Action("internal:testAction", transportService, clusterService, threadPool).execute(request, listener); @@ -418,7 +418,7 @@ public void testDelegateToClusterManager() throws ExecutionException, Interrupte public void testDelegateToFailingClusterManager() throws ExecutionException, InterruptedException { boolean failsWithConnectTransportException = randomBoolean(); boolean rejoinSameClusterManager = failsWithConnectTransportException && randomBoolean(); - Request request = new Request().masterNodeTimeout(TimeValue.timeValueSeconds(failsWithConnectTransportException ? 60 : 0)); + Request request = new Request().clusterManagerNodeTimeout(TimeValue.timeValueSeconds(failsWithConnectTransportException ? 
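// --- Sketch of the renamed hook exercised by the tests above: concrete actions now override
// --- clusterManagerOperation(...) instead of masterOperation(...). Request, Response and this class
// --- are placeholders; the real base is TransportClusterManagerNodeAction with its transport wiring.
import org.opensearch.action.ActionListener;
import org.opensearch.cluster.ClusterState;

class ExampleClusterManagerAction<Request, Response> {
    private final Response cannedResponse;

    ExampleClusterManagerAction(Response cannedResponse) {
        this.cannedResponse = cannedResponse;
    }

    protected void clusterManagerOperation(Request request, ClusterState state, ActionListener<Response> listener) {
        // runs on the elected cluster-manager node against the latest observed cluster state
        listener.onResponse(cannedResponse);
    }
}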
60 : 0)); DiscoveryNode clusterManagerNode = this.remoteNode; setState( clusterService, @@ -502,7 +502,7 @@ public void testDelegateToFailingClusterManager() throws ExecutionException, Int } public void testClusterManagerFailoverAfterStepDown() throws ExecutionException, InterruptedException { - Request request = new Request().masterNodeTimeout(TimeValue.timeValueHours(1)); + Request request = new Request().clusterManagerNodeTimeout(TimeValue.timeValueHours(1)); PlainActionFuture listener = new PlainActionFuture<>(); final Response response = new Response(); @@ -511,7 +511,8 @@ public void testClusterManagerFailoverAfterStepDown() throws ExecutionException, new Action("internal:testAction", transportService, clusterService, threadPool) { @Override - protected void masterOperation(Request request, ClusterState state, ActionListener listener) throws Exception { + protected void clusterManagerOperation(Request request, ClusterState state, ActionListener listener) + throws Exception { // The other node has become cluster-manager, simulate failures of this node while publishing cluster state through // ZenDiscovery setState(clusterService, ClusterStateCreationUtils.state(localNode, remoteNode, allNodes)); diff --git a/server/src/test/java/org/opensearch/action/support/clustermanager/TransportMasterNodeActionUtils.java b/server/src/test/java/org/opensearch/action/support/clustermanager/TransportMasterNodeActionUtils.java index 3927cd1d13040..b9abddc5622c9 100644 --- a/server/src/test/java/org/opensearch/action/support/clustermanager/TransportMasterNodeActionUtils.java +++ b/server/src/test/java/org/opensearch/action/support/clustermanager/TransportMasterNodeActionUtils.java @@ -39,7 +39,7 @@ public class TransportMasterNodeActionUtils { /** - * Allows to directly call {@link TransportClusterManagerNodeAction#masterOperation(ClusterManagerNodeRequest, ClusterState, ActionListener)} which is + * Allows to directly call {@link TransportClusterManagerNodeAction#clusterManagerOperation(ClusterManagerNodeRequest, ClusterState, ActionListener)} which is * a protected method. 
*/ public static , Response extends ActionResponse> void runClusterManagerOperation( @@ -49,6 +49,6 @@ public static , Response exte ActionListener actionListener ) throws Exception { assert clusterManagerNodeAction.checkBlock(request, clusterState) == null; - clusterManagerNodeAction.masterOperation(request, clusterState, actionListener); + clusterManagerNodeAction.clusterManagerOperation(request, clusterState, actionListener); } } diff --git a/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java b/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java index bfc388119c609..c3a16a1e25bc8 100644 --- a/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java @@ -35,7 +35,7 @@ import org.opensearch.Version; import org.opensearch.action.ActionListener; import org.opensearch.action.admin.indices.alias.Alias; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.metadata.MetadataIndexTemplateService.PutRequest; import org.opensearch.cluster.service.ClusterService; diff --git a/server/src/test/java/org/opensearch/cluster/metadata/TemplateUpgradeServiceTests.java b/server/src/test/java/org/opensearch/cluster/metadata/TemplateUpgradeServiceTests.java index ed7195df367bc..1e52fa380793e 100644 --- a/server/src/test/java/org/opensearch/cluster/metadata/TemplateUpgradeServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/metadata/TemplateUpgradeServiceTests.java @@ -36,7 +36,7 @@ import org.opensearch.action.ActionListener; import org.opensearch.action.admin.indices.template.delete.DeleteIndexTemplateRequest; import org.opensearch.action.admin.indices.template.put.PutIndexTemplateRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.AdminClient; import org.opensearch.client.Client; import org.opensearch.client.IndicesAdminClient; diff --git a/server/src/test/java/org/opensearch/cluster/service/ClusterApplierServiceTests.java b/server/src/test/java/org/opensearch/cluster/service/ClusterApplierServiceTests.java index b9b939f28e365..8a7e14c63d3b0 100644 --- a/server/src/test/java/org/opensearch/cluster/service/ClusterApplierServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/service/ClusterApplierServiceTests.java @@ -298,12 +298,12 @@ public void testLocalNodeClusterManagerListenerCallbacks() { AtomicBoolean isClusterManager = new AtomicBoolean(); timedClusterApplierService.addLocalNodeMasterListener(new LocalNodeMasterListener() { @Override - public void onClusterManager() { + public void onMaster() { isClusterManager.set(true); } @Override - public void offClusterManager() { + public void offMaster() { isClusterManager.set(false); } }); diff --git a/server/src/test/java/org/opensearch/common/settings/ConsistentSettingsServiceTests.java b/server/src/test/java/org/opensearch/common/settings/ConsistentSettingsServiceTests.java index 8a872bc50aeb0..e7873723bec22 100644 --- a/server/src/test/java/org/opensearch/common/settings/ConsistentSettingsServiceTests.java +++ b/server/src/test/java/org/opensearch/common/settings/ConsistentSettingsServiceTests.java @@ -75,7 +75,7 @@ public void 
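// --- Sketch of the callback contract ClusterApplierServiceTests exercises above; note that this
// --- branch keeps the legacy onMaster()/offMaster() names on LocalNodeMasterListener.
import java.util.concurrent.atomic.AtomicBoolean;
import org.opensearch.cluster.LocalNodeMasterListener;
import org.opensearch.cluster.service.ClusterApplierService;

final class ClusterManagerStatusTracker {
    private final AtomicBoolean isClusterManager = new AtomicBoolean();

    void register(ClusterApplierService applierService) {
        applierService.addLocalNodeMasterListener(new LocalNodeMasterListener() {
            @Override
            public void onMaster() {
                isClusterManager.set(true); // local node was just elected cluster-manager
            }

            @Override
            public void offMaster() {
                isClusterManager.set(false); // local node stepped down
            }
        });
    }

    boolean isClusterManager() {
        return isClusterManager.get();
    }
}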
testSingleStringSetting() throws Exception { // hashes not yet published assertThat(new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).areAllConsistent(), is(false)); // publish - new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).newHashPublisher().onClusterManager(); + new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).newHashPublisher().onMaster(); ConsistentSettingsService consistentService = new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)); assertThat(consistentService.areAllConsistent(), is(true)); // change value @@ -83,7 +83,7 @@ public void testSingleStringSetting() throws Exception { assertThat(consistentService.areAllConsistent(), is(false)); assertThat(new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).areAllConsistent(), is(false)); // publish change - new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).newHashPublisher().onClusterManager(); + new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).newHashPublisher().onMaster(); assertThat(consistentService.areAllConsistent(), is(true)); assertThat(new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).areAllConsistent(), is(true)); } @@ -108,7 +108,7 @@ public void testSingleAffixSetting() throws Exception { is(false) ); // publish - new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).newHashPublisher().onClusterManager(); + new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).newHashPublisher().onMaster(); ConsistentSettingsService consistentService = new ConsistentSettingsService( settings, clusterService, @@ -123,7 +123,7 @@ public void testSingleAffixSetting() throws Exception { is(false) ); // publish change - new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).newHashPublisher().onClusterManager(); + new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).newHashPublisher().onMaster(); assertThat(consistentService.areAllConsistent(), is(true)); assertThat(new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).areAllConsistent(), is(true)); // add value @@ -136,7 +136,7 @@ public void testSingleAffixSetting() throws Exception { is(false) ); // publish - new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).newHashPublisher().onClusterManager(); + new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).newHashPublisher().onMaster(); assertThat(new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).areAllConsistent(), is(true)); // remove value secureSettings = new MockSecureSettings(); @@ -173,7 +173,7 @@ public void testStringAndAffixSettings() throws Exception { is(false) ); // publish only the simple string setting - new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).newHashPublisher().onClusterManager(); + new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).newHashPublisher().onMaster(); assertThat(new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).areAllConsistent(), is(true)); assertThat( new ConsistentSettingsService(settings, clusterService, 
Arrays.asList(affixStringSetting)).areAllConsistent(), @@ -184,7 +184,7 @@ public void testStringAndAffixSettings() throws Exception { is(false) ); // publish only the affix string setting - new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).newHashPublisher().onClusterManager(); + new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).newHashPublisher().onMaster(); assertThat(new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).areAllConsistent(), is(false)); assertThat(new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).areAllConsistent(), is(true)); assertThat( @@ -193,7 +193,7 @@ public void testStringAndAffixSettings() throws Exception { ); // publish both settings new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting, affixStringSetting)).newHashPublisher() - .onClusterManager(); + .onMaster(); assertThat(new ConsistentSettingsService(settings, clusterService, Arrays.asList(stringSetting)).areAllConsistent(), is(true)); assertThat(new ConsistentSettingsService(settings, clusterService, Arrays.asList(affixStringSetting)).areAllConsistent(), is(true)); assertThat( diff --git a/server/src/test/java/org/opensearch/index/store/StoreTests.java b/server/src/test/java/org/opensearch/index/store/StoreTests.java index d99bde4764adf..b6bced9f038c0 100644 --- a/server/src/test/java/org/opensearch/index/store/StoreTests.java +++ b/server/src/test/java/org/opensearch/index/store/StoreTests.java @@ -31,7 +31,6 @@ package org.opensearch.index.store; -import org.apache.lucene.tests.analysis.MockAnalyzer; import org.apache.lucene.codecs.CodecUtil; import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; @@ -51,7 +50,6 @@ import org.apache.lucene.index.SegmentInfos; import org.apache.lucene.index.SnapshotDeletionPolicy; import org.apache.lucene.index.Term; -import org.apache.lucene.tests.store.BaseDirectoryWrapper; import org.apache.lucene.store.ByteBuffersDirectory; import org.apache.lucene.store.ChecksumIndexInput; import org.apache.lucene.store.Directory; @@ -60,9 +58,12 @@ import org.apache.lucene.store.IndexInput; import org.apache.lucene.store.IndexOutput; import org.apache.lucene.store.NIOFSDirectory; -import org.apache.lucene.util.BytesRef; +import org.apache.lucene.tests.analysis.MockAnalyzer; +import org.apache.lucene.tests.store.BaseDirectoryWrapper; import org.apache.lucene.tests.util.TestUtil; +import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.Version; +import org.hamcrest.Matchers; import org.opensearch.ExceptionsHelper; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.common.UUIDs; @@ -81,9 +82,8 @@ import org.opensearch.index.shard.ShardId; import org.opensearch.indices.store.TransportNodesListShardStoreMetadata; import org.opensearch.test.DummyShardLock; -import org.opensearch.test.OpenSearchTestCase; import org.opensearch.test.IndexSettingsModule; -import org.hamcrest.Matchers; +import org.opensearch.test.OpenSearchTestCase; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; @@ -102,7 +102,6 @@ import java.util.concurrent.atomic.AtomicInteger; import static java.util.Collections.unmodifiableMap; -import static org.opensearch.test.VersionUtils.randomVersion; import static org.hamcrest.Matchers.anyOf; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.empty; @@ -114,6 +113,7 @@ 
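// --- Condensed form of the publish-then-verify flow the ConsistentSettingsService tests above
// --- repeat; settings, clusterService and secureSetting are assumed inputs from the test fixture.
import java.util.Arrays;
import org.opensearch.cluster.service.ClusterService;
import org.opensearch.common.settings.ConsistentSettingsService;
import org.opensearch.common.settings.Setting;
import org.opensearch.common.settings.Settings;

final class ConsistentSecureSettingsCheck {
    static boolean publishAndVerify(Settings settings, ClusterService clusterService, Setting<?> secureSetting) {
        ConsistentSettingsService service = new ConsistentSettingsService(settings, clusterService, Arrays.asList(secureSetting));
        service.newHashPublisher().onMaster(); // publish salted hashes of the secure setting into cluster state
        return service.areAllConsistent();     // verify each node's local value against the published hashes
    }
}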
import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.not; import static org.hamcrest.Matchers.notNullValue; +import static org.opensearch.test.VersionUtils.randomVersion; public class StoreTests extends OpenSearchTestCase { @@ -1149,4 +1149,43 @@ public void testGetMetadataWithSegmentInfos() throws IOException { assertEquals(segmentInfos.getSegmentsFileName(), metadataSnapshot.getSegmentsFile().name()); store.close(); } + + public void testcleanupAndPreserveLatestCommitPoint() throws IOException { + final ShardId shardId = new ShardId("index", "_na_", 1); + Store store = new Store(shardId, INDEX_SETTINGS, StoreTests.newDirectory(random()), new DummyShardLock(shardId)); + IndexWriterConfig indexWriterConfig = newIndexWriterConfig(random(), new MockAnalyzer(random())).setCodec( + TestUtil.getDefaultCodec() + ); + indexWriterConfig.setIndexDeletionPolicy(NoDeletionPolicy.INSTANCE); + IndexWriter writer = new IndexWriter(store.directory(), indexWriterConfig); + int docs = 1 + random().nextInt(100); + writer.commit(); + Document doc = new Document(); + doc.add(new TextField("id", "" + docs++, random().nextBoolean() ? Field.Store.YES : Field.Store.NO)); + doc.add( + new TextField( + "body", + TestUtil.randomRealisticUnicodeString(random()), + random().nextBoolean() ? Field.Store.YES : Field.Store.NO + ) + ); + doc.add(new SortedDocValuesField("dv", new BytesRef(TestUtil.randomRealisticUnicodeString(random())))); + writer.addDocument(doc); + writer.commit(); + writer.close(); + + Store.MetadataSnapshot commitMetadata = store.getMetadata(); + + Store.MetadataSnapshot refreshMetadata = Store.MetadataSnapshot.EMPTY; + + store.cleanupAndPreserveLatestCommitPoint("test", refreshMetadata); + + // we want to ensure commitMetadata files are preserved after calling cleanup + for (String existingFile : store.directory().listAll()) { + assert (commitMetadata.contains(existingFile) == true); + } + + deleteContent(store.directory()); + IOUtils.close(store); + } } diff --git a/server/src/test/java/org/opensearch/index/translog/InternalTranslogManagerTests.java b/server/src/test/java/org/opensearch/index/translog/InternalTranslogManagerTests.java new file mode 100644 index 0000000000000..4db792b4a3fc2 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/translog/InternalTranslogManagerTests.java @@ -0,0 +1,279 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.translog; + +import org.opensearch.common.util.BigArrays; +import org.opensearch.common.util.concurrent.ReleasableLock; +import org.opensearch.index.engine.Engine; +import org.opensearch.index.mapper.ParsedDocument; +import org.opensearch.index.seqno.LocalCheckpointTracker; +import org.opensearch.index.seqno.SequenceNumbers; +import org.opensearch.index.translog.listener.TranslogEventListener; + +import java.io.IOException; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; +import java.util.concurrent.atomic.AtomicReference; +import java.util.concurrent.locks.ReentrantReadWriteLock; + +import static org.hamcrest.Matchers.equalTo; +import static org.opensearch.index.seqno.SequenceNumbers.NO_OPS_PERFORMED; +import static org.opensearch.index.translog.TranslogDeletionPolicies.createTranslogDeletionPolicy; + +public class InternalTranslogManagerTests extends TranslogManagerTestCase { + + public void testRecoveryFromTranslog() throws IOException { + final AtomicLong globalCheckpoint = new AtomicLong(SequenceNumbers.NO_OPS_PERFORMED); + final AtomicBoolean beginTranslogRecoveryInvoked = new AtomicBoolean(false); + final AtomicBoolean onTranslogRecoveryInvoked = new AtomicBoolean(false); + TranslogManager translogManager = null; + + LocalCheckpointTracker tracker = new LocalCheckpointTracker(NO_OPS_PERFORMED, NO_OPS_PERFORMED); + try { + translogManager = new InternalTranslogManager( + new TranslogConfig(shardId, primaryTranslogDir, INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE), + primaryTerm, + globalCheckpoint::get, + createTranslogDeletionPolicy(INDEX_SETTINGS), + shardId, + new ReleasableLock(new ReentrantReadWriteLock().readLock()), + () -> tracker, + translogUUID, + TranslogEventListener.NOOP_TRANSLOG_EVENT_LISTENER, + () -> {} + ); + final int docs = randomIntBetween(1, 100); + for (int i = 0; i < docs; i++) { + final String id = Integer.toString(i); + final ParsedDocument doc = testParsedDocument(id, null, testDocumentWithTextField(), SOURCE, null); + Engine.Index index = indexForDoc(doc); + Engine.IndexResult indexResult = new Engine.IndexResult(index.version(), index.primaryTerm(), i, true); + tracker.markSeqNoAsProcessed(i); + translogManager.getTranslog(false).add(new Translog.Index(index, indexResult)); + translogManager.rollTranslogGeneration(); + } + long maxSeqNo = tracker.getMaxSeqNo(); + assertEquals(maxSeqNo + 1, translogManager.getTranslogStats().getUncommittedOperations()); + assertEquals(maxSeqNo + 1, translogManager.getTranslogStats().estimatedNumberOfOperations()); + + translogManager.syncTranslog(); + translogManager.getTranslog(false).close(); + translogManager = new InternalTranslogManager( + new TranslogConfig(shardId, primaryTranslogDir, INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE), + primaryTerm, + globalCheckpoint::get, + createTranslogDeletionPolicy(INDEX_SETTINGS), + shardId, + new ReleasableLock(new ReentrantReadWriteLock().readLock()), + () -> new LocalCheckpointTracker(NO_OPS_PERFORMED, NO_OPS_PERFORMED), + translogUUID, + new TranslogEventListener() { + @Override + public void onAfterTranslogRecovery() { + onTranslogRecoveryInvoked.set(true); + } + + @Override + public void onBeginTranslogRecovery() { + beginTranslogRecoveryInvoked.set(true); + } + }, + () -> {} + ); + AtomicInteger opsRecovered = new AtomicInteger(); + int opsRecoveredFromTranslog = translogManager.recoverFromTranslog((snapshot) -> { + 
Translog.Operation operation; + while ((operation = snapshot.next()) != null) { + opsRecovered.incrementAndGet(); + } + return opsRecovered.get(); + }, NO_OPS_PERFORMED, Long.MAX_VALUE); + + assertEquals(maxSeqNo + 1, opsRecovered.get()); + assertEquals(maxSeqNo + 1, opsRecoveredFromTranslog); + + assertTrue(beginTranslogRecoveryInvoked.get()); + assertTrue(onTranslogRecoveryInvoked.get()); + + } finally { + translogManager.getTranslog(false).close(); + } + } + + public void testTranslogRollsGeneration() throws IOException { + final AtomicLong globalCheckpoint = new AtomicLong(SequenceNumbers.NO_OPS_PERFORMED); + TranslogManager translogManager = null; + LocalCheckpointTracker tracker = new LocalCheckpointTracker(NO_OPS_PERFORMED, NO_OPS_PERFORMED); + try { + translogManager = new InternalTranslogManager( + new TranslogConfig(shardId, primaryTranslogDir, INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE), + primaryTerm, + globalCheckpoint::get, + createTranslogDeletionPolicy(INDEX_SETTINGS), + shardId, + new ReleasableLock(new ReentrantReadWriteLock().readLock()), + () -> tracker, + translogUUID, + TranslogEventListener.NOOP_TRANSLOG_EVENT_LISTENER, + () -> {} + ); + final int docs = randomIntBetween(1, 100); + for (int i = 0; i < docs; i++) { + final String id = Integer.toString(i); + final ParsedDocument doc = testParsedDocument(id, null, testDocumentWithTextField(), SOURCE, null); + Engine.Index index = indexForDoc(doc); + Engine.IndexResult indexResult = new Engine.IndexResult(index.version(), index.primaryTerm(), i, true); + tracker.markSeqNoAsProcessed(i); + translogManager.getTranslog(false).add(new Translog.Index(index, indexResult)); + translogManager.rollTranslogGeneration(); + } + long maxSeqNo = tracker.getMaxSeqNo(); + assertEquals(maxSeqNo + 1, translogManager.getTranslogStats().getUncommittedOperations()); + assertEquals(maxSeqNo + 1, translogManager.getTranslogStats().estimatedNumberOfOperations()); + + translogManager.syncTranslog(); + translogManager.getTranslog(false).close(); + translogManager = new InternalTranslogManager( + new TranslogConfig(shardId, primaryTranslogDir, INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE), + primaryTerm, + globalCheckpoint::get, + createTranslogDeletionPolicy(INDEX_SETTINGS), + shardId, + new ReleasableLock(new ReentrantReadWriteLock().readLock()), + () -> new LocalCheckpointTracker(NO_OPS_PERFORMED, NO_OPS_PERFORMED), + translogUUID, + TranslogEventListener.NOOP_TRANSLOG_EVENT_LISTENER, + () -> {} + ); + AtomicInteger opsRecovered = new AtomicInteger(); + int opsRecoveredFromTranslog = translogManager.recoverFromTranslog((snapshot) -> { + Translog.Operation operation; + while ((operation = snapshot.next()) != null) { + opsRecovered.incrementAndGet(); + } + return opsRecovered.get(); + }, NO_OPS_PERFORMED, Long.MAX_VALUE); + + assertEquals(maxSeqNo + 1, opsRecovered.get()); + assertEquals(maxSeqNo + 1, opsRecoveredFromTranslog); + } finally { + translogManager.getTranslog(false).close(); + } + } + + public void testTrimOperationsFromTranslog() throws IOException { + final AtomicLong globalCheckpoint = new AtomicLong(SequenceNumbers.NO_OPS_PERFORMED); + TranslogManager translogManager = null; + LocalCheckpointTracker tracker = new LocalCheckpointTracker(NO_OPS_PERFORMED, NO_OPS_PERFORMED); + try { + translogManager = new InternalTranslogManager( + new TranslogConfig(shardId, primaryTranslogDir, INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE), + primaryTerm, + globalCheckpoint::get, + createTranslogDeletionPolicy(INDEX_SETTINGS), + 
shardId, + new ReleasableLock(new ReentrantReadWriteLock().readLock()), + () -> tracker, + translogUUID, + TranslogEventListener.NOOP_TRANSLOG_EVENT_LISTENER, + () -> {} + ); + final int docs = randomIntBetween(1, 100); + for (int i = 0; i < docs; i++) { + final String id = Integer.toString(i); + final ParsedDocument doc = testParsedDocument(id, null, testDocumentWithTextField(), SOURCE, null); + Engine.Index index = indexForDoc(doc); + Engine.IndexResult indexResult = new Engine.IndexResult(index.version(), index.primaryTerm(), i, true); + tracker.markSeqNoAsProcessed(i); + translogManager.getTranslog(false).add(new Translog.Index(index, indexResult)); + } + long maxSeqNo = tracker.getMaxSeqNo(); + assertEquals(maxSeqNo + 1, translogManager.getTranslogStats().getUncommittedOperations()); + assertEquals(maxSeqNo + 1, translogManager.getTranslogStats().estimatedNumberOfOperations()); + + primaryTerm.set(randomLongBetween(primaryTerm.get(), Long.MAX_VALUE)); + translogManager.rollTranslogGeneration(); + translogManager.trimOperationsFromTranslog(primaryTerm.get(), NO_OPS_PERFORMED); // trim everything in translog + + translogManager.getTranslog(false).close(); + translogManager = new InternalTranslogManager( + new TranslogConfig(shardId, primaryTranslogDir, INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE), + primaryTerm, + globalCheckpoint::get, + createTranslogDeletionPolicy(INDEX_SETTINGS), + shardId, + new ReleasableLock(new ReentrantReadWriteLock().readLock()), + () -> new LocalCheckpointTracker(NO_OPS_PERFORMED, NO_OPS_PERFORMED), + translogUUID, + TranslogEventListener.NOOP_TRANSLOG_EVENT_LISTENER, + () -> {} + ); + AtomicInteger opsRecovered = new AtomicInteger(); + int opsRecoveredFromTranslog = translogManager.recoverFromTranslog((snapshot) -> { + Translog.Operation operation; + while ((operation = snapshot.next()) != null) { + opsRecovered.incrementAndGet(); + } + return opsRecovered.get(); + }, NO_OPS_PERFORMED, Long.MAX_VALUE); + + assertEquals(0, opsRecovered.get()); + assertEquals(0, opsRecoveredFromTranslog); + } finally { + translogManager.getTranslog(false).close(); + } + } + + public void testTranslogSync() throws IOException { + final AtomicLong globalCheckpoint = new AtomicLong(SequenceNumbers.NO_OPS_PERFORMED); + AtomicBoolean syncListenerInvoked = new AtomicBoolean(); + TranslogManager translogManager = null; + final AtomicInteger maxSeqNo = new AtomicInteger(randomIntBetween(0, 128)); + final AtomicInteger localCheckpoint = new AtomicInteger(randomIntBetween(0, maxSeqNo.get())); + try { + ParsedDocument doc = testParsedDocument("1", null, testDocumentWithTextField(), B_1, null); + AtomicReference translogManagerAtomicReference = new AtomicReference<>(); + translogManager = new InternalTranslogManager( + new TranslogConfig(shardId, primaryTranslogDir, INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE), + primaryTerm, + globalCheckpoint::get, + createTranslogDeletionPolicy(INDEX_SETTINGS), + shardId, + new ReleasableLock(new ReentrantReadWriteLock().readLock()), + () -> new LocalCheckpointTracker(maxSeqNo.get(), localCheckpoint.get()), + translogUUID, + new TranslogEventListener() { + @Override + public void onAfterTranslogSync() { + try { + translogManagerAtomicReference.get().getTranslog(false).trimUnreferencedReaders(); + syncListenerInvoked.set(true); + } catch (IOException ex) { + fail("Failed due to " + ex); + } + } + }, + () -> {} + ); + translogManagerAtomicReference.set(translogManager); + Engine.Index index = indexForDoc(doc); + Engine.IndexResult indexResult = 
new Engine.IndexResult(index.version(), index.primaryTerm(), 1, false); + translogManager.getTranslog(false).add(new Translog.Index(index, indexResult)); + + translogManager.syncTranslog(); + + assertThat(translogManager.getTranslog(true).currentFileGeneration(), equalTo(2L)); + assertThat(translogManager.getTranslog(true).getMinFileGeneration(), equalTo(2L)); + assertTrue(syncListenerInvoked.get()); + } finally { + translogManager.getTranslog(false).close(); + } + } +} diff --git a/server/src/test/java/org/opensearch/index/translog/TranslogManagerTestCase.java b/server/src/test/java/org/opensearch/index/translog/TranslogManagerTestCase.java new file mode 100644 index 0000000000000..25867cdb666ad --- /dev/null +++ b/server/src/test/java/org/opensearch/index/translog/TranslogManagerTestCase.java @@ -0,0 +1,217 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.translog; + +import org.apache.lucene.document.Field; +import org.apache.lucene.document.NumericDocValuesField; +import org.apache.lucene.document.StoredField; +import org.apache.lucene.document.TextField; +import org.apache.lucene.index.Term; +import org.apache.lucene.util.BytesRef; +import org.junit.After; +import org.junit.Before; +import org.opensearch.Version; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.cluster.routing.AllocationId; +import org.opensearch.common.bytes.BytesArray; +import org.opensearch.common.bytes.BytesReference; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.BigArrays; +import org.opensearch.common.xcontent.XContentType; +import org.opensearch.core.internal.io.IOUtils; +import org.opensearch.index.Index; +import org.opensearch.index.IndexSettings; +import org.opensearch.index.engine.Engine; +import org.opensearch.index.engine.EngineConfig; +import org.opensearch.index.mapper.ParseContext; +import org.opensearch.index.mapper.Mapping; +import org.opensearch.index.mapper.ParsedDocument; +import org.opensearch.index.mapper.SourceFieldMapper; +import org.opensearch.index.mapper.SeqNoFieldMapper; +import org.opensearch.index.mapper.Uid; +import org.opensearch.index.mapper.IdFieldMapper; +import org.opensearch.index.seqno.SequenceNumbers; +import org.opensearch.index.shard.ShardId; +import org.opensearch.test.IndexSettingsModule; +import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.threadpool.TestThreadPool; +import org.opensearch.threadpool.ThreadPool; + +import java.io.IOException; +import java.nio.charset.Charset; +import java.nio.file.Path; +import java.util.List; +import java.util.concurrent.atomic.AtomicLong; +import java.util.function.LongSupplier; + +import static org.opensearch.index.translog.TranslogDeletionPolicies.createTranslogDeletionPolicy; + +public abstract class TranslogManagerTestCase extends OpenSearchTestCase { + + protected final ShardId shardId = new ShardId(new Index("index", "_na_"), 0); + protected final AllocationId allocationId = AllocationId.newInitializing(); + protected static final IndexSettings INDEX_SETTINGS = IndexSettingsModule.newIndexSettings("index", Settings.EMPTY); + private AtomicLong globalCheckpoint; + protected ThreadPool threadPool; + protected final PrimaryTermSupplier primaryTerm = new PrimaryTermSupplier(1L); + + protected IndexSettings defaultSettings; + protected String codecName; + 
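// A small reusable listener sketch (hypothetical, not part of this patch) for tests that only
// need to observe translog syncs. It assumes TranslogEventListener provides no-op defaults for
// the remaining hooks, which the single-method anonymous listeners above suggest; the interface
// itself lives in the listener subpackage used elsewhere in this change.
protected static final class CountingTranslogEventListener implements TranslogEventListener {
    final AtomicLong syncCount = new AtomicLong();

    @Override
    public void onAfterTranslogSync() {
        // only count the callback; a listener like the one in testTranslogSync would trim
        // unreferenced translog readers at this point
        syncCount.incrementAndGet();
    }
}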
protected Path primaryTranslogDir; + protected String translogUUID; + + protected static final BytesArray SOURCE = bytesArray("{}"); + protected static final BytesReference B_1 = new BytesArray(new byte[] { 1 }); + + protected Translog createTranslog(LongSupplier primaryTermSupplier) throws IOException { + return createTranslog(primaryTranslogDir, primaryTermSupplier); + } + + protected Translog createTranslog(Path translogPath, LongSupplier primaryTermSupplier) throws IOException { + TranslogConfig translogConfig = new TranslogConfig(shardId, translogPath, INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE); + String translogUUID = Translog.createEmptyTranslog( + translogPath, + SequenceNumbers.NO_OPS_PERFORMED, + shardId, + primaryTermSupplier.getAsLong() + ); + return new Translog( + translogConfig, + translogUUID, + createTranslogDeletionPolicy(INDEX_SETTINGS), + () -> SequenceNumbers.NO_OPS_PERFORMED, + primaryTermSupplier, + seqNo -> {} + ); + } + + private String create(Path path) throws IOException { + globalCheckpoint = new AtomicLong(SequenceNumbers.NO_OPS_PERFORMED); + return Translog.createEmptyTranslog(path, SequenceNumbers.NO_OPS_PERFORMED, shardId, primaryTerm.get()); + } + + @Override + @Before + public void setUp() throws Exception { + super.setUp(); + primaryTerm.set(randomIntBetween(1, 100)); + defaultSettings = IndexSettingsModule.newIndexSettings("test", indexSettings()); + threadPool = new TestThreadPool(getClass().getName()); + primaryTranslogDir = createTempDir("translog-primary"); + translogUUID = create(primaryTranslogDir); + } + + @Override + @After + public void tearDown() throws Exception { + super.tearDown(); + IOUtils.close(() -> terminate(threadPool)); + } + + protected Settings indexSettings() { + // TODO randomize more settings + return Settings.builder() + .put(IndexSettings.INDEX_GC_DELETES_SETTING.getKey(), "1h") // make sure this doesn't kick in on us + .put(EngineConfig.INDEX_CODEC_SETTING.getKey(), codecName) + .put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT) + .put( + IndexSettings.MAX_REFRESH_LISTENERS_PER_SHARD.getKey(), + between(10, 10 * IndexSettings.MAX_REFRESH_LISTENERS_PER_SHARD.get(Settings.EMPTY)) + ) + .put(IndexSettings.INDEX_SOFT_DELETES_RETENTION_OPERATIONS_SETTING.getKey(), between(0, 1000)) + .build(); + } + + public static final class PrimaryTermSupplier implements LongSupplier { + private final AtomicLong term; + + PrimaryTermSupplier(long initialTerm) { + this.term = new AtomicLong(initialTerm); + } + + public long get() { + return term.get(); + } + + public void set(long newTerm) { + this.term.set(newTerm); + } + + @Override + public long getAsLong() { + return get(); + } + } + + protected static ParsedDocument testParsedDocument( + String id, + String routing, + ParseContext.Document document, + BytesReference source, + Mapping mappingUpdate + ) { + return testParsedDocument(id, routing, document, source, mappingUpdate, false); + } + + protected static ParsedDocument testParsedDocument( + String id, + String routing, + ParseContext.Document document, + BytesReference source, + Mapping mappingUpdate, + boolean recoverySource + ) { + Field uidField = new Field("_id", Uid.encodeId(id), IdFieldMapper.Defaults.FIELD_TYPE); + Field versionField = new NumericDocValuesField("_version", 0); + SeqNoFieldMapper.SequenceIDFields seqID = SeqNoFieldMapper.SequenceIDFields.emptySeqID(); + document.add(uidField); + document.add(versionField); + document.add(seqID.seqNo); + document.add(seqID.seqNoDocValue); + 
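// Hypothetical convenience helper (not in this patch) showing how the builders in this class are
// combined by InternalTranslogManagerTests: build a ParsedDocument, wrap it in an Engine.Index,
// and pair it with an Engine.IndexResult to form a Translog.Index operation. Every call used here
// already appears in this class or in the tests above.
protected Translog.Index translogIndexOpFor(String id, long seqNo) {
    ParsedDocument doc = testParsedDocument(id, null, testDocumentWithTextField(), SOURCE, null);
    Engine.Index index = indexForDoc(doc);
    // version and primaryTerm come from the Engine.Index; the trailing 'true' marks the document as created
    Engine.IndexResult result = new Engine.IndexResult(index.version(), index.primaryTerm(), seqNo, true);
    return new Translog.Index(index, result);
}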
document.add(seqID.primaryTerm); + BytesRef ref = source.toBytesRef(); + if (recoverySource) { + document.add(new StoredField(SourceFieldMapper.RECOVERY_SOURCE_NAME, ref.bytes, ref.offset, ref.length)); + document.add(new NumericDocValuesField(SourceFieldMapper.RECOVERY_SOURCE_NAME, 1)); + } else { + document.add(new StoredField(SourceFieldMapper.NAME, ref.bytes, ref.offset, ref.length)); + } + return new ParsedDocument(versionField, seqID, id, routing, List.of(document), source, XContentType.JSON, mappingUpdate); + } + + protected static ParseContext.Document testDocumentWithTextField() { + return testDocumentWithTextField("test"); + } + + protected static ParseContext.Document testDocumentWithTextField(String value) { + ParseContext.Document document = testDocument(); + document.add(new TextField("value", value, Field.Store.YES)); + return document; + } + + protected static ParseContext.Document testDocument() { + return new ParseContext.Document(); + } + + protected Engine.Index indexForDoc(ParsedDocument doc) { + return new Engine.Index(newUid(doc), primaryTerm.get(), doc); + } + + public static Term newUid(String id) { + return new Term("_id", Uid.encodeId(id)); + } + + public static Term newUid(ParsedDocument doc) { + return newUid(doc.id()); + } + + protected static BytesArray bytesArray(String string) { + return new BytesArray(string.getBytes(Charset.defaultCharset())); + } +} diff --git a/server/src/test/java/org/opensearch/index/translog/listener/TranslogListenerTests.java b/server/src/test/java/org/opensearch/index/translog/listener/TranslogListenerTests.java new file mode 100644 index 0000000000000..79c243772b252 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/translog/listener/TranslogListenerTests.java @@ -0,0 +1,126 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.translog.listener; + +import org.apache.lucene.store.AlreadyClosedException; +import org.opensearch.test.OpenSearchTestCase; + +import java.lang.reflect.Proxy; +import java.util.*; +import java.util.concurrent.atomic.AtomicInteger; + +public class TranslogListenerTests extends OpenSearchTestCase { + + public void testCompositeTranslogEventListener() { + AtomicInteger onTranslogSyncInvoked = new AtomicInteger(); + AtomicInteger onTranslogRecoveryInvoked = new AtomicInteger(); + AtomicInteger onBeginTranslogRecoveryInvoked = new AtomicInteger(); + AtomicInteger onFailureInvoked = new AtomicInteger(); + AtomicInteger onTragicFailureInvoked = new AtomicInteger(); + + TranslogEventListener listener = new TranslogEventListener() { + @Override + public void onAfterTranslogSync() { + onTranslogSyncInvoked.incrementAndGet(); + } + + @Override + public void onAfterTranslogRecovery() { + onTranslogRecoveryInvoked.incrementAndGet(); + } + + @Override + public void onBeginTranslogRecovery() { + onBeginTranslogRecoveryInvoked.incrementAndGet(); + } + + @Override + public void onFailure(String reason, Exception ex) { + onFailureInvoked.incrementAndGet(); + } + + @Override + public void onTragicFailure(AlreadyClosedException ex) { + onTragicFailureInvoked.incrementAndGet(); + } + }; + + final List translogEventListeners = new ArrayList<>(Arrays.asList(listener, listener)); + Collections.shuffle(translogEventListeners, random()); + TranslogEventListener compositeListener = new CompositeTranslogEventListener(translogEventListeners); + compositeListener.onAfterTranslogRecovery(); + compositeListener.onAfterTranslogSync(); + compositeListener.onBeginTranslogRecovery(); + compositeListener.onFailure("reason", new RuntimeException("reason")); + compositeListener.onTragicFailure(new AlreadyClosedException("reason")); + + assertEquals(2, onBeginTranslogRecoveryInvoked.get()); + assertEquals(2, onTranslogRecoveryInvoked.get()); + assertEquals(2, onTranslogSyncInvoked.get()); + assertEquals(2, onFailureInvoked.get()); + assertEquals(2, onTragicFailureInvoked.get()); + } + + public void testCompositeTranslogEventListenerOnExceptions() { + AtomicInteger onTranslogSyncInvoked = new AtomicInteger(); + AtomicInteger onTranslogRecoveryInvoked = new AtomicInteger(); + AtomicInteger onBeginTranslogRecoveryInvoked = new AtomicInteger(); + AtomicInteger onFailureInvoked = new AtomicInteger(); + AtomicInteger onTragicFailureInvoked = new AtomicInteger(); + + TranslogEventListener listener = new TranslogEventListener() { + @Override + public void onAfterTranslogSync() { + onTranslogSyncInvoked.incrementAndGet(); + } + + @Override + public void onAfterTranslogRecovery() { + onTranslogRecoveryInvoked.incrementAndGet(); + } + + @Override + public void onBeginTranslogRecovery() { + onBeginTranslogRecoveryInvoked.incrementAndGet(); + } + + @Override + public void onFailure(String reason, Exception ex) { + onFailureInvoked.incrementAndGet(); + } + + @Override + public void onTragicFailure(AlreadyClosedException ex) { + onTragicFailureInvoked.incrementAndGet(); + } + }; + + TranslogEventListener throwingListener = (TranslogEventListener) Proxy.newProxyInstance( + TranslogEventListener.class.getClassLoader(), + new Class[] { TranslogEventListener.class }, + (a, b, c) -> { throw new RuntimeException(); } + ); + + final List translogEventListeners = new LinkedList<>(Arrays.asList(listener, throwingListener, listener)); + Collections.shuffle(translogEventListeners, random()); + TranslogEventListener 
compositeListener = new CompositeTranslogEventListener(translogEventListeners); + expectThrows(RuntimeException.class, () -> compositeListener.onAfterTranslogRecovery()); + expectThrows(RuntimeException.class, () -> compositeListener.onAfterTranslogSync()); + expectThrows(RuntimeException.class, () -> compositeListener.onBeginTranslogRecovery()); + expectThrows(RuntimeException.class, () -> compositeListener.onFailure("reason", new RuntimeException("reason"))); + expectThrows(RuntimeException.class, () -> compositeListener.onTragicFailure(new AlreadyClosedException("reason"))); + + assertEquals(2, onBeginTranslogRecoveryInvoked.get()); + assertEquals(2, onTranslogRecoveryInvoked.get()); + assertEquals(2, onTranslogSyncInvoked.get()); + assertEquals(2, onFailureInvoked.get()); + assertEquals(2, onTragicFailureInvoked.get()); + + } +} diff --git a/server/src/test/java/org/opensearch/indices/recovery/RecoverySourceHandlerTests.java b/server/src/test/java/org/opensearch/indices/recovery/RecoverySourceHandlerTests.java index fc5c429d74b16..2b5550b71a627 100644 --- a/server/src/test/java/org/opensearch/indices/recovery/RecoverySourceHandlerTests.java +++ b/server/src/test/java/org/opensearch/indices/recovery/RecoverySourceHandlerTests.java @@ -94,6 +94,7 @@ import org.opensearch.index.store.Store; import org.opensearch.index.store.StoreFileMetadata; import org.opensearch.index.translog.Translog; +import org.opensearch.indices.RunUnderPrimaryPermit; import org.opensearch.indices.replication.common.ReplicationLuceneIndex; import org.opensearch.test.CorruptionUtils; import org.opensearch.test.DummyShardLock; @@ -544,21 +545,22 @@ public void writeFileChunk( }); } }; + IndexShard mockShard = mock(IndexShard.class); + when(mockShard.shardId()).thenReturn(new ShardId("testIndex", "testUUID", 0)); + doAnswer(invocation -> { + assertFalse(failedEngine.get()); + failedEngine.set(true); + return null; + }).when(mockShard).failShard(any(), any()); RecoverySourceHandler handler = new RecoverySourceHandler( - null, + mockShard, new AsyncRecoveryTarget(target, recoveryExecutor), threadPool, request, Math.toIntExact(recoverySettings.getChunkSize().getBytes()), between(1, 8), between(1, 8) - ) { - @Override - protected void failEngine(IOException cause) { - assertFalse(failedEngine.get()); - failedEngine.set(true); - } - }; + ); SetOnce sendFilesError = new SetOnce<>(); CountDownLatch latch = new CountDownLatch(1); handler.sendFiles( @@ -570,6 +572,7 @@ protected void failEngine(IOException cause) { latch.await(); assertThat(sendFilesError.get(), instanceOf(IOException.class)); assertNotNull(ExceptionsHelper.unwrapCorruption(sendFilesError.get())); + failedEngine.get(); assertTrue(failedEngine.get()); // ensure all chunk requests have been completed; otherwise some files on the target are left open. 
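// Read together, the two composite-listener tests earlier in TranslogListenerTests imply a
// specific contract: every delegate is still notified even when one of them throws, and the
// failure then propagates to the caller. A compact, self-contained illustration of that
// behaviour (the counting listener here is an inline stand-in, not a class from this patch):
AtomicInteger notified = new AtomicInteger();
TranslogEventListener counting = new TranslogEventListener() {
    @Override
    public void onAfterTranslogSync() {
        notified.incrementAndGet();
    }
};
TranslogEventListener failing = new TranslogEventListener() {
    @Override
    public void onAfterTranslogSync() {
        throw new RuntimeException("boom");
    }
};
TranslogEventListener composite = new CompositeTranslogEventListener(Arrays.asList(counting, failing, counting));
expectThrows(RuntimeException.class, composite::onAfterTranslogSync);
assertEquals(2, notified.get()); // both surviving delegates ran despite the failure between them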
IOUtils.close(() -> terminate(threadPool), () -> threadPool = null); @@ -617,21 +620,22 @@ public void writeFileChunk( } } }; + IndexShard mockShard = mock(IndexShard.class); + when(mockShard.shardId()).thenReturn(new ShardId("testIndex", "testUUID", 0)); + doAnswer(invocation -> { + assertFalse(failedEngine.get()); + failedEngine.set(true); + return null; + }).when(mockShard).failShard(any(), any()); RecoverySourceHandler handler = new RecoverySourceHandler( - null, + mockShard, new AsyncRecoveryTarget(target, recoveryExecutor), threadPool, request, Math.toIntExact(recoverySettings.getChunkSize().getBytes()), between(1, 10), between(1, 4) - ) { - @Override - protected void failEngine(IOException cause) { - assertFalse(failedEngine.get()); - failedEngine.set(true); - } - }; + ); PlainActionFuture sendFilesFuture = new PlainActionFuture<>(); handler.sendFiles(store, metas.toArray(new StoreFileMetadata[0]), () -> 0, sendFilesFuture); Exception ex = expectThrows(Exception.class, sendFilesFuture::actionGet); @@ -747,7 +751,7 @@ public void testCancellationsDoesNotLeakPrimaryPermits() throws Exception { Thread cancelingThread = new Thread(() -> cancellableThreads.cancel("test")); cancelingThread.start(); try { - RecoverySourceHandler.runUnderPrimaryPermit(() -> {}, "test", shard, cancellableThreads, logger); + RunUnderPrimaryPermit.run(() -> {}, "test", shard, cancellableThreads, logger); } catch (CancellableThreads.ExecutionCancelledException e) { // expected. } diff --git a/server/src/test/java/org/opensearch/indices/replication/OngoingSegmentReplicationsTests.java b/server/src/test/java/org/opensearch/indices/replication/OngoingSegmentReplicationsTests.java new file mode 100644 index 0000000000000..260f6a13b5010 --- /dev/null +++ b/server/src/test/java/org/opensearch/indices/replication/OngoingSegmentReplicationsTests.java @@ -0,0 +1,231 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.indices.replication; + +import org.junit.Assert; +import org.opensearch.OpenSearchException; +import org.opensearch.action.ActionListener; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Settings; +import org.opensearch.index.IndexService; +import org.opensearch.index.shard.IndexShard; +import org.opensearch.index.shard.IndexShardTestCase; +import org.opensearch.index.shard.ShardId; +import org.opensearch.index.store.StoreFileMetadata; +import org.opensearch.indices.IndicesService; +import org.opensearch.indices.recovery.FileChunkWriter; +import org.opensearch.indices.recovery.RecoverySettings; +import org.opensearch.indices.replication.checkpoint.ReplicationCheckpoint; +import org.opensearch.indices.replication.common.CopyState; +import org.opensearch.transport.TransportService; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; + +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class OngoingSegmentReplicationsTests extends IndexShardTestCase { + + private final IndicesService mockIndicesService = mock(IndicesService.class); + private ReplicationCheckpoint testCheckpoint; + private DiscoveryNode primaryDiscoveryNode; + private DiscoveryNode replicaDiscoveryNode; + private IndexShard primary; + private IndexShard replica; + + private GetSegmentFilesRequest getSegmentFilesRequest; + + final Settings settings = Settings.builder().put("node.name", SegmentReplicationTargetServiceTests.class.getSimpleName()).build(); + final ClusterSettings clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + final RecoverySettings recoverySettings = new RecoverySettings(settings, clusterSettings); + + @Override + public void setUp() throws Exception { + super.setUp(); + primary = newStartedShard(true); + replica = newShard(primary.shardId(), false); + recoverReplica(replica, primary, true); + replicaDiscoveryNode = replica.recoveryState().getTargetNode(); + primaryDiscoveryNode = replica.recoveryState().getSourceNode(); + + ShardId testShardId = primary.shardId(); + + // This mirrors the creation of the ReplicationCheckpoint inside CopyState + testCheckpoint = new ReplicationCheckpoint( + testShardId, + primary.getOperationPrimaryTerm(), + 0L, + primary.getProcessedLocalCheckpoint(), + 0L + ); + IndexService mockIndexService = mock(IndexService.class); + when(mockIndicesService.indexService(testShardId.getIndex())).thenReturn(mockIndexService); + when(mockIndexService.getShard(testShardId.id())).thenReturn(primary); + + TransportService transportService = mock(TransportService.class); + when(transportService.getThreadPool()).thenReturn(threadPool); + } + + @Override + public void tearDown() throws Exception { + closeShards(primary, replica); + super.tearDown(); + } + + public void testPrepareAndSendSegments() throws IOException { + OngoingSegmentReplications replications = spy(new OngoingSegmentReplications(mockIndicesService, recoverySettings)); + final CheckpointInfoRequest request = new CheckpointInfoRequest( + 1L, + replica.routingEntry().allocationId().getId(), + replicaDiscoveryNode, + testCheckpoint + ); + final FileChunkWriter 
segmentSegmentFileChunkWriter = (fileMetadata, position, content, lastChunk, totalTranslogOps, listener) -> { + listener.onResponse(null); + }; + final CopyState copyState = replications.prepareForReplication(request, segmentSegmentFileChunkWriter); + assertTrue(replications.isInCopyStateMap(request.getCheckpoint())); + assertEquals(1, replications.size()); + assertEquals(1, copyState.refCount()); + + getSegmentFilesRequest = new GetSegmentFilesRequest( + 1L, + replica.routingEntry().allocationId().getId(), + replicaDiscoveryNode, + new ArrayList<>(copyState.getMetadataSnapshot().asMap().values()), + testCheckpoint + ); + + final Collection expectedFiles = List.copyOf(primary.store().getMetadata().asMap().values()); + replications.startSegmentCopy(getSegmentFilesRequest, new ActionListener<>() { + @Override + public void onResponse(GetSegmentFilesResponse getSegmentFilesResponse) { + assertEquals(1, getSegmentFilesResponse.files.size()); + assertEquals(1, expectedFiles.size()); + assertTrue(expectedFiles.stream().findFirst().get().isSame(getSegmentFilesResponse.files.get(0))); + assertEquals(0, copyState.refCount()); + assertFalse(replications.isInCopyStateMap(request.getCheckpoint())); + assertEquals(0, replications.size()); + } + + @Override + public void onFailure(Exception e) { + logger.error("Unexpected failure", e); + Assert.fail(); + } + }); + } + + public void testCancelReplication() throws IOException { + OngoingSegmentReplications replications = new OngoingSegmentReplications(mockIndicesService, recoverySettings); + final CheckpointInfoRequest request = new CheckpointInfoRequest( + 1L, + replica.routingEntry().allocationId().getId(), + primaryDiscoveryNode, + testCheckpoint + ); + final FileChunkWriter segmentSegmentFileChunkWriter = (fileMetadata, position, content, lastChunk, totalTranslogOps, listener) -> { + // this shouldn't be called in this test. + Assert.fail(); + }; + final CopyState copyState = replications.prepareForReplication(request, segmentSegmentFileChunkWriter); + assertEquals(1, replications.size()); + assertEquals(1, replications.cachedCopyStateSize()); + + replications.cancelReplication(primaryDiscoveryNode); + assertEquals(0, copyState.refCount()); + assertEquals(0, replications.size()); + assertEquals(0, replications.cachedCopyStateSize()); + } + + public void testMultipleReplicasUseSameCheckpoint() throws IOException { + OngoingSegmentReplications replications = new OngoingSegmentReplications(mockIndicesService, recoverySettings); + final CheckpointInfoRequest request = new CheckpointInfoRequest( + 1L, + replica.routingEntry().allocationId().getId(), + primaryDiscoveryNode, + testCheckpoint + ); + final FileChunkWriter segmentSegmentFileChunkWriter = (fileMetadata, position, content, lastChunk, totalTranslogOps, listener) -> { + // this shouldn't be called in this test. 
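// The assertions in the surrounding tests sketch the CopyState lifecycle that
// OngoingSegmentReplications maintains (summarised from this test class, not from an
// independent spec): prepareForReplication caches one CopyState per checkpoint and bumps its
// refCount for each requesting replica; startSegmentCopy sends the files and releases that
// replica's reference; cancelReplication releases it without copying. Once the refCount drops
// to zero the entry disappears from the cache, which is what the cachedCopyStateSize() and
// isInCopyStateMap() checks verify.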
+ Assert.fail(); + }; + + final CopyState copyState = replications.prepareForReplication(request, segmentSegmentFileChunkWriter); + assertEquals(1, copyState.refCount()); + + final CheckpointInfoRequest secondRequest = new CheckpointInfoRequest( + 1L, + replica.routingEntry().allocationId().getId(), + replicaDiscoveryNode, + testCheckpoint + ); + replications.prepareForReplication(secondRequest, segmentSegmentFileChunkWriter); + + assertEquals(2, copyState.refCount()); + assertEquals(2, replications.size()); + assertEquals(1, replications.cachedCopyStateSize()); + + replications.cancelReplication(primaryDiscoveryNode); + replications.cancelReplication(replicaDiscoveryNode); + assertEquals(0, copyState.refCount()); + assertEquals(0, replications.size()); + assertEquals(0, replications.cachedCopyStateSize()); + } + + public void testStartCopyWithoutPrepareStep() { + OngoingSegmentReplications replications = new OngoingSegmentReplications(mockIndicesService, recoverySettings); + final ActionListener listener = spy(new ActionListener<>() { + @Override + public void onResponse(GetSegmentFilesResponse getSegmentFilesResponse) { + assertTrue(getSegmentFilesResponse.files.isEmpty()); + } + + @Override + public void onFailure(Exception e) { + Assert.fail(); + } + }); + + getSegmentFilesRequest = new GetSegmentFilesRequest( + 1L, + replica.routingEntry().allocationId().getId(), + replicaDiscoveryNode, + Collections.emptyList(), + testCheckpoint + ); + + replications.startSegmentCopy(getSegmentFilesRequest, listener); + verify(listener, times(1)).onResponse(any()); + } + + public void testShardAlreadyReplicatingToNode() throws IOException { + OngoingSegmentReplications replications = spy(new OngoingSegmentReplications(mockIndicesService, recoverySettings)); + final CheckpointInfoRequest request = new CheckpointInfoRequest( + 1L, + replica.routingEntry().allocationId().getId(), + replicaDiscoveryNode, + testCheckpoint + ); + final FileChunkWriter segmentSegmentFileChunkWriter = (fileMetadata, position, content, lastChunk, totalTranslogOps, listener) -> { + listener.onResponse(null); + }; + replications.prepareForReplication(request, segmentSegmentFileChunkWriter); + assertThrows(OpenSearchException.class, () -> { replications.prepareForReplication(request, segmentSegmentFileChunkWriter); }); + } +} diff --git a/server/src/test/java/org/opensearch/indices/replication/SegmentFileTransferHandlerTests.java b/server/src/test/java/org/opensearch/indices/replication/SegmentFileTransferHandlerTests.java new file mode 100644 index 0000000000000..5fd8bc1e74625 --- /dev/null +++ b/server/src/test/java/org/opensearch/indices/replication/SegmentFileTransferHandlerTests.java @@ -0,0 +1,251 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.indices.replication; + +import org.apache.lucene.index.CorruptIndexException; +import org.apache.lucene.index.IndexFileNames; +import org.junit.Assert; +import org.opensearch.Version; +import org.opensearch.action.ActionListener; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.common.bytes.BytesReference; +import org.opensearch.common.util.CancellableThreads; +import org.opensearch.core.internal.io.IOUtils; +import org.opensearch.index.shard.IndexShard; +import org.opensearch.index.shard.IndexShardTestCase; +import org.opensearch.index.store.Store; +import org.opensearch.index.store.StoreFileMetadata; +import org.opensearch.indices.recovery.FileChunkWriter; +import org.opensearch.indices.recovery.MultiChunkTransfer; + +import java.io.IOException; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import java.util.function.IntSupplier; + +import static java.util.Collections.emptyMap; +import static java.util.Collections.emptySet; +import static org.mockito.Mockito.*; + +public class SegmentFileTransferHandlerTests extends IndexShardTestCase { + + private IndexShard shard; + private StoreFileMetadata[] filesToSend; + private final DiscoveryNode targetNode = new DiscoveryNode( + "foo", + buildNewFakeTransportAddress(), + emptyMap(), + emptySet(), + Version.CURRENT + ); + + final int fileChunkSizeInBytes = 5000; + final int maxConcurrentFileChunks = 1; + private CancellableThreads cancellableThreads; + final IntSupplier translogOps = () -> 0; + + @Override + public void setUp() throws Exception { + super.setUp(); + cancellableThreads = new CancellableThreads(); + shard = spy(newStartedShard(true)); + filesToSend = getFilestoSend(shard); + // we should only have a Segments_N file at this point. + assertEquals(1, filesToSend.length); + } + + private StoreFileMetadata[] getFilestoSend(IndexShard shard) throws IOException { + final Store.MetadataSnapshot metadata = shard.store().getMetadata(); + return metadata.asMap().values().toArray(StoreFileMetadata[]::new); + } + + @Override + public void tearDown() throws Exception { + closeShards(shard); + super.tearDown(); + } + + public void testSendFiles_invokesChunkWriter() throws IOException, InterruptedException { + // use counDownLatch and countDown when our chunkWriter is invoked. 
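// Sketch of an alternative FileChunkWriter for tests (hypothetical, not used by this patch):
// instead of spying and verifying, record each file name that gets written. Only the
// six-argument functional shape visible in the lambdas of this patch is assumed.
List<String> writtenFiles = new ArrayList<>();
FileChunkWriter recordingWriter = (fileMetadata, position, content, lastChunk, totalTranslogOps, listener) -> {
    writtenFiles.add(fileMetadata.name());  // StoreFileMetadata#name() is used elsewhere in these tests
    listener.onResponse(null);              // acknowledge the chunk so the transfer can continue
};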
+ final CountDownLatch countDownLatch = new CountDownLatch(1); + final FileChunkWriter chunkWriter = spy(new FileChunkWriter() { + @Override + public void writeFileChunk( + StoreFileMetadata fileMetadata, + long position, + BytesReference content, + boolean lastChunk, + int totalTranslogOps, + ActionListener listener + ) { + assertTrue(filesToSend[0].isSame(fileMetadata)); + assertTrue(lastChunk); + countDownLatch.countDown(); + } + }); + + SegmentFileTransferHandler handler = new SegmentFileTransferHandler( + shard, + targetNode, + chunkWriter, + logger, + shard.getThreadPool(), + cancellableThreads, + fileChunkSizeInBytes, + maxConcurrentFileChunks + ); + final MultiChunkTransfer transfer = handler.createTransfer( + shard.store(), + filesToSend, + translogOps, + mock(ActionListener.class) + ); + + // start the transfer + transfer.start(); + countDownLatch.await(5, TimeUnit.SECONDS); + verify(chunkWriter, times(1)).writeFileChunk(any(), anyLong(), any(), anyBoolean(), anyInt(), any()); + IOUtils.close(transfer); + } + + public void testSendFiles_cancelThreads_beforeStart() throws IOException, InterruptedException { + final FileChunkWriter chunkWriter = spy(new FileChunkWriter() { + @Override + public void writeFileChunk( + StoreFileMetadata fileMetadata, + long position, + BytesReference content, + boolean lastChunk, + int totalTranslogOps, + ActionListener listener + ) { + Assert.fail(); + } + }); + SegmentFileTransferHandler handler = new SegmentFileTransferHandler( + shard, + targetNode, + chunkWriter, + logger, + shard.getThreadPool(), + cancellableThreads, + fileChunkSizeInBytes, + maxConcurrentFileChunks + ); + + final MultiChunkTransfer transfer = handler.createTransfer( + shard.store(), + filesToSend, + translogOps, + mock(ActionListener.class) + ); + + // start the transfer + cancellableThreads.cancel("test"); + transfer.start(); + verifyNoInteractions(chunkWriter); + IOUtils.close(transfer); + } + + public void testSendFiles_cancelThreads_afterStart() throws IOException, InterruptedException { + // index a doc a flush so we have more than 1 file to send. + indexDoc(shard, "_doc", "test"); + flushShard(shard, true); + filesToSend = getFilestoSend(shard); + + // we should have 4 files to send now - + // [_0.cfe, _0.si, _0.cfs, segments_3] + assertEquals(4, filesToSend.length); + + final CountDownLatch countDownLatch = new CountDownLatch(1); + FileChunkWriter chunkWriter = spy(new FileChunkWriter() { + @Override + public void writeFileChunk( + StoreFileMetadata fileMetadata, + long position, + BytesReference content, + boolean lastChunk, + int totalTranslogOps, + ActionListener listener + ) { + // cancel the threads at this point, we'll ensure this is not invoked more than once. + cancellableThreads.cancel("test"); + listener.onResponse(null); + } + }); + SegmentFileTransferHandler handler = new SegmentFileTransferHandler( + shard, + targetNode, + chunkWriter, + logger, + shard.getThreadPool(), + cancellableThreads, + fileChunkSizeInBytes, + maxConcurrentFileChunks + ); + + final MultiChunkTransfer transfer = handler.createTransfer( + shard.store(), + filesToSend, + translogOps, + new ActionListener() { + @Override + public void onResponse(Void unused) { + // do nothing here, we will just resolve in test. 
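// Taken together, the cancellation tests in this class pin down the behaviour asserted above
// and below (an inference from the assertions, not a separate spec): cancelling the
// CancellableThreads before start() means the chunk writer is never invoked at all, while
// cancelling after the first chunk lets exactly one writeFileChunk call through and then
// surfaces a CancellableThreads.ExecutionCancelledException on the transfer's onFailure
// listener instead of writing the remaining files.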
+ } + + @Override + public void onFailure(Exception e) { + assertEquals(CancellableThreads.ExecutionCancelledException.class, e.getClass()); + countDownLatch.countDown(); + } + } + ); + + // start the transfer + transfer.start(); + countDownLatch.await(30, TimeUnit.SECONDS); + verify(chunkWriter, times(1)).writeFileChunk(any(), anyLong(), any(), anyBoolean(), anyInt(), any()); + IOUtils.close(transfer); + } + + public void testSendFiles_CorruptIndexException() throws Exception { + final CancellableThreads cancellableThreads = new CancellableThreads(); + SegmentFileTransferHandler handler = new SegmentFileTransferHandler( + shard, + targetNode, + mock(FileChunkWriter.class), + logger, + shard.getThreadPool(), + cancellableThreads, + fileChunkSizeInBytes, + maxConcurrentFileChunks + ); + final StoreFileMetadata SEGMENTS_FILE = new StoreFileMetadata( + IndexFileNames.SEGMENTS, + 1L, + "0", + org.apache.lucene.util.Version.LATEST + ); + + doNothing().when(shard).failShard(anyString(), any()); + assertThrows( + CorruptIndexException.class, + () -> { + handler.handleErrorOnSendFiles( + shard.store(), + new CorruptIndexException("test", "test"), + new StoreFileMetadata[] { SEGMENTS_FILE } + ); + } + ); + + verify(shard, times(1)).failShard(any(), any()); + } +} diff --git a/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationSourceHandlerTests.java b/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationSourceHandlerTests.java new file mode 100644 index 0000000000000..70061c54d0da2 --- /dev/null +++ b/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationSourceHandlerTests.java @@ -0,0 +1,193 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.indices.replication; + +import org.hamcrest.MatcherAssert; +import org.hamcrest.Matchers; +import org.junit.Assert; +import org.mockito.Mockito; +import org.opensearch.OpenSearchException; +import org.opensearch.Version; +import org.opensearch.action.ActionListener; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.index.shard.IndexShard; +import org.opensearch.index.shard.IndexShardTestCase; +import org.opensearch.index.store.StoreFileMetadata; +import org.opensearch.indices.recovery.FileChunkWriter; +import org.opensearch.indices.replication.checkpoint.ReplicationCheckpoint; +import org.opensearch.indices.replication.common.CopyState; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; + +import static org.mockito.Mockito.mock; + +public class SegmentReplicationSourceHandlerTests extends IndexShardTestCase { + + private final DiscoveryNode localNode = new DiscoveryNode("local", buildNewFakeTransportAddress(), Version.CURRENT); + private DiscoveryNode replicaDiscoveryNode; + private IndexShard primary; + private IndexShard replica; + + private FileChunkWriter chunkWriter; + + @Override + public void setUp() throws Exception { + super.setUp(); + primary = newStartedShard(true); + replica = newShard(primary.shardId(), false); + recoverReplica(replica, primary, true); + replicaDiscoveryNode = replica.recoveryState().getTargetNode(); + } + + @Override + public void tearDown() throws Exception { + closeShards(primary, replica); + super.tearDown(); + } + + public void testSendFiles() throws IOException { + chunkWriter = (fileMetadata, position, content, lastChunk, totalTranslogOps, listener) -> listener.onResponse(null); + + final ReplicationCheckpoint latestReplicationCheckpoint = primary.getLatestReplicationCheckpoint(); + final CopyState copyState = new CopyState(latestReplicationCheckpoint, primary); + SegmentReplicationSourceHandler handler = new SegmentReplicationSourceHandler( + localNode, + chunkWriter, + threadPool, + copyState, + 5000, + 1 + ); + + final List expectedFiles = List.copyOf(copyState.getMetadataSnapshot().asMap().values()); + + final GetSegmentFilesRequest getSegmentFilesRequest = new GetSegmentFilesRequest( + 1L, + replica.routingEntry().allocationId().getId(), + replicaDiscoveryNode, + expectedFiles, + latestReplicationCheckpoint + ); + + handler.sendFiles(getSegmentFilesRequest, new ActionListener<>() { + @Override + public void onResponse(GetSegmentFilesResponse getSegmentFilesResponse) { + MatcherAssert.assertThat(getSegmentFilesResponse.files, Matchers.containsInAnyOrder(expectedFiles.toArray())); + } + + @Override + public void onFailure(Exception e) { + Assert.fail(); + } + }); + } + + public void testSendFiles_emptyRequest() throws IOException { + chunkWriter = mock(FileChunkWriter.class); + + final ReplicationCheckpoint latestReplicationCheckpoint = primary.getLatestReplicationCheckpoint(); + final CopyState copyState = new CopyState(latestReplicationCheckpoint, primary); + SegmentReplicationSourceHandler handler = new SegmentReplicationSourceHandler( + localNode, + chunkWriter, + threadPool, + copyState, + 5000, + 1 + ); + + final GetSegmentFilesRequest getSegmentFilesRequest = new GetSegmentFilesRequest( + 1L, + replica.routingEntry().allocationId().getId(), + replicaDiscoveryNode, + Collections.emptyList(), + latestReplicationCheckpoint + ); + + handler.sendFiles(getSegmentFilesRequest, new ActionListener<>() { + @Override + public void onResponse(GetSegmentFilesResponse 
getSegmentFilesResponse) { + assertTrue(getSegmentFilesResponse.files.isEmpty()); + Mockito.verifyNoInteractions(chunkWriter); + } + + @Override + public void onFailure(Exception e) { + Assert.fail(); + } + }); + } + + public void testSendFileFails() throws IOException { + chunkWriter = (fileMetadata, position, content, lastChunk, totalTranslogOps, listener) -> listener.onFailure( + new OpenSearchException("Test") + ); + + final ReplicationCheckpoint latestReplicationCheckpoint = primary.getLatestReplicationCheckpoint(); + final CopyState copyState = new CopyState(latestReplicationCheckpoint, primary); + SegmentReplicationSourceHandler handler = new SegmentReplicationSourceHandler( + localNode, + chunkWriter, + threadPool, + copyState, + 5000, + 1 + ); + + final List expectedFiles = List.copyOf(copyState.getMetadataSnapshot().asMap().values()); + + final GetSegmentFilesRequest getSegmentFilesRequest = new GetSegmentFilesRequest( + 1L, + replica.routingEntry().allocationId().getId(), + replicaDiscoveryNode, + expectedFiles, + latestReplicationCheckpoint + ); + + handler.sendFiles(getSegmentFilesRequest, new ActionListener<>() { + @Override + public void onResponse(GetSegmentFilesResponse getSegmentFilesResponse) { + Assert.fail(); + } + + @Override + public void onFailure(Exception e) { + assertEquals(e.getClass(), OpenSearchException.class); + } + }); + } + + public void testReplicationAlreadyRunning() throws IOException { + chunkWriter = mock(FileChunkWriter.class); + + final ReplicationCheckpoint latestReplicationCheckpoint = primary.getLatestReplicationCheckpoint(); + final CopyState copyState = new CopyState(latestReplicationCheckpoint, primary); + SegmentReplicationSourceHandler handler = new SegmentReplicationSourceHandler( + localNode, + chunkWriter, + threadPool, + copyState, + 5000, + 1 + ); + + final GetSegmentFilesRequest getSegmentFilesRequest = new GetSegmentFilesRequest( + 1L, + replica.routingEntry().allocationId().getId(), + replicaDiscoveryNode, + Collections.emptyList(), + latestReplicationCheckpoint + ); + + handler.sendFiles(getSegmentFilesRequest, mock(ActionListener.class)); + Assert.assertThrows(OpenSearchException.class, () -> { handler.sendFiles(getSegmentFilesRequest, mock(ActionListener.class)); }); + } +} diff --git a/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationSourceServiceTests.java b/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationSourceServiceTests.java index 67c867d360e70..8d2ca9ff63f3d 100644 --- a/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationSourceServiceTests.java +++ b/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationSourceServiceTests.java @@ -9,13 +9,16 @@ package org.opensearch.indices.replication; import org.opensearch.Version; +import org.opensearch.action.ActionListener; import org.opensearch.cluster.node.DiscoveryNode; import org.opensearch.common.io.stream.StreamInput; +import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Settings; import org.opensearch.index.IndexService; import org.opensearch.index.shard.IndexShard; import org.opensearch.index.shard.ShardId; import org.opensearch.indices.IndicesService; +import org.opensearch.indices.recovery.RecoverySettings; import org.opensearch.indices.replication.checkpoint.ReplicationCheckpoint; import org.opensearch.indices.replication.common.CopyStateTests; import org.opensearch.test.OpenSearchTestCase; @@ -30,30 +33,23 @@ import java.util.Collections; 
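// The refactor below funnels both transport round-trips through the new executeGetCheckpointInfo
// and executeGetSegmentFiles helpers. They share the same TransportResponseHandler shape,
// sketched here for GetSegmentFilesResponse and bridged onto the helper's ActionListener
// ('listener' below). Method names are taken from this diff; ThreadPool.Names.SAME as the
// executor is an assumption, since the executor() body is elided in the hunk.
TransportResponseHandler<GetSegmentFilesResponse> handler = new TransportResponseHandler<>() {
    @Override
    public GetSegmentFilesResponse read(StreamInput in) throws IOException {
        return new GetSegmentFilesResponse(in);   // deserialize the response off the wire
    }

    @Override
    public void handleResponse(GetSegmentFilesResponse response) {
        listener.onResponse(response);            // hand the result to the caller's ActionListener
    }

    @Override
    public void handleException(TransportException e) {
        listener.onFailure(e);
    }

    @Override
    public String executor() {
        return ThreadPool.Names.SAME;             // assumed executor; not shown in this diff
    }
};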
import java.util.concurrent.TimeUnit; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.when; +import static org.mockito.Mockito.*; public class SegmentReplicationSourceServiceTests extends OpenSearchTestCase { - private ShardId testShardId; private ReplicationCheckpoint testCheckpoint; - private IndicesService mockIndicesService; - private IndexService mockIndexService; - private IndexShard mockIndexShard; private TestThreadPool testThreadPool; - private CapturingTransport transport; private TransportService transportService; private DiscoveryNode localNode; - private SegmentReplicationSourceService segmentReplicationSourceService; @Override public void setUp() throws Exception { super.setUp(); // setup mocks - mockIndexShard = CopyStateTests.createMockIndexShard(); - testShardId = mockIndexShard.shardId(); - mockIndicesService = mock(IndicesService.class); - mockIndexService = mock(IndexService.class); + IndexShard mockIndexShard = CopyStateTests.createMockIndexShard(); + ShardId testShardId = mockIndexShard.shardId(); + IndicesService mockIndicesService = mock(IndicesService.class); + IndexService mockIndexService = mock(IndexService.class); when(mockIndicesService.indexService(testShardId.getIndex())).thenReturn(mockIndexService); when(mockIndexService.getShard(testShardId.id())).thenReturn(mockIndexShard); @@ -66,7 +62,7 @@ public void setUp() throws Exception { 0L ); testThreadPool = new TestThreadPool("test", Settings.EMPTY); - transport = new CapturingTransport(); + CapturingTransport transport = new CapturingTransport(); localNode = new DiscoveryNode("local", buildNewFakeTransportAddress(), Version.CURRENT); transportService = transport.createTransportService( Settings.EMPTY, @@ -78,7 +74,16 @@ public void setUp() throws Exception { ); transportService.start(); transportService.acceptIncomingRequests(); - segmentReplicationSourceService = new SegmentReplicationSourceService(transportService, mockIndicesService); + + final Settings settings = Settings.builder().put("node.name", SegmentReplicationTargetServiceTests.class.getSimpleName()).build(); + final ClusterSettings clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + final RecoverySettings recoverySettings = new RecoverySettings(settings, clusterSettings); + + SegmentReplicationSourceService segmentReplicationSourceService = new SegmentReplicationSourceService( + mockIndicesService, + transportService, + recoverySettings + ); } @Override @@ -88,7 +93,7 @@ public void tearDown() throws Exception { super.tearDown(); } - public void testGetSegmentFiles_EmptyResponse() { + public void testGetSegmentFiles() { final GetSegmentFilesRequest request = new GetSegmentFilesRequest( 1, "allocationId", @@ -96,19 +101,52 @@ public void testGetSegmentFiles_EmptyResponse() { Collections.emptyList(), testCheckpoint ); + executeGetSegmentFiles(request, new ActionListener<>() { + @Override + public void onResponse(GetSegmentFilesResponse response) { + assertEquals(0, response.files.size()); + } + + @Override + public void onFailure(Exception e) { + fail("unexpected exception: " + e); + } + }); + } + + public void testCheckpointInfo() { + executeGetCheckpointInfo(new ActionListener<>() { + @Override + public void onResponse(CheckpointInfoResponse response) { + assertEquals(testCheckpoint, response.getCheckpoint()); + assertNotNull(response.getInfosBytes()); + // CopyStateTests sets up one pending delete file and one committed segments file + assertEquals(1, 
response.getPendingDeleteFiles().size()); + assertEquals(1, response.getSnapshot().size()); + } + + @Override + public void onFailure(Exception e) { + fail("unexpected exception: " + e); + } + }); + } + + private void executeGetCheckpointInfo(ActionListener listener) { + final CheckpointInfoRequest request = new CheckpointInfoRequest(1L, "testAllocationId", localNode, testCheckpoint); transportService.sendRequest( localNode, - SegmentReplicationSourceService.Actions.GET_SEGMENT_FILES, + SegmentReplicationSourceService.Actions.GET_CHECKPOINT_INFO, request, - new TransportResponseHandler() { + new TransportResponseHandler() { @Override - public void handleResponse(GetSegmentFilesResponse response) { - assertEquals(0, response.files.size()); + public void handleResponse(CheckpointInfoResponse response) { + listener.onResponse(response); } @Override public void handleException(TransportException e) { - fail("unexpected exception: " + e); + listener.onFailure(e); } @Override @@ -117,32 +155,27 @@ public String executor() { } @Override - public GetSegmentFilesResponse read(StreamInput in) throws IOException { - return new GetSegmentFilesResponse(in); + public CheckpointInfoResponse read(StreamInput in) throws IOException { + return new CheckpointInfoResponse(in); } } ); } - public void testCheckpointInfo() { - final CheckpointInfoRequest request = new CheckpointInfoRequest(1L, "testAllocationId", localNode, testCheckpoint); + private void executeGetSegmentFiles(GetSegmentFilesRequest request, ActionListener listener) { transportService.sendRequest( localNode, - SegmentReplicationSourceService.Actions.GET_CHECKPOINT_INFO, + SegmentReplicationSourceService.Actions.GET_SEGMENT_FILES, request, - new TransportResponseHandler() { + new TransportResponseHandler() { @Override - public void handleResponse(CheckpointInfoResponse response) { - assertEquals(testCheckpoint, response.getCheckpoint()); - assertNotNull(response.getInfosBytes()); - // CopyStateTests sets up one pending delete file and one committed segments file - assertEquals(1, response.getPendingDeleteFiles().size()); - assertEquals(1, response.getSnapshot().size()); + public void handleResponse(GetSegmentFilesResponse response) { + listener.onResponse(response); } @Override public void handleException(TransportException e) { - fail("unexpected exception: " + e); + listener.onFailure(e); } @Override @@ -151,11 +184,10 @@ public String executor() { } @Override - public CheckpointInfoResponse read(StreamInput in) throws IOException { - return new CheckpointInfoResponse(in); + public GetSegmentFilesResponse read(StreamInput in) throws IOException { + return new GetSegmentFilesResponse(in); } } ); } - } diff --git a/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationTargetServiceTests.java b/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationTargetServiceTests.java index aa17dec5767da..33734fe85def5 100644 --- a/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationTargetServiceTests.java +++ b/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationTargetServiceTests.java @@ -9,6 +9,7 @@ package org.opensearch.indices.replication; import org.junit.Assert; +import org.mockito.ArgumentCaptor; import org.mockito.Mockito; import org.opensearch.OpenSearchException; import org.opensearch.action.ActionListener; @@ -18,6 +19,7 @@ import org.opensearch.index.shard.IndexShardTestCase; import org.opensearch.indices.recovery.RecoverySettings; import 
org.opensearch.indices.replication.checkpoint.ReplicationCheckpoint; +import org.opensearch.indices.replication.common.ReplicationLuceneIndex; import org.opensearch.transport.TransportService; import java.io.IOException; @@ -39,7 +41,7 @@ public void setUp() throws Exception { final ClusterSettings clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); final RecoverySettings recoverySettings = new RecoverySettings(settings, clusterSettings); final TransportService transportService = mock(TransportService.class); - indexShard = newShard(false, settings); + indexShard = newStartedShard(false, settings); checkpoint = new ReplicationCheckpoint(indexShard.shardId(), 0L, 0L, 0L, 0L); SegmentReplicationSourceFactory replicationSourceFactory = mock(SegmentReplicationSourceFactory.class); replicationSource = mock(SegmentReplicationSource.class); @@ -54,7 +56,7 @@ public void tearDown() throws Exception { super.tearDown(); } - public void testTargetReturnsSuccess_listenerCompletes() throws IOException { + public void testTargetReturnsSuccess_listenerCompletes() { final SegmentReplicationTarget target = new SegmentReplicationTarget( checkpoint, indexShard, @@ -73,15 +75,16 @@ public void onReplicationFailure(SegmentReplicationState state, OpenSearchExcept ); final SegmentReplicationTarget spy = Mockito.spy(target); doAnswer(invocation -> { + // setting stage to REPLICATING so transition in markAsDone succeeds on listener completion + target.state().setStage(SegmentReplicationState.Stage.REPLICATING); final ActionListener listener = invocation.getArgument(0); listener.onResponse(null); return null; }).when(spy).startReplication(any()); sut.startReplication(spy); - closeShards(indexShard); } - public void testTargetThrowsException() throws IOException { + public void testTargetThrowsException() { final OpenSearchException expectedError = new OpenSearchException("Fail"); final SegmentReplicationTarget target = new SegmentReplicationTarget( checkpoint, @@ -95,7 +98,7 @@ public void onReplicationDone(SegmentReplicationState state) { @Override public void onReplicationFailure(SegmentReplicationState state, OpenSearchException e, boolean sendShardFailure) { - assertEquals(SegmentReplicationState.Stage.INIT, state.getStage()); + assertEquals(SegmentReplicationState.Stage.REPLICATING, state.getStage()); assertEquals(expectedError, e.getCause()); assertTrue(sendShardFailure); } @@ -103,15 +106,78 @@ public void onReplicationFailure(SegmentReplicationState state, OpenSearchExcept ); final SegmentReplicationTarget spy = Mockito.spy(target); doAnswer(invocation -> { + // setting stage to REPLICATING so transition in markAsDone succeeds on listener completion + target.state().setStage(SegmentReplicationState.Stage.REPLICATING); final ActionListener listener = invocation.getArgument(0); listener.onFailure(expectedError); return null; }).when(spy).startReplication(any()); sut.startReplication(spy); - closeShards(indexShard); } - public void testBeforeIndexShardClosed_CancelsOngoingReplications() throws IOException { + public void testAlreadyOnNewCheckpoint() { + SegmentReplicationTargetService spy = spy(sut); + spy.onNewCheckpoint(indexShard.getLatestReplicationCheckpoint(), indexShard); + verify(spy, times(0)).startReplication(any(), any(), any()); + } + + public void testShardAlreadyReplicating() { + SegmentReplicationTargetService spy = spy(sut); + // Create a separate target and start it so the shard is already replicating. 
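// Read together, the onNewCheckpoint tests in this class pin down when the target service kicks
// off a replication (summarised from the assertions here, not an independent spec): a checkpoint
// the shard already has, a checkpoint older than the local one, a shard that is not started, or
// a shard that is already replicating all result in no startReplication call; otherwise
// replication starts, and if it fails with sendShardFailure=true the shard is failed via
// IndexShard#failShard, which testNewCheckpoint_validationPassesAndReplicationFails captures
// with an ArgumentCaptor.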
+ final SegmentReplicationTarget target = new SegmentReplicationTarget( + checkpoint, + indexShard, + replicationSource, + mock(SegmentReplicationTargetService.SegmentReplicationListener.class) + ); + final SegmentReplicationTarget spyTarget = Mockito.spy(target); + spy.startReplication(spyTarget); + + // a new checkpoint comes in for the same IndexShard. + spy.onNewCheckpoint(checkpoint, indexShard); + verify(spy, times(0)).startReplication(any(), any(), any()); + spyTarget.markAsDone(); + } + + public void testNewCheckpointBehindCurrentCheckpoint() { + SegmentReplicationTargetService spy = spy(sut); + spy.onNewCheckpoint(checkpoint, indexShard); + verify(spy, times(0)).startReplication(any(), any(), any()); + } + + public void testShardNotStarted() throws IOException { + SegmentReplicationTargetService spy = spy(sut); + IndexShard shard = newShard(false); + spy.onNewCheckpoint(checkpoint, shard); + verify(spy, times(0)).startReplication(any(), any(), any()); + closeShards(shard); + } + + public void testNewCheckpoint_validationPassesAndReplicationFails() throws IOException { + allowShardFailures(); + SegmentReplicationTargetService spy = spy(sut); + IndexShard spyShard = spy(indexShard); + ReplicationCheckpoint cp = indexShard.getLatestReplicationCheckpoint(); + ReplicationCheckpoint newCheckpoint = new ReplicationCheckpoint( + cp.getShardId(), + cp.getPrimaryTerm(), + cp.getSegmentsGen(), + cp.getSeqNo(), + cp.getSegmentInfosVersion() + 1 + ); + ArgumentCaptor captor = ArgumentCaptor.forClass( + SegmentReplicationTargetService.SegmentReplicationListener.class + ); + doNothing().when(spy).startReplication(any(), any(), any()); + spy.onNewCheckpoint(newCheckpoint, spyShard); + verify(spy, times(1)).startReplication(any(), any(), captor.capture()); + SegmentReplicationTargetService.SegmentReplicationListener listener = captor.getValue(); + listener.onFailure(new SegmentReplicationState(new ReplicationLuceneIndex()), new OpenSearchException("testing"), true); + verify(spyShard).failShard(any(), any()); + closeShard(indexShard, false); + } + + public void testBeforeIndexShardClosed_CancelsOngoingReplications() { final SegmentReplicationTarget target = new SegmentReplicationTarget( checkpoint, indexShard, @@ -121,7 +187,6 @@ public void testBeforeIndexShardClosed_CancelsOngoingReplications() throws IOExc final SegmentReplicationTarget spy = Mockito.spy(target); sut.startReplication(spy); sut.beforeIndexShardClosed(indexShard.shardId(), indexShard, Settings.EMPTY); - Mockito.verify(spy, times(1)).cancel(any()); - closeShards(indexShard); + verify(spy, times(1)).cancel(any()); } } diff --git a/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationTargetTests.java b/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationTargetTests.java new file mode 100644 index 0000000000000..a0944ee249859 --- /dev/null +++ b/server/src/test/java/org/opensearch/indices/replication/SegmentReplicationTargetTests.java @@ -0,0 +1,370 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.indices.replication; + +import org.apache.lucene.index.IndexFileNames; +import org.apache.lucene.index.IndexFormatTooNewException; +import org.apache.lucene.index.SegmentInfos; +import org.apache.lucene.store.ByteBuffersDataOutput; +import org.apache.lucene.store.ByteBuffersIndexOutput; +import org.apache.lucene.util.Version; +import org.junit.Assert; +import org.mockito.Mockito; +import org.opensearch.action.ActionListener; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.common.settings.Settings; +import org.opensearch.index.engine.NRTReplicationEngineFactory; +import org.opensearch.index.shard.IndexShard; +import org.opensearch.index.shard.IndexShardTestCase; +import org.opensearch.index.store.Store; +import org.opensearch.index.store.StoreFileMetadata; +import org.opensearch.indices.replication.checkpoint.ReplicationCheckpoint; +import org.opensearch.indices.replication.common.ReplicationType; + +import java.io.IOException; +import java.util.List; +import java.util.Map; +import java.util.Set; + +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyLong; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class SegmentReplicationTargetTests extends IndexShardTestCase { + + private SegmentReplicationTarget segrepTarget; + private IndexShard indexShard, spyIndexShard; + private ReplicationCheckpoint repCheckpoint; + private ByteBuffersDataOutput buffer; + + private static final StoreFileMetadata SEGMENTS_FILE = new StoreFileMetadata(IndexFileNames.SEGMENTS, 1L, "0", Version.LATEST); + private static final StoreFileMetadata SEGMENTS_FILE_DIFF = new StoreFileMetadata( + IndexFileNames.SEGMENTS, + 5L, + "different", + Version.LATEST + ); + private static final StoreFileMetadata PENDING_DELETE_FILE = new StoreFileMetadata("pendingDelete.del", 1L, "1", Version.LATEST); + + private static final Store.MetadataSnapshot SI_SNAPSHOT = new Store.MetadataSnapshot( + Map.of(SEGMENTS_FILE.name(), SEGMENTS_FILE), + null, + 0 + ); + + private static final Store.MetadataSnapshot SI_SNAPSHOT_DIFFERENT = new Store.MetadataSnapshot( + Map.of(SEGMENTS_FILE_DIFF.name(), SEGMENTS_FILE_DIFF), + null, + 0 + ); + + SegmentInfos testSegmentInfos; + + @Override + public void setUp() throws Exception { + + super.setUp(); + Settings indexSettings = Settings.builder() + .put(IndexMetadata.SETTING_VERSION_CREATED, org.opensearch.Version.CURRENT) + .put(IndexMetadata.SETTING_REPLICATION_TYPE, ReplicationType.SEGMENT) + .build(); + + indexShard = newStartedShard(false, indexSettings, new NRTReplicationEngineFactory()); + spyIndexShard = spy(indexShard); + + Mockito.doNothing().when(spyIndexShard).finalizeReplication(any(SegmentInfos.class), anyLong()); + testSegmentInfos = spyIndexShard.store().readLastCommittedSegmentsInfo(); + buffer = new ByteBuffersDataOutput(); + try (ByteBuffersIndexOutput indexOutput = new ByteBuffersIndexOutput(buffer, "", null)) { + testSegmentInfos.write(indexOutput); + } + repCheckpoint = new ReplicationCheckpoint( + spyIndexShard.shardId(), + spyIndexShard.getPendingPrimaryTerm(), + testSegmentInfos.getGeneration(), + spyIndexShard.seqNoStats().getLocalCheckpoint(), + testSegmentInfos.version + ); + } + + public void testSuccessfulResponse_startReplication() { + + SegmentReplicationSource 
segrepSource = new SegmentReplicationSource() { + @Override + public void getCheckpointMetadata( + long replicationId, + ReplicationCheckpoint checkpoint, + ActionListener<CheckpointInfoResponse> listener + ) { + listener.onResponse(new CheckpointInfoResponse(checkpoint, SI_SNAPSHOT, buffer.toArrayCopy(), Set.of(PENDING_DELETE_FILE))); + } + + @Override + public void getSegmentFiles( + long replicationId, + ReplicationCheckpoint checkpoint, + List<StoreFileMetadata> filesToFetch, + Store store, + ActionListener<GetSegmentFilesResponse> listener + ) { + assertEquals(filesToFetch.size(), 2); + assert (filesToFetch.contains(SEGMENTS_FILE)); + assert (filesToFetch.contains(PENDING_DELETE_FILE)); + listener.onResponse(new GetSegmentFilesResponse(filesToFetch)); + } + }; + + SegmentReplicationTargetService.SegmentReplicationListener segRepListener = mock( + SegmentReplicationTargetService.SegmentReplicationListener.class + ); + segrepTarget = new SegmentReplicationTarget(repCheckpoint, spyIndexShard, segrepSource, segRepListener); + + segrepTarget.startReplication(new ActionListener<Void>() { + @Override + public void onResponse(Void replicationResponse) { + try { + verify(spyIndexShard, times(1)).finalizeReplication(any(), anyLong()); + } catch (IOException ex) { + Assert.fail(); + } + } + + @Override + public void onFailure(Exception e) { + logger.error("Unexpected test error", e); + Assert.fail(); + } + }); + } + + public void testFailureResponse_getCheckpointMetadata() { + + Exception exception = new Exception("dummy failure"); + SegmentReplicationSource segrepSource = new SegmentReplicationSource() { + @Override + public void getCheckpointMetadata( + long replicationId, + ReplicationCheckpoint checkpoint, + ActionListener<CheckpointInfoResponse> listener + ) { + listener.onFailure(exception); + } + + @Override + public void getSegmentFiles( + long replicationId, + ReplicationCheckpoint checkpoint, + List<StoreFileMetadata> filesToFetch, + Store store, + ActionListener<GetSegmentFilesResponse> listener + ) { + listener.onResponse(new GetSegmentFilesResponse(filesToFetch)); + } + }; + SegmentReplicationTargetService.SegmentReplicationListener segRepListener = mock( + SegmentReplicationTargetService.SegmentReplicationListener.class + ); + segrepTarget = new SegmentReplicationTarget(repCheckpoint, spyIndexShard, segrepSource, segRepListener); + + segrepTarget.startReplication(new ActionListener<Void>() { + @Override + public void onResponse(Void replicationResponse) { + Assert.fail(); + } + + @Override + public void onFailure(Exception e) { + assertEquals(exception, e.getCause().getCause()); + } + }); + } + + public void testFailureResponse_getSegmentFiles() { + + Exception exception = new Exception("dummy failure"); + SegmentReplicationSource segrepSource = new SegmentReplicationSource() { + @Override + public void getCheckpointMetadata( + long replicationId, + ReplicationCheckpoint checkpoint, + ActionListener<CheckpointInfoResponse> listener + ) { + listener.onResponse(new CheckpointInfoResponse(checkpoint, SI_SNAPSHOT, buffer.toArrayCopy(), Set.of(PENDING_DELETE_FILE))); + } + + @Override + public void getSegmentFiles( + long replicationId, + ReplicationCheckpoint checkpoint, + List<StoreFileMetadata> filesToFetch, + Store store, + ActionListener<GetSegmentFilesResponse> listener + ) { + listener.onFailure(exception); + } + }; + SegmentReplicationTargetService.SegmentReplicationListener segRepListener = mock( + SegmentReplicationTargetService.SegmentReplicationListener.class + ); + segrepTarget = new SegmentReplicationTarget(repCheckpoint, spyIndexShard, segrepSource, segRepListener); + + segrepTarget.startReplication(new ActionListener<Void>() { + @Override + public void onResponse(Void replicationResponse)
{ + Assert.fail(); + } + + @Override + public void onFailure(Exception e) { + assertEquals(exception, e.getCause().getCause()); + } + }); + } + + public void testFailure_finalizeReplication_IOException() throws IOException { + + IOException exception = new IOException("dummy failure"); + SegmentReplicationSource segrepSource = new SegmentReplicationSource() { + @Override + public void getCheckpointMetadata( + long replicationId, + ReplicationCheckpoint checkpoint, + ActionListener<CheckpointInfoResponse> listener + ) { + listener.onResponse(new CheckpointInfoResponse(checkpoint, SI_SNAPSHOT, buffer.toArrayCopy(), Set.of(PENDING_DELETE_FILE))); + } + + @Override + public void getSegmentFiles( + long replicationId, + ReplicationCheckpoint checkpoint, + List<StoreFileMetadata> filesToFetch, + Store store, + ActionListener<GetSegmentFilesResponse> listener + ) { + listener.onResponse(new GetSegmentFilesResponse(filesToFetch)); + } + }; + SegmentReplicationTargetService.SegmentReplicationListener segRepListener = mock( + SegmentReplicationTargetService.SegmentReplicationListener.class + ); + segrepTarget = new SegmentReplicationTarget(repCheckpoint, spyIndexShard, segrepSource, segRepListener); + + doThrow(exception).when(spyIndexShard).finalizeReplication(any(), anyLong()); + + segrepTarget.startReplication(new ActionListener<Void>() { + @Override + public void onResponse(Void replicationResponse) { + Assert.fail(); + } + + @Override + public void onFailure(Exception e) { + assertEquals(exception, e.getCause()); + } + }); + } + + public void testFailure_finalizeReplication_IndexFormatException() throws IOException { + + IndexFormatTooNewException exception = new IndexFormatTooNewException("string", 1, 2, 1); + SegmentReplicationSource segrepSource = new SegmentReplicationSource() { + @Override + public void getCheckpointMetadata( + long replicationId, + ReplicationCheckpoint checkpoint, + ActionListener<CheckpointInfoResponse> listener + ) { + listener.onResponse(new CheckpointInfoResponse(checkpoint, SI_SNAPSHOT, buffer.toArrayCopy(), Set.of(PENDING_DELETE_FILE))); + } + + @Override + public void getSegmentFiles( + long replicationId, + ReplicationCheckpoint checkpoint, + List<StoreFileMetadata> filesToFetch, + Store store, + ActionListener<GetSegmentFilesResponse> listener + ) { + listener.onResponse(new GetSegmentFilesResponse(filesToFetch)); + } + }; + SegmentReplicationTargetService.SegmentReplicationListener segRepListener = mock( + SegmentReplicationTargetService.SegmentReplicationListener.class + ); + segrepTarget = new SegmentReplicationTarget(repCheckpoint, spyIndexShard, segrepSource, segRepListener); + + doThrow(exception).when(spyIndexShard).finalizeReplication(any(), anyLong()); + + segrepTarget.startReplication(new ActionListener<Void>() { + @Override + public void onResponse(Void replicationResponse) { + Assert.fail(); + } + + @Override + public void onFailure(Exception e) { + assertEquals(exception, e.getCause()); + } + }); + } + + public void testFailure_differentSegmentFiles() throws IOException { + + SegmentReplicationSource segrepSource = new SegmentReplicationSource() { + @Override + public void getCheckpointMetadata( + long replicationId, + ReplicationCheckpoint checkpoint, + ActionListener<CheckpointInfoResponse> listener + ) { + listener.onResponse(new CheckpointInfoResponse(checkpoint, SI_SNAPSHOT, buffer.toArrayCopy(), Set.of(PENDING_DELETE_FILE))); + } + + @Override + public void getSegmentFiles( + long replicationId, + ReplicationCheckpoint checkpoint, + List<StoreFileMetadata> filesToFetch, + Store store, + ActionListener<GetSegmentFilesResponse> listener + ) { + listener.onResponse(new GetSegmentFilesResponse(filesToFetch)); + } + }; +
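// The spied target below stubs getMetadataSnapshot() to return SI_SNAPSHOT_DIFFERENT, whose copy of the + // segments file has a different checksum and length than the one advertised by the source, so + // startReplication is expected to fail and report an IllegalStateException to onFailure. +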
SegmentReplicationTargetService.SegmentReplicationListener segRepListener = mock( + SegmentReplicationTargetService.SegmentReplicationListener.class + ); + segrepTarget = spy(new SegmentReplicationTarget(repCheckpoint, indexShard, segrepSource, segRepListener)); + when(segrepTarget.getMetadataSnapshot()).thenReturn(SI_SNAPSHOT_DIFFERENT); + segrepTarget.startReplication(new ActionListener<Void>() { + @Override + public void onResponse(Void replicationResponse) { + Assert.fail(); + } + + @Override + public void onFailure(Exception e) { + assert (e instanceof IllegalStateException); + } + }); + } + + @Override + public void tearDown() throws Exception { + super.tearDown(); + segrepTarget.markAsDone(); + closeShards(spyIndexShard, indexShard); + } +} diff --git a/server/src/test/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointActionTests.java b/server/src/test/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointActionTests.java index 074b5ff613b08..77cc1d744f0dc 100644 --- a/server/src/test/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointActionTests.java +++ b/server/src/test/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointActionTests.java @@ -22,7 +22,7 @@ import org.opensearch.index.shard.IndexShard; import org.opensearch.index.shard.ShardId; import org.opensearch.indices.IndicesService; -import org.opensearch.indices.recovery.RecoverySettings; +import org.opensearch.indices.replication.SegmentReplicationTargetService; import org.opensearch.test.OpenSearchTestCase; import org.opensearch.test.transport.CapturingTransport; import org.opensearch.threadpool.TestThreadPool; @@ -73,7 +73,7 @@ public void tearDown() throws Exception { super.tearDown(); } - public void testPublishCheckpointActionOnPrimary() throws InterruptedException { + public void testPublishCheckpointActionOnPrimary() { final IndicesService indicesService = mock(IndicesService.class); final Index index = new Index("index", "uuid"); @@ -87,7 +87,7 @@ public void testPublishCheckpointActionOnPrimary() throws InterruptedException { final ShardId shardId = new ShardId(index, id); when(indexShard.shardId()).thenReturn(shardId); - final RecoverySettings recoverySettings = new RecoverySettings(Settings.EMPTY, clusterService.getClusterSettings()); + final SegmentReplicationTargetService mockTargetService = mock(SegmentReplicationTargetService.class); final PublishCheckpointAction action = new PublishCheckpointAction( Settings.EMPTY, @@ -96,7 +96,8 @@ public void testPublishCheckpointActionOnPrimary() throws InterruptedException { indicesService, threadPool, shardStateAction, - new ActionFilters(Collections.emptySet()) + new ActionFilters(Collections.emptySet()), + mockTargetService ); final ReplicationCheckpoint checkpoint = new ReplicationCheckpoint(indexShard.shardId(), 1111, 111, 11, 1); @@ -116,7 +117,6 @@ public void testPublishCheckpointActionOnReplica() { final IndicesService indicesService = mock(IndicesService.class); final Index index = new Index("index", "uuid"); final IndexService indexService = mock(IndexService.class); when(indicesService.indexServiceSafe(index)).thenReturn(indexService); - final int id = randomIntBetween(0, 4); final IndexShard indexShard = mock(IndexShard.class); when(indexService.getShard(id)).thenReturn(indexShard); @@ -124,7 +124,7 @@ public void testPublishCheckpointActionOnReplica() { final ShardId shardId = new ShardId(index, id); when(indexShard.shardId()).thenReturn(shardId); - final RecoverySettings recoverySettings = new RecoverySettings(Settings.EMPTY,
clusterService.getClusterSettings()); + final SegmentReplicationTargetService mockTargetService = mock(SegmentReplicationTargetService.class); final PublishCheckpointAction action = new PublishCheckpointAction( Settings.EMPTY, @@ -133,7 +133,8 @@ public void testPublishCheckpointActionOnReplica() { indicesService, threadPool, shardStateAction, - new ActionFilters(Collections.emptySet()) + new ActionFilters(Collections.emptySet()), + mockTargetService ); final ReplicationCheckpoint checkpoint = new ReplicationCheckpoint(indexShard.shardId(), 1111, 111, 11, 1); @@ -145,7 +146,7 @@ public void testPublishCheckpointActionOnReplica() { final TransportReplicationAction.ReplicaResult result = listener.actionGet(); // onNewCheckpoint should be called on shard with checkpoint request - verify(indexShard).onNewCheckpoint(request); + verify(mockTargetService, times(1)).onNewCheckpoint(checkpoint, indexShard); // the result should indicate success final AtomicBoolean success = new AtomicBoolean(); diff --git a/server/src/test/java/org/opensearch/indices/replication/common/CopyStateTests.java b/server/src/test/java/org/opensearch/indices/replication/common/CopyStateTests.java index afa38afb0cf2f..a6f0cf7e98411 100644 --- a/server/src/test/java/org/opensearch/indices/replication/common/CopyStateTests.java +++ b/server/src/test/java/org/opensearch/indices/replication/common/CopyStateTests.java @@ -47,7 +47,15 @@ public class CopyStateTests extends IndexShardTestCase { ); public void testCopyStateCreation() throws IOException { - CopyState copyState = new CopyState(createMockIndexShard()); + final IndexShard mockIndexShard = createMockIndexShard(); + ReplicationCheckpoint testCheckpoint = new ReplicationCheckpoint( + mockIndexShard.shardId(), + mockIndexShard.getOperationPrimaryTerm(), + 0L, + mockIndexShard.getProcessedLocalCheckpoint(), + 0L + ); + CopyState copyState = new CopyState(testCheckpoint, mockIndexShard); ReplicationCheckpoint checkpoint = copyState.getCheckpoint(); assertEquals(TEST_SHARD_ID, checkpoint.getShardId()); // version was never set so this should be zero diff --git a/server/src/test/java/org/opensearch/indices/settings/InternalOrPrivateSettingsPlugin.java b/server/src/test/java/org/opensearch/indices/settings/InternalOrPrivateSettingsPlugin.java index 775b4bb185881..2e244908dc4eb 100644 --- a/server/src/test/java/org/opensearch/indices/settings/InternalOrPrivateSettingsPlugin.java +++ b/server/src/test/java/org/opensearch/indices/settings/InternalOrPrivateSettingsPlugin.java @@ -173,7 +173,7 @@ protected UpdateInternalOrPrivateAction.Response read(StreamInput in) throws IOE } @Override - protected void masterOperation( + protected void clusterManagerOperation( final UpdateInternalOrPrivateAction.Request request, final ClusterState state, final ActionListener listener diff --git a/server/src/test/java/org/opensearch/repositories/blobstore/BlobStoreRepositoryTests.java b/server/src/test/java/org/opensearch/repositories/blobstore/BlobStoreRepositoryTests.java index 5ce970e0633d2..14f9a46169fbb 100644 --- a/server/src/test/java/org/opensearch/repositories/blobstore/BlobStoreRepositoryTests.java +++ b/server/src/test/java/org/opensearch/repositories/blobstore/BlobStoreRepositoryTests.java @@ -35,7 +35,7 @@ import org.opensearch.Version; import org.opensearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse; import org.opensearch.action.support.PlainActionFuture; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import 
org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.cluster.metadata.RepositoryMetadata; import org.opensearch.cluster.service.ClusterService; diff --git a/server/src/test/java/org/opensearch/rest/action/admin/cluster/RestClusterHealthActionTests.java b/server/src/test/java/org/opensearch/rest/action/admin/cluster/RestClusterHealthActionTests.java index 8334a1e88190a..975a4d8120965 100644 --- a/server/src/test/java/org/opensearch/rest/action/admin/cluster/RestClusterHealthActionTests.java +++ b/server/src/test/java/org/opensearch/rest/action/admin/cluster/RestClusterHealthActionTests.java @@ -81,7 +81,7 @@ public void testFromRequest() { assertThat(clusterHealthRequest.indices().length, equalTo(1)); assertThat(clusterHealthRequest.indices()[0], equalTo(index)); assertThat(clusterHealthRequest.local(), equalTo(local)); - assertThat(clusterHealthRequest.masterNodeTimeout(), equalTo(TimeValue.parseTimeValue(clusterManagerTimeout, "test"))); + assertThat(clusterHealthRequest.clusterManagerNodeTimeout(), equalTo(TimeValue.parseTimeValue(clusterManagerTimeout, "test"))); assertThat(clusterHealthRequest.timeout(), equalTo(TimeValue.parseTimeValue(timeout, "test"))); assertThat(clusterHealthRequest.waitForStatus(), equalTo(waitForStatus)); assertThat(clusterHealthRequest.waitForNoRelocatingShards(), equalTo(waitForNoRelocatingShards)); diff --git a/server/src/test/java/org/opensearch/search/aggregations/metrics/InternalTDigestPercentilesRanksTests.java b/server/src/test/java/org/opensearch/search/aggregations/metrics/InternalTDigestPercentilesRanksTests.java index aea81fd5d1c78..78296eddbdc2c 100644 --- a/server/src/test/java/org/opensearch/search/aggregations/metrics/InternalTDigestPercentilesRanksTests.java +++ b/server/src/test/java/org/opensearch/search/aggregations/metrics/InternalTDigestPercentilesRanksTests.java @@ -53,7 +53,8 @@ protected InternalTDigestPercentileRanks createTestInstance( final TDigestState state = new TDigestState(100); Arrays.stream(values).forEach(state::add); - assertEquals(state.centroidCount(), values.length); + // the number of centroids is defined as <= the number of samples inserted + assertTrue(state.centroidCount() <= values.length); return new InternalTDigestPercentileRanks(name, percents, state, keyed, format, metadata); } diff --git a/server/src/test/java/org/opensearch/search/aggregations/metrics/InternalTDigestPercentilesTests.java b/server/src/test/java/org/opensearch/search/aggregations/metrics/InternalTDigestPercentilesTests.java index 4d88f8fecd709..101583f1f37c9 100644 --- a/server/src/test/java/org/opensearch/search/aggregations/metrics/InternalTDigestPercentilesTests.java +++ b/server/src/test/java/org/opensearch/search/aggregations/metrics/InternalTDigestPercentilesTests.java @@ -53,7 +53,8 @@ protected InternalTDigestPercentiles createTestInstance( final TDigestState state = new TDigestState(100); Arrays.stream(values).forEach(state::add); - assertEquals(state.centroidCount(), values.length); + // the number of centroids is defined as <= the number of samples inserted + assertTrue(state.centroidCount() <= values.length); return new InternalTDigestPercentiles(name, percents, state, keyed, format, metadata); } diff --git a/server/src/test/java/org/opensearch/search/aggregations/metrics/TDigestPercentilesAggregatorTests.java b/server/src/test/java/org/opensearch/search/aggregations/metrics/TDigestPercentilesAggregatorTests.java index 50415dc10df7e..fd98a090367b2 100644 --- 
a/server/src/test/java/org/opensearch/search/aggregations/metrics/TDigestPercentilesAggregatorTests.java +++ b/server/src/test/java/org/opensearch/search/aggregations/metrics/TDigestPercentilesAggregatorTests.java @@ -105,8 +105,10 @@ public void testSomeMatchesSortedNumericDocValues() throws IOException { }, tdigest -> { assertEquals(7L, tdigest.state.size()); assertEquals(7L, tdigest.state.centroidCount()); - assertEquals(4.5d, tdigest.percentile(75), 0.0d); - assertEquals("4.5", tdigest.percentileAsString(75)); + assertEquals(5.0d, tdigest.percentile(75), 0.0d); + assertEquals("5.0", tdigest.percentileAsString(75)); + assertEquals(3.0d, tdigest.percentile(71), 0.0d); + assertEquals("3.0", tdigest.percentileAsString(71)); assertEquals(2.0d, tdigest.percentile(50), 0.0d); assertEquals("2.0", tdigest.percentileAsString(50)); assertEquals(1.0d, tdigest.percentile(22), 0.0d); @@ -126,11 +128,11 @@ public void testSomeMatchesNumericDocValues() throws IOException { iw.addDocument(singleton(new NumericDocValuesField("number", 0))); }, tdigest -> { assertEquals(tdigest.state.size(), 7L); - assertEquals(tdigest.state.centroidCount(), 7L); + assertTrue(tdigest.state.centroidCount() <= 7L); assertEquals(8.0d, tdigest.percentile(100), 0.0d); assertEquals("8.0", tdigest.percentileAsString(100)); - assertEquals(6.98d, tdigest.percentile(88), 0.0d); - assertEquals("6.98", tdigest.percentileAsString(88)); + assertEquals(8.0d, tdigest.percentile(88), 0.0d); + assertEquals("8.0", tdigest.percentileAsString(88)); assertEquals(1.0d, tdigest.percentile(33), 0.0d); assertEquals("1.0", tdigest.percentileAsString(33)); assertEquals(1.0d, tdigest.percentile(25), 0.0d); @@ -157,7 +159,7 @@ public void testQueryFiltering() throws IOException { assertEquals(4L, tdigest.state.centroidCount()); assertEquals(2.0d, tdigest.percentile(100), 0.0d); assertEquals(1.0d, tdigest.percentile(50), 0.0d); - assertEquals(0.5d, tdigest.percentile(25), 0.0d); + assertEquals(1.0d, tdigest.percentile(25), 0.0d); assertTrue(AggregationInspectionHelper.hasValue(tdigest)); }); diff --git a/server/src/test/java/org/opensearch/snapshots/SnapshotResiliencyTests.java b/server/src/test/java/org/opensearch/snapshots/SnapshotResiliencyTests.java index 4a18415751718..68a6af25a7c82 100644 --- a/server/src/test/java/org/opensearch/snapshots/SnapshotResiliencyTests.java +++ b/server/src/test/java/org/opensearch/snapshots/SnapshotResiliencyTests.java @@ -100,7 +100,7 @@ import org.opensearch.action.support.PlainActionFuture; import org.opensearch.action.support.TransportAction; import org.opensearch.action.support.WriteRequest; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.action.update.UpdateHelper; import org.opensearch.client.AdminClient; import org.opensearch.client.node.NodeClient; diff --git a/test/framework/src/main/java/org/opensearch/snapshots/AbstractSnapshotIntegTestCase.java b/test/framework/src/main/java/org/opensearch/snapshots/AbstractSnapshotIntegTestCase.java index e3569b08ee617..3594bf9f53ca4 100644 --- a/test/framework/src/main/java/org/opensearch/snapshots/AbstractSnapshotIntegTestCase.java +++ b/test/framework/src/main/java/org/opensearch/snapshots/AbstractSnapshotIntegTestCase.java @@ -38,7 +38,7 @@ import org.opensearch.action.index.IndexRequestBuilder; import org.opensearch.action.search.SearchRequest; import org.opensearch.action.support.PlainActionFuture; -import 
org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ClusterStateObserver; import org.opensearch.cluster.ClusterStateUpdateTask; diff --git a/test/framework/src/main/java/org/opensearch/test/TestCluster.java b/test/framework/src/main/java/org/opensearch/test/TestCluster.java index 407d9cef1f63c..26081d947431d 100644 --- a/test/framework/src/main/java/org/opensearch/test/TestCluster.java +++ b/test/framework/src/main/java/org/opensearch/test/TestCluster.java @@ -40,7 +40,7 @@ import org.opensearch.action.admin.indices.datastream.DeleteDataStreamAction; import org.opensearch.action.admin.indices.template.get.GetIndexTemplatesResponse; import org.opensearch.action.support.IndicesOptions; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.client.Client; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.metadata.IndexTemplateMetadata; diff --git a/test/framework/src/main/java/org/opensearch/test/hamcrest/OpenSearchAssertions.java b/test/framework/src/main/java/org/opensearch/test/hamcrest/OpenSearchAssertions.java index 96edfdb40e531..16d44d1f8eeb4 100644 --- a/test/framework/src/main/java/org/opensearch/test/hamcrest/OpenSearchAssertions.java +++ b/test/framework/src/main/java/org/opensearch/test/hamcrest/OpenSearchAssertions.java @@ -51,8 +51,8 @@ import org.opensearch.action.search.ShardSearchFailure; import org.opensearch.action.support.DefaultShardOperationFailedException; import org.opensearch.action.support.broadcast.BroadcastResponse; -import org.opensearch.action.support.clustermanager.AcknowledgedRequestBuilder; -import org.opensearch.action.support.clustermanager.AcknowledgedResponse; +import org.opensearch.action.support.master.AcknowledgedRequestBuilder; +import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.block.ClusterBlock; import org.opensearch.cluster.block.ClusterBlockException; import org.opensearch.cluster.metadata.IndexMetadata;
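Note on the relaxed TDigest assertions earlier in this patch (InternalTDigestPercentilesRanksTests, InternalTDigestPercentilesTests, TDigestPercentilesAggregatorTests): a t-digest may merge equal or nearby samples into a shared centroid, so only the upper bound centroidCount() <= number of added samples is guaranteed. The following standalone sketch illustrates that bound; it is not part of the patch and assumes only the TDigestState API already exercised by those tests (a compression-only constructor, add(double), centroidCount()), with the package name taken from the test locations touched above.

import org.opensearch.search.aggregations.metrics.TDigestState;

public class TDigestCentroidBoundSketch {
    public static void main(String[] args) {
        final double[] samples = { 0, 1, 1, 2, 3, 5, 8 };
        final TDigestState state = new TDigestState(100); // same compression value as the tests
        for (double sample : samples) {
            state.add(sample);
        }
        // Duplicate and nearby samples can be collapsed into shared centroids, which is why the
        // tests now assert centroidCount() <= values.length instead of strict equality.
        if (state.centroidCount() > samples.length) {
            throw new AssertionError("centroid count must not exceed the number of samples");
        }
        System.out.println(state.centroidCount() + " centroids for " + samples.length + " samples");
    }
}

This is the same bound the updated assertTrue(state.centroidCount() <= values.length) checks encode, and it also explains why the exact percentile values asserted in TDigestPercentilesAggregatorTests shifted.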