Fix flaky SegmentReplicationITs. #6015

Merged: 4 commits into opensearch-project:main on Jan 27, 2023

Conversation

@mch2 (Member) commented Jan 25, 2023

Signed-off-by: Marc Handalian [email protected]

Description

This change fixes flakiness with the segment replication ITs. It does this by updating the wait condition used to ensure replicas are up to date: instead of relying on output of the Segments API, which can change when there are concurrent refreshes, the tests now wait until the expected searched doc count is reached.
It also updates the method used to assert segment stats so that it waits until the assertion holds true rather than checking a single point in time. That method now asserts store metadata directly instead of API output; in doing so, I've moved the method used to compute store metadata onto IndexShard.
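For illustration, the new wait condition might look roughly like the sketch below. The helper name waitForSearchableDocs and its exact signature are assumptions for illustration only; the actual test utility in this PR may differ.

```java
// Sketch only: wait until each node reports the expected searchable doc count,
// rather than comparing Segments API output that can shift under concurrent refreshes.
// Assumes this lives in an OpenSearchIntegTestCase subclass, where assertBusy and
// client(nodeName) are provided by the test framework.
private void waitForSearchableDocs(String index, long expectedDocCount, List<String> nodes) throws Exception {
    assertBusy(() -> {
        for (String node : nodes) {
            final SearchResponse response = client(node).prepareSearch(index)
                .setSize(0)
                .setPreference("_only_local")
                .get();
            assertEquals(
                "Expected " + expectedDocCount + " searchable docs on node " + node,
                expectedDocCount,
                response.getHits().getTotalHits().value
            );
        }
    }, 1, TimeUnit.MINUTES);
}
```

A searched doc count only moves forward as replication catches up, so the assertion stays stable even while refreshes run concurrently.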

I've run this about 1k times on the test file locally and am not seeing any issues; I'll open this and run it a few times from CI.

Issues Resolved

#5669 - note that not all tests in that issue will be resolved with this change, only those with doc count mismatches, particularly testReplicationAfterPrimaryRefreshAndFlush.

Check List

  • New functionality includes testing.
    • All tests pass
  • New functionality has been documented.
    • New functionality has javadoc added
  • Commits are signed per the DCO using --signoff
  • Commit changes are listed out in CHANGELOG.md file (See: Changelog)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.


This change fixes flakiness with segment replication ITs.
It does this by updating the wait condition used to ensure replicas are up to date
to wait until a searched docCount is reached instead of output of the Segments API that can change
if there are concurrent refreshes.
It also does this by updating the method used to assert segment stats to wait until the assertion holds true rather
than at a point in time.  This method is also updated to assert store metadata directly over API output.

Signed-off-by: Marc Handalian <[email protected]>
@github-actions (Contributor):

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.indices.replication.SegmentReplicationIT.testDropPrimaryDuringReplication
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockWithAReadOnlyBlock
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockIsRemovedWhenAnyNodesNotExceedHighWatermark

@codecov-commenter commented Jan 26, 2023

Codecov Report

Merging #6015 (66f9042) into main (f9eb9bf) will increase coverage by 0.56%.
The diff coverage is 100.00%.


@@             Coverage Diff              @@
##               main    #6015      +/-   ##
============================================
+ Coverage     70.73%   71.30%   +0.56%     
- Complexity    58738    59132     +394     
============================================
  Files          4771     4771              
  Lines        280820   280818       -2     
  Branches      40568    40568              
============================================
+ Hits         198645   200243    +1598     
+ Misses        65865    64461    -1404     
+ Partials      16310    16114     -196     
Impacted Files Coverage Δ
...in/java/org/opensearch/index/shard/IndexShard.java 70.62% <100.00%> (+0.62%) ⬆️
.../indices/replication/SegmentReplicationTarget.java 92.92% <100.00%> (+1.66%) ⬆️
...n/java/org/opensearch/test/rest/yaml/Features.java 60.00% <0.00%> (-20.00%) ⬇️
...org/opensearch/index/query/ExistsQueryBuilder.java 74.19% <0.00%> (-19.36%) ⬇️
...min/indices/stats/TransportIndicesStatsAction.java 73.91% <0.00%> (-17.40%) ⬇️
...r/src/main/java/org/opensearch/http/HttpUtils.java 50.00% <0.00%> (-16.67%) ⬇️
.../node/tasks/cancel/TransportCancelTasksAction.java 66.66% <0.00%> (-16.67%) ⬇️
...ons/bucket/composite/PointsSortedDocsProducer.java 76.00% <0.00%> (-12.00%) ⬇️
...uster/repositories/get/GetRepositoriesRequest.java 33.33% <0.00%> (-11.12%) ⬇️
...n/java/org/opensearch/action/bulk/BulkRequest.java 74.38% <0.00%> (-10.75%) ⬇️
... and 486 more


@mch2 (Member, Author) commented Jan 26, 2023

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.indices.replication.SegmentReplicationIT.testDropPrimaryDuringReplication
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockWithAReadOnlyBlock
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockIsRemovedWhenAnyNodesNotExceedHighWatermark

org.opensearch.indices.replication.SegmentReplicationIT.testDropPrimaryDuringReplication grr.. looking at this one...

@github-actions (Contributor):

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.index.ShardIndexingPressureIT.testShardIndexingPressureTrackingDuringBulkWrites
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockWhenAllNodesExceededHighWatermark
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockIsRemovedWhenAnyNodesNotExceedHighWatermark

@github-actions (Contributor):

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockIsRemovedWhenAnyNodesNotExceedHighWatermark

@github-actions (Contributor):

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockWithAReadOnlyBlock
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockIsRemovedWhenAnyNodesNotExceedHighWatermark

@mch2 (Member, Author) commented Jan 26, 2023

The last few runs' failures are related to DiskThresholdDeciderIT; see #5956.

I've run the SR test testDropPrimaryDuringReplication 1,150 times overnight locally while running the entire test class and have not been able to reproduce this. Will try some more today. In the meantime I've improved the error message and am kicking off a few more check runs on this PR.

@github-actions (Contributor):

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.snapshots.DedicatedClusterSnapshotRestoreIT.testIndexDeletionDuringSnapshotCreationInQueue

-    final Store.RecoveryDiff diff = Store.segmentReplicationDiff(checkpointInfo.getMetadataMap(), getMetadataMap());
-    logger.trace("Replication diff {}", diff);
+    final Store.RecoveryDiff diff = Store.segmentReplicationDiff(checkpointInfo.getMetadataMap(), indexShard.getSegmentMetadataMap());
+    logger.trace("Replication diff for checkpoint {} {}", checkpointInfo.getCheckpoint(), diff);
Member:

Thanks for doing this.

Comment on lines 670 to 673
final ShardRouting replicaShardRouting = shardSegments.getShardRouting();
ClusterState state = client(internalCluster().getClusterManagerName()).admin().cluster().prepareState().get().getState();
final DiscoveryNode replicaNode = state.nodes().resolveNode(replicaShardRouting.currentNodeId());
return getIndexShard(replicaNode.getName());
Member:

How about:

    return getIndexShard(shardSegments.getShardRouting().currentNodeId());

Member:

This may not be needed based on #6015 (comment)

Comment on lines 621 to 645
assertBusy(() -> {
    final IndicesSegmentResponse indicesSegmentResponse = client().admin()
        .indices()
        .segments(new IndicesSegmentsRequest())
        .actionGet();
    List<ShardSegments[]> segmentsByIndex = getShardSegments(indicesSegmentResponse);

    // Fetch the IndexShard for this replica and try and build its SegmentInfos from the previous commit point.
    // This ensures the previous commit point is not wiped.
    final ShardRouting replicaShardRouting = shardSegment.getShardRouting();
    ClusterState state = client(internalCluster().getMasterName()).admin().cluster().prepareState().get().getState();
    final DiscoveryNode replicaNode = state.nodes().resolveNode(replicaShardRouting.currentNodeId());
    IndexShard indexShard = getIndexShard(replicaNode.getName());
    // calls to readCommit will fail if a valid commit point and all its segments are not in the store.
    indexShard.store().readLastCommittedSegmentsInfo();
    // There will be an entry in the list for each index.
    assertEquals("Expected a different number of shards in the index", numberOfShards, segmentsByIndex.size());
    for (ShardSegments[] replicationGroupSegments : segmentsByIndex) {
        // Separate Primary & replica shards ShardSegments.
        final Map<Boolean, List<ShardSegments>> segmentListMap = segmentsByShardType(replicationGroupSegments);
        final List<ShardSegments> primaryShardSegmentsList = segmentListMap.get(true);
        final List<ShardSegments> replicaShardSegmentsList = segmentListMap.get(false);
        assertEquals("There should only be one primary in the replicationGroup", 1, primaryShardSegmentsList.size());
        assertEquals(
            "There should be a ShardSegment entry for each replica in the replicationGroup",
            numberOfReplicas,
            replicaShardSegmentsList.size()
        );
        final ShardSegments primaryShardSegments = primaryShardSegmentsList.stream().findFirst().get();
        final IndexShard primaryShard = getIndexShard(primaryShardSegments);
        final Map<String, StoreFileMetadata> primarySegmentMetadata = primaryShard.getSegmentMetadataMap();
        for (ShardSegments replicaShardSegments : replicaShardSegmentsList) {
            final IndexShard replicaShard = getIndexShard(replicaShardSegments);
Member:

I don't see any usage of List<ShardSegments[]> segmentsByIndex other than finding the primary/replica set corresponding to an index and then verifying store content. I find this a little complicated and feel it can be simplified by iterating over the routing table. With this, the two methods below can be removed:

  1. private IndexShard getIndexShard(ShardSegments shardSegments)
  2. private Map<Boolean, List<ShardSegments>> segmentsByShardType(ShardSegments[] replicationGroupSegments)
for (IndexRoutingTable indexRoutingTable : clusterState.routingTable()) {
    for (IndexShardRoutingTable shardRoutingTable : indexRoutingTable) {
        final ShardRouting primaryRouting = shardRoutingTable.primaryShard();
        final String indexName = primaryRouting.getIndexName();
        final List<ShardRouting> replicaRouting = shardRoutingTable.replicaShards();
        final IndexShard primaryShard = getIndexShard(shardRoutingTable.primaryShard().currentNodeId(), indexName);
        for (ShardRouting replica : replicaRouting) {
            IndexShard replicaShard = getIndexShard(replica.currentNodeId(), indexName);
            // Compare store content
        }
    }
}

Member (Author):

You're right, there's really no reason to keep using the Segments API just for the node IDs. Will update.
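For illustration, the store-content comparison inside that loop could look roughly like the sketch below, building on the Store.segmentReplicationDiff and IndexShard store-metadata accessor discussed in this PR. The helper name is hypothetical and the final test code may differ.

```java
// Sketch only: diff primary vs. replica segment metadata and assert the replica's
// store has no missing or divergent files relative to the primary.
// Assumes Store.RecoveryDiff exposes its 'missing' and 'different' file lists.
private void assertReplicaStoreMatchesPrimary(IndexShard primaryShard, IndexShard replicaShard) throws IOException {
    final Store.RecoveryDiff diff = Store.segmentReplicationDiff(
        primaryShard.getSegmentMetadataMap(),
        replicaShard.getSegmentMetadataMap()
    );
    assertTrue("Replica is missing segment files: " + diff.missing, diff.missing.isEmpty());
    assertTrue("Replica store has divergent segment files: " + diff.different, diff.different.isEmpty());
}
```

Asserting on store metadata directly avoids depending on Segments API output, which was the source of the flakiness.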

}

public void testDropPrimaryDuringReplication() throws Exception {
    int replica_count = 6;
Member:

final ?

assertEquals(replicaSegment.getDeletedDocs(), primarySegment.getDeletedDocs());
assertEquals(replicaSegment.getSize(), primarySegment.getSize());
}
private void assertIdenticalSegments(int numberOfShards, int numberOfReplicas) throws Exception {
Member:

nit: assertIdenticalSegments -> verifyStoreContent echoes the method's definition better.

Signed-off-by: Marc Handalian <[email protected]>

@dreamer-89 (Member):

Precommit failure due to spotless check:

* What went wrong:
Execution failed for task ':server:spotlessJavaCheck'.
> The following files had format violations:
      src/internalClusterTest/java/org/opensearch/indices/replication/SegmentReplicationIT.java
          @@ -210,7 +210,7 @@
           ·····*·<p>
           ·····*·TODO:·Ignoring·this·test·as·its·flaky·and·needs·separate·fix
           ·····*/
          -·····@AwaitsFix(bugUrl·=·"https://github.com/opensearch-project/OpenSearch/issues/5669")
          +····@AwaitsFix(bugUrl·=·"https://github.com/opensearch-project/OpenSearch/issues/5669")
           ····public·void·testAddNewReplicaFailure()·throws·Exception·{
           ········logger.info("-->·starting·[Primary·Node]·...");
           ········final·String·primaryNode·=·internalCluster().startNode();
  Run './gradlew :server:spotlessApply' to fix these violations.

1066 actionable tasks: 1065 executed, 1 from cache

Signed-off-by: Marc Handalian <[email protected]>
@github-actions (Contributor):

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.indices.replication.SegmentReplicationIT.testAddNewReplicaFailure
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockWithAReadOnlyBlock
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockWhenAllNodesExceededHighWatermark

@github-actions (Contributor):

Gradle Check (Jenkins) Run Completed with:

@dreamer-89 (Member) left a comment:

LGTM! Thank you @mch2 for making this change.

@github-actions (Contributor):

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.index.IndexServiceTests.testAsyncTranslogTrimTaskOnClosedIndex
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockWithAReadOnlyBlock
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockWhenAllNodesExceededHighWatermark
      1 org.opensearch.cluster.allocation.AwarenessAllocationIT.testThreeZoneOneReplicaWithForceZoneValueAndLoadAwareness

@mch2 (Member, Author) commented Jan 27, 2023

  1 org.opensearch.indices.replication.SegmentReplicationIT.testAddNewReplicaFailure

This flaky failure occurred because I pushed an unintentional commit that unmuted this test. The test is not fixed as part of this PR and remains muted in subsequent commits.

@mch2 merged commit ade01ec into opensearch-project:main on Jan 27, 2023
@mch2 deleted the flakysr branch on January 27, 2023 at 03:46
@dreamer-89 added the "backport 2.x" (Backport to 2.x branch) label on Jan 27, 2023
opensearch-trigger-bot pushed a commit that referenced this pull request on Jan 27, 2023:
* Fix flaky SegmentReplicationITs.

This change fixes flakiness with segment replication ITs.
It does this by updating the wait condition used to ensure replicas are up to date
to wait until a searched docCount is reached instead of output of the Segments API that can change
if there are concurrent refreshes.
It also does this by updating the method used to assert segment stats to wait until the assertion holds true rather
than at a point in time.  This method is also updated to assert store metadata directly over API output.

Signed-off-by: Marc Handalian <[email protected]>

* Fix error message to print expected and actual doc counts.

Signed-off-by: Marc Handalian <[email protected]>

* PR feedback.

Signed-off-by: Marc Handalian <[email protected]>

* spotless.

Signed-off-by: Marc Handalian <[email protected]>

Signed-off-by: Marc Handalian <[email protected]>
(cherry picked from commit ade01ec)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
@dreamer-89 (Member):

Backporting this to 2.x; without it, all test changes here will cause conflicts on backports. Found this during the backport of #5898 (see #5898 (comment)).

dreamer-89 pushed a commit to dreamer-89/OpenSearch that referenced this pull request Jan 27, 2023
* Fix flaky SegmentReplicationITs.
Labels: backport 2.x (Backport to 2.x branch), skip-changelog

3 participants