Fix and simplify testTargetThrottling (elastic#103397)
There seem to be two potential issues that can cause failure:

1. If the `INDICES_RECOVERY_MAX_BYTES_PER_SEC_SETTING` on nodeA is too small, we might choose a chunk size on nodeB that can cause throttling on nodeA.
2. At the end, when we remove the throttling limits, it seems possible that nodeB gets unthrottled first, which can then cause recovery throttling on nodeA.

Closes elastic#103204
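
The second point is easiest to see next to the fix itself. Below is an illustrative sketch, not the committed test code: the `startNode` call mirrors the diff further down, while the cleanup shape and the `updateClusterSettings` helper are assumptions about the surrounding `ESIntegTestCase` infrastructure.

```java
// Illustrative sketch only (not the committed test code). Assumes it runs inside
// the existing ESIntegTestCase-based test, with the usual InternalTestCluster helpers.

// Source node: started with a generous limit (the fix in this commit), so that neither
// the chunk size chosen for nodeB nor the post-test unthrottling of nodeB can push
// nodeA over its own indices.recovery.max_bytes_per_sec limit.
final String nodeA = internalCluster().startNode(
    Settings.builder()
        .put(RecoverySettings.INDICES_RECOVERY_MAX_BYTES_PER_SEC_SETTING.getKey(), "200mb")
);

// ... start nodeB with a deliberately low limit so throttling shows up on the target,
// run the recovery, and assert on the recovery statistics ...

// Cleanup (assumed shape): dropping the dynamic override lets each node fall back to its
// startup setting. Because nodeA was started at 200mb rather than the ~40mb default, the
// traffic burst after nodeB is unthrottled no longer triggers throttling on the source.
updateClusterSettings(
    Settings.builder().putNull(RecoverySettings.INDICES_RECOVERY_MAX_BYTES_PER_SEC_SETTING.getKey())
);
```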
pxsalehi authored Jan 4, 2024
1 parent 3ba017e commit 785a0bf
Showing 1 changed file with 5 additions and 2 deletions.
@@ -782,10 +782,13 @@ public Settings onNodeStopped(String nodeName) {
      * Tests shard recovery throttling on the target node. Node statistics should show throttling time on the target node, while no
      * throttling should be shown on the source node because the target will accept data more slowly than the source's throttling threshold.
      */
-    @AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/103204")
     public void testTargetThrottling() throws Exception {
         logger.info("--> starting node A with default settings");
-        final String nodeA = internalCluster().startNode();
+        final String nodeA = internalCluster().startNode(
+            Settings.builder()
+                // Use a high value so that when unthrottling recoveries we do not cause accidental throttling on the source node.
+                .put(RecoverySettings.INDICES_RECOVERY_MAX_BYTES_PER_SEC_SETTING.getKey(), "200mb")
+        );
 
         logger.info("--> creating index on node A");
         ByteSizeValue shardSize = createAndPopulateIndex(INDEX_NAME, 1, SHARD_COUNT_1, REPLICA_COUNT_0).getShards()[0].getStats()
