Fix a race condition between ShardConsumer shutdown and initialization #1319
When Kinesis shards have no data, a race condition can occur where shard-end record processing on the RecordProcessorThread interleaves with the Scheduler performing initialization. This leads the ShardConsumer to make an incorrect state transition during initialization (PROCESSING -> SHUTTING_DOWN), and during shutdown handling it then moves from SHUTTING_DOWN -> SHUTDOWN_COMPLETE without running the ShutdownTask.
This can prevent the ShardConsumer from performing the shutdown processing required to unblock child shard processing. The child shard could then be blocked forever, unless the lease for the parent shard moves to a new worker and that worker does not hit the same race condition.
This patch fixes the race condition as follows:
The initializationComplete invocation is not needed after needsInitialization has been set to false. initializationComplete is meant to perform initialization asynchronously; once initialization is done, the async task is a no-op on the happy path, but it can perform an incorrect state transition when the race condition occurs.
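The guard described above can be sketched roughly as follows. This is a simplified, hypothetical illustration, not the actual KCL code: the class, field, and method names (needsInitialization, initializationComplete, executeLifecycle) mirror the description above but are stand-ins for the real ShardConsumer internals.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified sketch of the fixed initialization path: once
// needsInitialization flips to false, initializationComplete() is
// never invoked again, so no stale async initialization task can
// interleave with shard-end processing and mis-transition the state.
class ShardConsumerSketch {
    private final AtomicBoolean needsInitialization = new AtomicBoolean(true);
    private int initializationCompleteCalls = 0;

    // Called repeatedly by the Scheduler's lifecycle loop.
    void executeLifecycle() {
        if (needsInitialization.compareAndSet(true, false)) {
            // First (and only) time through: drive async initialization.
            initializationComplete();
        }
        // Subsequent calls skip initializationComplete() entirely, so a
        // concurrent shard-end record from the RecordProcessorThread cannot
        // race with a redundant initialization task.
    }

    private void initializationComplete() {
        initializationCompleteCalls++;
    }

    int calls() {
        return initializationCompleteCalls;
    }
}
```

With this guard, repeated lifecycle invocations after initialization are harmless no-ops rather than potential sources of spurious PROCESSING -> SHUTTING_DOWN transitions.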
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Issue: #837