
[BUG] Reroute FetchData failures - on Replica shards in AsyncBatch mode #13705

Closed
rajiv-kv opened this issue May 16, 2024 · 2 comments

Labels: bug (Something isn't working), Cluster Manager

@rajiv-kv (Contributor)

Describe the bug

When the AsyncBatch operation is enabled, the following stack trace is observed when a node joins the cluster.

[2024-05-08T18:10:21,704][WARN ][o.o.g.ShardsBatchGatewayAllocator_BatchID=[ztJmWY8B9dGTeLuTDkHZ]] [0fff68926c1413329ebf9f05e729f1c4] failed to list shard for batch_shards_store on node [zG4jXPwrRRuluunltN8LCw]
FailedNodeException[total failure in fetching]; nested: NullPointerException[Cannot invoke "java.lang.Integer.intValue()" because the return value of "java.util.Map.get(Object)" is null];
        at org.opensearch.gateway.AsyncShardFetch$1.onFailure(AsyncShardFetch.java:269)
        at org.opensearch.action.support.TransportAction$1.onFailure(TransportAction.java:122)
        at org.opensearch.action.ActionRunnable.onFailure(ActionRunnable.java:104)
        at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:54)
        at org.opensearch.common.util.concurrent.OpenSearchExecutors$DirectExecutorService.execute(OpenSearchExecutors.java:412)
        at org.opensearch.action.support.nodes.TransportNodesAction$AsyncAction.finishHim(TransportNodesAction.java:315)
        at org.opensearch.action.support.nodes.TransportNodesAction$AsyncAction.onOperation(TransportNodesAction.java:300)
        at org.opensearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleResponse(TransportNodesAction.java:277)
        at org.opensearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleResponse(TransportNodesAction.java:269)
        at org.opensearch.transport.TransportService$9.handleResponse(TransportService.java:1723)
        at org.opensearch.security.transport.SecurityInterceptor$RestoringTransportResponseHandler.handleResponse(SecurityInterceptor.java:424)
        at org.opensearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1505)
        at org.opensearch.transport.InboundHandler.doHandleResponse(InboundHandler.java:420)
        at org.opensearch.transport.InboundHandler.handleResponse(InboundHandler.java:412)
        at org.opensearch.transport.InboundHandler.messageReceived(InboundHandler.java:172)
        at org.opensearch.transport.InboundHandler.inboundMessage(InboundHandler.java:127)
        at org.opensearch.transport.TcpTransport.inboundMessage(TcpTransport.java:770)
        at org.opensearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:175)
        at org.opensearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:150)
        at org.opensearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:115)
        at org.opensearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:95)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
        at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1475)
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1338)
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1387)
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.lang.NullPointerException: Cannot invoke "java.lang.Integer.intValue()" because the return value of "java.util.Map.get(Object)" is null
        at org.opensearch.gateway.AsyncShardBatchFetch$ShardBatchCache$NodeEntry.fillShardData(AsyncShardBatchFetch.java:233)
        at org.opensearch.gateway.AsyncShardBatchFetch$ShardBatchCache$NodeEntry.doneFetching(AsyncShardBatchFetch.java:211)
        at org.opensearch.gateway.AsyncShardBatchFetch$ShardBatchCache.putData(AsyncShardBatchFetch.java:158)
        at org.opensearch.gateway.AsyncShardFetchCache.processResponses(AsyncShardFetchCache.java:171)
        at org.opensearch.gateway.AsyncShardFetch.processAsyncFetch(AsyncShardFetch.java:229)
        at org.opensearch.gateway.AsyncShardFetch$1.onResponse(AsyncShardFetch.java:262)
        at org.opensearch.gateway.AsyncShardFetch$1.onResponse(AsyncShardFetch.java:259)
        at org.opensearch.action.support.TransportAction$1.onResponse(TransportAction.java:113)

The error is not consistently reproducible and occurs at random. The cluster eventually recovers to green since the reroute operation is retried on failure.
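
The NullPointerException message in the trace is the one the JVM raises when a null Integer returned by Map.get() is auto-unboxed to a primitive int. Below is a minimal, self-contained Java sketch of that pattern; it is not the OpenSearch code, and the class name and map keys are illustrative only.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch of the unboxing pattern behind the NPE above: Map.get()
    // returns null for a key that is missing (or was removed), and assigning
    // the result to a primitive int calls Integer.intValue() on null.
    public class UnboxingNpeSketch {
        public static void main(String[] args) {
            Map<String, Integer> shardToArrayIndex = new HashMap<>();
            shardToArrayIndex.put("[some-index][0]", 0);

            // "[some-index][1]" is not in the map, so get() returns null and the
            // implicit unboxing throws:
            //   NullPointerException: Cannot invoke "java.lang.Integer.intValue()"
            //   because the return value of "java.util.Map.get(Object)" is null
            int arrayIndex = shardToArrayIndex.get("[some-index][1]");
            System.out.println(arrayIndex);
        }
    }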

Related component

Cluster Manager

To Reproduce

  1. Enable async batch mode: cluster.allocator.existing_shards_allocator.batch_enabled: true
  2. Set the batch size to a low value: cluster.allocator.gateway.batch_size: 10 (both settings are combined in the config sketch after this list)
  3. Create an index with 4 primary shards and 50 replicas
  4. Restart the data nodes one at a time in a rolling fashion
  5. Observe the error stack trace in the cluster manager logs
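
The two settings from steps 1 and 2, combined in opensearch.yml form (values copied verbatim from the steps above; whether they can also be applied dynamically through the cluster settings API is not verified here):

    # opensearch.yml - reproduction settings from steps 1 and 2
    cluster.allocator.existing_shards_allocator.batch_enabled: true
    cluster.allocator.gateway.batch_size: 10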

Expected behavior

The reroute operation should not fail, and each replica shard should be assigned to only one fetch batch.


@rajiv-kv (Contributor, Author)

Based on additional logs from the impacted cluster, it is evident that the same ShardId is present across multiple batches.

The following shards are present in two batches:

[.opensearch-sap-log-types-config][0]
[.opensearch-sap-log-types-config][1]
[.opensearch-sap-log-types-config][2]

Batches

BatchID hFARfI8BSntRPPWZovof
BatchID hVARfI8BSntRPPWZovof

[2024-05-15T11:44:41,503][INFO ][o.o.g.ShardsBatchGatewayAllocator] [70707e330e7c81261e27798d234bfd35] Dump batchIdToStoreShardBatch
[2024-05-15T11:44:41,503][INFO ][o.o.g.ShardsBatchGatewayAllocator] [70707e330e7c81261e27798d234bfd35] Key hFARfI8BSntRPPWZovof , value [[.opensearch-sap-log-types-config][2], [.opensearch-sap-log-types-config][1], [.opensearch-sap-log-types-config][0]], BatchID hFARfI8BSntRPPWZovof
[2024-05-15T11:44:41,503][INFO ][o.o.g.ShardsBatchGatewayAllocator] [70707e330e7c81261e27798d234bfd35] Key cFARfI8BSntRPPWZNPpq , value [[.kibana_1][0]], BatchID cFARfI8BSntRPPWZNPpq
[2024-05-15T11:44:41,503][INFO ][o.o.g.ShardsBatchGatewayAllocator] [70707e330e7c81261e27798d234bfd35] Key dlARfI8BSntRPPWZSvqM , value [[.plugins-ml-config][0]], BatchID dlARfI8BSntRPPWZSvqM
[2024-05-15T11:44:41,503][INFO ][o.o.g.ShardsBatchGatewayAllocator] [70707e330e7c81261e27798d234bfd35] Key flARfI8BSntRPPWZdvq8 , value [[.opensearch-observability][0]], BatchID flARfI8BSntRPPWZdvq8
[2024-05-15T11:44:41,503][INFO ][o.o.g.ShardsBatchGatewayAllocator] [70707e330e7c81261e27798d234bfd35] Key gVARfI8BSntRPPWZkPoI , value [[.opensearch-sap-log-types-config][3], [.opensearch-sap-log-types-config][4]], BatchID gVARfI8BSntRPPWZkPoI
[2024-05-15T11:44:41,503][INFO ][o.o.g.ShardsBatchGatewayAllocator] [70707e330e7c81261e27798d234bfd35] Key glARfI8BSntRPPWZlfoZ , value [[.opendistro_security][0]], BatchID glARfI8BSntRPPWZlfoZ
[2024-05-15T11:44:41,503][INFO ][o.o.g.ShardsBatchGatewayAllocator] [70707e330e7c81261e27798d234bfd35] Key hVARfI8BSntRPPWZovof , value [[.opensearch-sap-log-types-config][2], [.opensearch-sap-log-types-config][1], [.opensearch-sap-log-types-config][0]], BatchID hVARfI8BSntRPPWZovof
[2024-05-15T11:44:41,503][INFO ][o.o.g.ShardsBatchGatewayAllocator] [70707e330e7c81261e27798d234bfd35] Dump batchIdToStoreShardBatch End

Hence, when shard [.opensearch-sap-log-types-config][0] is started, it is removed from a batch that still has a fetch in progress, resulting in the NPE (see the sketch after the stack trace below).

[2024-05-15T11:44:54,829][WARN ][o.o.g.ShardsBatchGatewayAllocator_BatchID=[hFARfI8BSntRPPWZovof]] [70707e330e7c81261e27798d234bfd35] failed to list shard for batch_shards_store on node [fgcBL6Z2Ta234yTrbb6WsQ]
FailedNodeException[total failure in fetching]; nested: NullPointerException[Cannot invoke "java.lang.Integer.intValue()" because the return value of "java.util.Map.get(Object)" is null];
        at org.opensearch.gateway.AsyncShardFetch$1.onFailure(AsyncShardFetch.java:269)
        at org.opensearch.action.support.TransportAction$1.onFailure(TransportAction.java:122)
        at org.opensearch.action.ActionRunnable.onFailure(ActionRunnable.java:104)
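
A hypothetical sketch of the sequence described above follows; the class, field, and method names are illustrative, not the actual AsyncShardBatchFetch implementation. The batch cache maps each shard to its slot in a per-node results array; if a shard is removed from the batch while a fetch for that batch is still in flight, the response handler's lookup returns null and the int unboxing throws the NPE seen in the trace.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical illustration of the race: a shard removed from the batch
    // while a fetch is in flight leaves no entry in the index map, so the
    // response handler's lookup unboxes null and throws the NPE.
    public class BatchFetchRaceSketch {
        // shardId -> position of that shard's result in a per-node results array
        private final Map<String, Integer> shardIdToArrayIndex = new ConcurrentHashMap<>();

        void addShardToBatch(String shardId, int arrayIndex) {
            shardIdToArrayIndex.put(shardId, arrayIndex);
        }

        // Invoked when the shard is started and no longer needs fetching;
        // if the same shard also lives in another batch, this can run while
        // that other batch's fetch is still in flight.
        void removeShardFromBatch(String shardId) {
            shardIdToArrayIndex.remove(shardId);
        }

        // Invoked when a node's fetch response arrives; the response still
        // covers every shard the request was built for, including any shard
        // removed in the meantime.
        void onFetchResponse(Iterable<String> shardIdsInResponse) {
            for (String shardId : shardIdsInResponse) {
                int arrayIndex = shardIdToArrayIndex.get(shardId); // unboxes null -> NPE
                // ... copy the shard's data into the results array at arrayIndex ...
                System.out.println(shardId + " -> " + arrayIndex);
            }
        }
    }

Ensuring each shard is assigned to exactly one batch, as stated under Expected behavior, closes the window in which this lookup can return null.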

@rajiv-kv (Contributor, Author)

Fixed in #13710.

@rajiv-kv rajiv-kv moved this from 🆕 New to ✅ Done in Cluster Manager Project Board May 24, 2024
@rajiv-kv rajiv-kv closed this as completed Jun 6, 2024