Describe the bug
We have an OpenSearch 2.13.0 cluster that uses searchable snapshots. When we tried to exclude a few search nodes using the `cluster.routing.allocation.exclude._ip` setting, the shards got stuck in the relocation stage.
The cluster also had issues with ISM policies not being triggered, and any restore operations hung. Once the setting was removed, things returned to normal. Is this expected behaviour for search nodes? If so, what is the recommended way to scale the search nodes up and down?
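For reference, this is roughly how the exclusion was applied and later cleared (a minimal sketch; the IP addresses are placeholders):

```bash
# Exclude two search nodes from shard allocation (IPs are placeholders)
curl -XPUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.1,10.0.0.2"
  }
}'

# Clearing the setting (set it to null) is what brought the cluster back to normal
curl -XPUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": null
  }
}'
```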
Related component
Search:Searchable Snapshots
To Reproduce
1. Boot an OpenSearch 2.13.0 cluster with around 40 search nodes.
2. Index some data, then take a snapshot and restore it onto the search nodes (see the sketch after this list).
3. Ensure you have enough data so that each node holds more than 400 shards.
4. Exclude 10 search nodes via `cluster.routing.allocation.exclude._ip`.
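A hedged sketch of step 2, assuming a pre-registered repository named `snapshot-repo` and an index pattern `logs-*` (both hypothetical); `"storage_type": "remote_snapshot"` is the restore option that mounts the snapshot as a searchable snapshot onto the search nodes:

```bash
# Snapshot the indexes (repository name and index pattern are assumptions)
curl -XPUT "localhost:9200/_snapshot/snapshot-repo/snapshot-1?wait_for_completion=true" \
  -H 'Content-Type: application/json' -d'
{
  "indices": "logs-*"
}'

# Restore as a searchable snapshot so the shards land on the search nodes
curl -XPOST "localhost:9200/_snapshot/snapshot-repo/snapshot-1/_restore" \
  -H 'Content-Type: application/json' -d'
{
  "indices": "logs-*",
  "storage_type": "remote_snapshot"
}'
```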
Expected behavior
The nodes should have been excluded and their shards relocated, without any impact on ISM or other cluster activities.
Thanks for reporting @sivatarunp, we will try to reproduce on our end. Could you please provide the output of /_cat/shards and /_cat/recovery?active_only=true here when the relocation is stuck? Also, how many shards per index are you configuring? Thanks.
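For completeness, the requested diagnostics can be captured like this (a plain sketch of the two cat APIs mentioned above):

```bash
# Full shard listing, including RELOCATING shards and their source/target nodes
curl -XGET "localhost:9200/_cat/shards?v"

# Only recoveries currently in flight, to see where relocation is stalled
curl -XGET "localhost:9200/_cat/recovery?v&active_only=true"
```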