Is your feature request related to a problem?
The current implementation of the FlintIndexOpDrop class in the OpenSearch Spark project does not handle the case where an index is in the FAILED state. This state can occur when a Spark streaming job terminates with an exception, which transitions the index state to FAILED. In such cases, the current DROP index statement cannot clean up the failed index.
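The stuck state can be illustrated with a small sketch. The class, enum, and method names below are simplified stand-ins, not the actual opensearch-spark source:

```java
// Sketch of the index state machine and the current drop precondition.
// FlintIndexState and validateDrop are illustrative stand-ins.
public class DropPreconditionSketch {
    enum FlintIndexState { ACTIVE, REFRESHING, FAILED, DELETING, DELETED }

    // Current behavior (assumed): only healthy states pass the drop
    // precondition, so an index left in FAILED by a crashed streaming
    // job can never be dropped.
    static boolean validateDrop(FlintIndexState state) {
        return state == FlintIndexState.ACTIVE
            || state == FlintIndexState.REFRESHING;
    }

    public static void main(String[] args) {
        System.out.println(validateDrop(FlintIndexState.REFRESHING)); // true
        System.out.println(validateDrop(FlintIndexState.FAILED));     // false: DROP is rejected
    }
}
```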
What solution would you like?
The proposed solution is to add support for the FAILED precondition in the FlintIndexOpDrop class, specifically in the checkPreconditions method. This would allow the DROP index statement to successfully remove indices that are in the FAILED state.
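A minimal sketch of the proposed check, with FAILED added to the accepted states (names are illustrative, not the actual checkPreconditions signature):

```java
// Sketch of the proposed precondition: FAILED is now accepted, so DROP
// can clean up indices left behind by a crashed streaming job.
public class DropPreconditionWithFailed {
    enum FlintIndexState { ACTIVE, REFRESHING, FAILED, DELETING, DELETED }

    static boolean checkPreconditions(FlintIndexState state) {
        switch (state) {
            case ACTIVE:
            case REFRESHING:
            case FAILED:   // newly accepted state per this proposal
                return true;
            default:       // e.g. DELETING, DELETED
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(checkPreconditions(FlintIndexState.FAILED));  // true
        System.out.println(checkPreconditions(FlintIndexState.DELETED)); // false
    }
}
```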
What alternatives have you considered?
One alternative would be to manually delete the failed index through the OpenSearch API. However, this also requires deleting the index's metadata log entry from the internal request index.
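The bookkeeping this alternative would require can be modeled with two in-memory stores standing in for the OpenSearch index store and the internal request index (a real cleanup would use the OpenSearch delete-index and delete-document APIs; all names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of why manual cleanup is a two-step operation: deleting only
// the index would leave a dangling entry in the metadata log.
public class ManualCleanupSketch {
    // Stand-in for the OpenSearch indices themselves.
    static Map<String, String> indices = new HashMap<>();
    // Stand-in for the internal request index holding one metadata log
    // entry per Flint index.
    static Map<String, String> metadataLog = new HashMap<>();

    static void manualCleanup(String indexName) {
        indices.remove(indexName);     // step 1: delete the index
        metadataLog.remove(indexName); // step 2: delete its log entry
    }

    public static void main(String[] args) {
        indices.put("flint_example_index", "data");
        metadataLog.put("flint_example_index", "state=FAILED");
        manualCleanup("flint_example_index");
        System.out.println(indices.isEmpty() && metadataLog.isEmpty()); // prints "true"
    }
}
```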
Do you have any additional context?