[BUG] Batch ingestion API bugs #2930
Another issue:

This way can work.
Yes, it's still using the TRAIN thread pool. The initial code didn't use this dedicated TRAIN thread, so exceptions were caught in the main thread and the ML tasks were updated to "Failed". After I added the TRAIN thread, exceptions are handled in the TRAIN thread, so they are no longer caught in the main thread; I forgot to move the exception handling from the main thread to the TRAIN thread. After the load tests, I will create a new thread pool just for ingestion.
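The failure mode described above can be sketched in plain `java.util.concurrent` terms (the pool, state names, and exception here are illustrative, not the actual ml-commons code): an exception thrown inside a submitted task never propagates to the caller thread, so the task-status update has to happen inside the worker's own catch block.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class TrainPoolSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative stand-in for a dedicated "TRAIN" thread pool.
        ExecutorService trainPool = Executors.newFixedThreadPool(1);
        AtomicReference<String> taskState = new AtomicReference<>("CREATED");

        trainPool.submit(() -> {
            try {
                taskState.set("RUNNING");
                // Simulated training failure.
                throw new IllegalStateException("simulated training failure");
            } catch (Exception e) {
                // Without this catch inside the worker, the exception dies in
                // the pool thread and the caller never sees it, so the task
                // would stay RUNNING/CREATED forever.
                taskState.set("FAILED");
            }
        });

        trainPool.shutdown();
        trainPool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(taskState.get()); // prints FAILED
    }
}
```

This is why moving the exception handling from the main thread into the dedicated thread matters: catch blocks left in the caller only fire for exceptions thrown before the work is handed off.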
The Bedrock batch inference job returns one format, but the code currently only parses another. Suggest changing this line

https://github.com/opensearch-project/ml-commons/blob/main/plugin/src/main/java/org/opensearch/ml/task/MLPredictTaskRunner.java#L367C44-L367C52

accordingly?
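As background on the format being parsed (to the best of my understanding; verify against the AWS docs): a Bedrock batch inference job writes one JSON record per line to a `.jsonl.out` file, with the model's result under a `modelOutput` key. A toy extractor, using naive string scanning instead of a real JSON library, just to illustrate which key a reader needs to look up:

```java
public class BatchOutputSketch {
    // Naive lookup of a top-level object value in one JSONL line.
    // Real code should use a proper JSON parser; this only illustrates
    // locating the "modelOutput" object.
    static String extractField(String jsonLine, String key) {
        int i = jsonLine.indexOf("\"" + key + "\"");
        if (i < 0) return null;
        int start = jsonLine.indexOf('{', i);
        if (start < 0) return null;
        int depth = 0;
        for (int j = start; j < jsonLine.length(); j++) {
            char c = jsonLine.charAt(j);
            if (c == '{') depth++;
            else if (c == '}' && --depth == 0) {
                return jsonLine.substring(start, j + 1);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Illustrative record shape, not real Bedrock output.
        String line = "{\"modelInput\":{\"inputText\":\"hi\"},"
                + "\"modelOutput\":{\"embedding\":[0.1,0.2]}}";
        System.out.println(extractField(line, "modelOutput"));
        // prints {"embedding":[0.1,0.2]}
    }
}
```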
Tested with OS 2.17 RC4, using sample data from `my_batch2.jsonl.out`. It returns task id `xHk64pEBG9EkCQDLzc-I`, but this task stays in the `CREATED` state forever. Checked the log; an error happens there. Removing the `source[0].` prefix from the `embedding` field map can make it work.

Suggestion: do we need the `source[0]` prefix even when we have only one source file?

Also, is the thread pool `opensearch_ml_train`? Can you confirm whether we have a dedicated thread pool?
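For illustration only (the exact field-map syntax used in the failing request is not shown in this thread), the workaround described above amounts to a change of this shape in the batch ingestion request body, where the path feeding the `embedding` field drops its `source[0].` prefix:

```json
{
  "field_map": {
    "embedding": "source[0].embedding"
  }
}
```

With the prefix removed, i.e. `"embedding": "embedding"`, the ingestion reportedly succeeds; the open question is whether the prefix should be required at all when there is only one source file.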