[FEATURE] Allow downloading ML models from Cloud providers buckets #1371
Hi @IanMenendez, thanks, this is a good suggestion. Actually, ml-commons supports uploading a model from a local file now. You can refer to https://opensearch-project.github.io/opensearch-py-ml/examples/demo_deploy_cliptextmodel.html and https://opensearch-project.github.io/opensearch-py-ml/examples/demo_transformer_model_train_save_upload_to_openSearch.html#Step-5:-Upload-the-model-to-OpenSearch-cluster
@ylwu-amzn I am aware that uploading models from a local file is supported, but by going down this path we are sacrificing simplicity as:
Makes sense. Agreed that supporting ML models from cloud providers is a good feature. Will add this to the dev plan. But we'd appreciate it if you'd prefer to contribute.
@ylwu-amzn I would like to contribute to this feature as part of the OSCI-2023 program.
@ylwu-amzn can we assign this one to @rishabh1815769? @rishabh1815769 Have you made any progress on this?
Thanks @austintlee. Sure, assigned to @rishabh1815769. One suggestion: share your design on this issue first so others can help review.
Hi experts, is there any update on this feature? The current project I'm working on needs it to allow customers to upload their models from OCI (Oracle Cloud Infrastructure) Object Storage.
@rishabh1815769 any update? @khoaisohd Thanks, I think you can post your design to this issue first, so the community can discuss it together.
Problem Statement
Today, customers can upload models to OpenSearch using public URLs or local file systems. There is no solution for uploading ML models from OCI Object Storage buckets (or other cloud providers) to OpenSearch. The only way is to make bucket objects public and use the public URL, thereby exposing ML models to anyone. This makes the process of uploading a model quite tedious, as we need to upload the model in chunks from our local machine because we do not want to expose ML models to the public internet. To address those pain points, we need to introduce a solution for uploading ML models to OpenSearch from cloud providers.
Scope
Support uploading models from cloud providers.
Assumption
The OpenSearch clusters are running on compute instances that have connectivity and permission to download model files from the cloud provider's storage service. For example, on OCI, instance principal tokens and resource principal tokens are already provisioned on the OpenSearch cluster nodes.
Proposal
To allow customers to upload models from cloud providers to OpenSearch, we need to extend the model input so that customers can provide information about the cloud provider and the model file location. Furthermore, cloud provider Java SDK dependencies (OCI SDK, AWS SDK) are added to the ml-commons plugin to allow it to download files from the cloud provider's storage service. When the customer uploads a model from a cloud provider, here is what the ml-commons plugin does:
Model additional attributes
Depending on the cloud provider, we might need to provide additional settings.
OCI object storage additional settings
Sample for resource principal
Sample for instance principal
Sample for user principal
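The samples above did not survive the page scrape; an illustrative sketch of what such a model input might look like is below. All field names under model_cloud_provider are hypothetical, not the actual ml-commons schema:

```json
{
  "name": "my-text-embedding-model",
  "version": "1.0.0",
  "model_format": "TORCH_SCRIPT",
  "model_cloud_provider": {
    "provider": "OCI",
    "auth_type": "RESOURCE_PRINCIPAL",
    "region": "us-phoenix-1",
    "namespace": "my-namespace",
    "bucket": "my-model-bucket",
    "object": "model.zip"
  }
}
```

For instance principal or user principal auth, only the auth_type (and, for user principal, credential references) would differ; the object location fields stay the same.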
I haven't done a POC for S3 to validate the design; however, we can propose something similar to the following, based on what we have in the notification plugin.
S3 additional settings
Sample for S3 model input
@khoaisohd This implementation looks good to me, and it is exactly how we implemented this in our internal systems. Just as a note:
Some SDKs (GCloud, for example; no idea about others) can read objects in chunks. So you can upload models in chunks without even downloading the whole model. Essentially: read a chunk into a buffer from the cloud provider -> upload it to OpenSearch.
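The streaming idea above can be sketched as follows. This is a minimal illustration, not ml-commons code: stream_chunks and upload_model_by_chunks are hypothetical names, and the 10 MB default is an assumption about a typical chunk size.

```python
import io

CHUNK_SIZE = 10 * 1024 * 1024  # assumed chunk size; not the ml-commons constant

def stream_chunks(source, chunk_size=CHUNK_SIZE):
    """Yield fixed-size chunks from a readable binary stream without
    ever materializing the whole object on local disk."""
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        yield chunk

def upload_model_by_chunks(source, upload_chunk, chunk_size=CHUNK_SIZE):
    """Read from a cloud-provider stream and hand each chunk to an
    uploader callback (e.g. something wrapping a chunk-upload API).
    Returns the total number of bytes transferred."""
    total = 0
    for index, chunk in enumerate(stream_chunks(source, chunk_size)):
        upload_chunk(index, chunk)  # hypothetical uploader callback
        total += len(chunk)
    return total
```

The key point is that memory usage is bounded by chunk_size regardless of model size, which is what makes the buffer-and-forward approach attractive.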
@IanMenendez Thank you for reviewing the design doc. Using the SDK to read objects in chunks and uploading them directly to OpenSearch is an awesome idea, and I think other providers support reading objects in chunks as well. Furthermore, that idea applies not only to uploading models from cloud providers but also to the public URL and file system paths. In the current implementation, the input stream (from a public URL or the file system) is copied into the output stream (for the target downloaded file) in chunks as well. However, for the scope of this feature, should we continue with the approach we use today, downloading the model to the local file system and then splitting it into small files, to avoid a major refactoring? After that, we can have a follow-up PR for the optimization you suggested, applied to all use cases: public URL, local file system, cloud providers, etc.
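For contrast with the streaming idea, the download-then-split approach described above can be sketched like this. It is a simplified illustration, not the actual ModelHelper.downloadAndSplit implementation; the function name and layout are assumptions.

```python
import os
import tempfile

def download_and_split(read_source, chunk_size):
    """Sketch of the current approach: copy the full model to a local
    temp file first, then split it into numbered chunk files.
    `read_source` is any readable binary stream (public URL, file
    system, or a cloud-provider SDK stream)."""
    workdir = tempfile.mkdtemp()
    model_path = os.path.join(workdir, "model.zip")
    # Step 1: download the whole model to local disk.
    with open(model_path, "wb") as dst:
        while True:
            buf = read_source.read(8192)
            if not buf:
                break
            dst.write(buf)
    # Step 2: split the downloaded file into chunk files (0, 1, 2, ...).
    chunk_paths = []
    with open(model_path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            path = os.path.join(workdir, str(index))
            with open(path, "wb") as out:
                out.write(chunk)
            chunk_paths.append(path)
            index += 1
    return chunk_paths
```

The trade-off is clear in the sketch: this version needs local disk space for a full copy of the model plus its chunks, which is exactly what the streaming proposal avoids.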
For S3, you might find this useful.
I would also prefer a streaming approach that does not leave a copy on local disk. |
@austintlee @IanMenendez @ylwu-amzn, we can use the streaming approach for cloud providers, and we can make it work for the current public URL and file system options as well, to avoid leaving a copy on local disk. I'll update the design to use the streaming approach. Furthermore, do we have any concerns about the proposed customer experience for this feature?
@ylwu-amzn @IanMenendez @rishabh1815769 what is the status of this? |
As far as I know, no one is working on a change. Please go ahead and create a PR.
@samuel-oci feel free to cut an RFC to discuss first.
Sounds good, @khoaisohd can you create an RFC with the proposed solution? I think you can also provide two options:
For anyone interested, the RFC is here:
Is your feature request related to a problem?
There is no solution to upload ML models from Google Cloud buckets (or other cloud providers) to OpenSearch. The only way is to make bucket objects public and use the public URL, therefore exposing ML models to anyone.
This makes the process of uploading a model quite tedious, as we need to upload the model in chunks from our local machine because we do not want to expose ML models to the public internet.
What solution would you like?
ModelHelper.downloadAndSplit currently uses ai.djl.training.util.DownloadUtils to download ML models. The solution would be to add (or check whether there is an already coded) helper function that supports gcloud (gs://bucket/model.zip) or AWS (s3://bucket/model.zip) type links.
What alternatives have you considered?
Making the bucket publicly accessible to anyone, or uploading the model from a local machine, which makes the process tedious.
Do you have any additional context?
No
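The gs:// / s3:// link support requested above would need a small dispatch step that picks the right SDK based on the URL scheme. A minimal sketch, with hypothetical names (parse_model_url and the scheme set are illustrations, not ml-commons code):

```python
from urllib.parse import urlparse

# Schemes a cloud-aware downloader might dispatch on (illustrative set).
SUPPORTED_SCHEMES = {"gs", "s3", "oci", "http", "https", "file"}

def parse_model_url(url):
    """Split a model URL into (scheme, bucket, object_key) so the
    caller can select the matching cloud SDK, or fall back to plain
    HTTP/file download for non-bucket URLs."""
    parsed = urlparse(url)
    if parsed.scheme not in SUPPORTED_SCHEMES:
        raise ValueError(f"unsupported scheme: {parsed.scheme}")
    if parsed.scheme in {"gs", "s3", "oci"}:
        # For bucket-style URLs, netloc is the bucket name and the
        # path (minus its leading slash) is the object key.
        return parsed.scheme, parsed.netloc, parsed.path.lstrip("/")
    return parsed.scheme, None, url
```

For example, parse_model_url("s3://bucket/model.zip") yields ("s3", "bucket", "model.zip"), which a downloader could route to the AWS SDK, while https:// URLs keep going through the existing DownloadUtils path.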