
[Feature Request] Offline optimization for CoreML #20897

Open
Rikyf3 opened this issue Jun 2, 2024 · 0 comments
Labels: feature request (request for unsupported feature or enhancement), platform:mobile (issues related to ONNX Runtime mobile; typically submitted using template)

Comments


Rikyf3 commented Jun 2, 2024

Describe the feature request

Loading a model with CoreMLExecutionProvider takes a very long time, even minutes for big models. The onnxruntime logs show that it is saving .mlpackage files to a temp folder. Could the CoreML EP support loading a previously saved compiled model (offline optimization), as other EPs do?

Describe scenario use case

To speed up model loading when using the CoreML provider.
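As a sketch of what the requested behavior amounts to (a hypothetical helper, not the actual onnxruntime or CoreML EP API): key a cache by the model's content hash, reuse the previously compiled artifact when present, and only invoke the slow compilation step on a cache miss. The `compile_fn` parameter stands in for whatever produces the `.mlpackage`.

```python
import hashlib
import pathlib
import shutil


def load_or_compile(model_path: str, cache_dir: str, compile_fn):
    """Reuse a previously compiled artifact keyed by the model's content hash.

    Hypothetical sketch of offline-optimization caching; `compile_fn` stands in
    for the slow CoreML compilation step and must return the path of the
    compiled artifact it produced.
    """
    cache = pathlib.Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    # Hash the model bytes so a changed model never hits a stale cache entry.
    key = hashlib.sha256(pathlib.Path(model_path).read_bytes()).hexdigest()
    compiled = cache / f"{key}.mlpackage"
    if compiled.exists():
        return compiled  # cache hit: skip recompilation entirely
    fresh = pathlib.Path(compile_fn(model_path))
    shutil.move(str(fresh), str(compiled))  # persist for the next session
    return compiled
```

On the second and later loads of the same model bytes, `compile_fn` is never called, which is exactly the startup cost the issue asks to avoid.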

Rikyf3 added the feature request label on Jun 2, 2024
github-actions bot added the platform:mobile label on Jun 2, 2024