[Mobile] iOS - ZipMap output cannot be read #22505
Comments
I was not able to access the model from the provided link. I believe the ZipMap op outputs a map/dictionary, not a float.
Do you need the ZipMap operator in the model? It may be simpler to return the output from the LinearClassifier directly.
You can try `wget https://yella.co.in/cvd-samples/classifier.onnx`; it worked for me.
The Objective-C API doesn't support non-tensor ORTValue types at the moment, and ZipMap does not output a tensor. A workaround for now is to avoid using non-tensor types in your model, or to use the C/C++ API. For the former, directly using the tensor output of LinearClassifier should work.
Thanks for clarifying (my tests back these conclusions). [code output during prediction and during mlflow logging elided] ZipMaps appear "natural" when logging scikit-learn classifiers (mapping "0" to some probability, "1", etc.). If you know of a mechanism to, say, force log_model to output an array instead, please let me know. I will update this thread with my findings.
One workaround that worked for me was:

```python
def convert_output_to_float_tensor(onnx_model_path, new_model_path):
    ...  # (function body elided in the original comment)

# Define the paths for the original and new models
original_model_path = "input.onnx"

# Run the function to modify the model and convert the output type
convert_output_to_float_tensor(original_model_path, new_model_path)
```
There is an option to avoid the ZipMap during sklearn export to ONNX: https://onnx.ai/sklearn-onnx/auto_examples/plot_convert_zipmap.html
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Describe the issue
I use the objc bindings for onnxruntime in my iOS app. I load my ML classifier (an ONNX model), supply it inputs, and get valid responses.
The code looks like this:

```swift
// Create ORTValue for input data
let inputTensor = try createORTValueFromEmbeddings(inputData)
```
I am able to process/read "output_label" (a 0 or 1 value) just fine as an Int64. However, "output_probability", which is a float, I simply cannot read using similar steps. Note that it is produced by a ZipMap, while "output_label" is not.
Any suggestions?
To reproduce
You can use this ONNX model:
https://yella.co.in/cvd-samples/classifier.onnx
And use the objC onnxruntime sample to load it & evaluate it to get the outputs (like above).
Urgency
It's a showstopper for me because without output probabilities the model is of little use.
Platform
iOS
OS Version
17.6.1
ONNX Runtime Installation
Released Package
Compiler Version (if 'Built from Source')
No response
Package Name (if 'Released Package')
onnxruntime-mobile
ONNX Runtime Version or Commit ID
1.19.0
ONNX Runtime API
Objective-C/Swift
Architecture
ARM64
Execution Provider
Default CPU
Execution Provider Library Version
No response