ai.onnxruntime.OrtException: Unsupported type - FLOAT16 #18926
Comments
It's noted in the PR that FP16 is currently unsupported for the methods which use arrays. I guess it could return a short array for the time being, but in the future, when Java has value types, it would hopefully return an fp16 type.
Ahh. So if you were already using this, then you were happy with it returning floats?
Thanks for replying. When I used the old runtime version, it returned float for float16. I will try this new API.
I've put up a PR to fix the problem - #18937. Note that the behaviour isn't identical to before, as the old fp16 conversion code handled denormals, infinities and NaNs very poorly (it returned incorrect fp32 values in those cases). I've added tests which compare the output against the expected result, so it should behave correctly now.
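For reference, a correct scalar fp16-to-fp32 widening, including the denormal, infinity and NaN cases mentioned above, can be sketched in plain Java as below. This is an illustrative sketch, not the onnxruntime implementation; on Java 20+ the built-in `Float.float16ToFloat(short)` performs the same conversion.

```java
public class Fp16 {
    /**
     * Widens an IEEE 754 binary16 value (passed as its raw bit pattern)
     * to a Java float, preserving signed zeros, denormals, infinities and NaNs.
     */
    public static float fp16ToFloat(short raw) {
        int bits = raw & 0xFFFF;
        int sign = (bits & 0x8000) << 16;   // move sign bit to bit 31
        int exp  = (bits >>> 10) & 0x1F;    // 5-bit exponent field
        int mant = bits & 0x3FF;            // 10-bit mantissa field
        if (exp == 0x1F) {
            // Infinity (mant == 0) or NaN (mant != 0): max out the fp32 exponent.
            return Float.intBitsToFloat(sign | 0x7F800000 | (mant << 13));
        }
        if (exp == 0) {
            if (mant == 0) {
                return Float.intBitsToFloat(sign); // +/- 0.0
            }
            // Subnormal: renormalise by shifting the mantissa up until the
            // leading bit (bit 10) is set, adjusting the exponent to match.
            while ((mant & 0x400) == 0) {
                mant <<= 1;
                exp--;
            }
            exp++;             // account for the shift into normal form
            mant &= 0x3FF;     // drop the now-implicit leading bit
        }
        // Rebias the exponent (fp16 bias 15 -> fp32 bias 127) and widen the mantissa.
        return Float.intBitsToFloat(sign | ((exp + 112) << 23) | (mant << 13));
    }

    public static void main(String[] args) {
        System.out.println(fp16ToFloat((short) 0x3C00)); // 1.0
        System.out.println(fp16ToFloat((short) 0x0001)); // smallest subnormal, 2^-24
        System.out.println(fp16ToFloat((short) 0x7C00)); // Infinity
    }
}
```

The subnormal branch is the part the old conversion code got wrong: subnormal fp16 values have no implicit leading bit, so they must be renormalised before the exponent is rebiased.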
After upgrading the onnxruntime version to 1.16.x, if the ONNX output type is float16, it throws the following exception:
ai.onnxruntime.OrtException: Unsupported type - FLOAT16
It seems the FLOAT16 case is missing here:
https://github.com/microsoft/onnxruntime/blob/main/java/src/main/java/ai/onnxruntime/TensorInfo.java#L311
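To illustrate the kind of gap being reported, here is a hypothetical, simplified element-type mapping with a FLOAT16 branch present. The class, enum and method names below are stand-ins, not the actual TensorInfo source; it only shows the shape of the fix, carrying fp16 values as their raw 16-bit patterns in a `short`.

```java
public class TypeMapping {
    // Illustrative stand-in for the ONNX element-type enum.
    enum ElementType { FLOAT, DOUBLE, INT64, FLOAT16 }

    // Maps an element type to the Java array component type used to hold it.
    // Without the FLOAT16 branch, fp16 tensors fall through to the
    // "Unsupported type" error seen in the issue.
    static Class<?> carrierType(ElementType t) {
        switch (t) {
            case FLOAT:   return float.class;
            case DOUBLE:  return double.class;
            case INT64:   return long.class;
            case FLOAT16: return short.class; // fp16 carried as raw 16-bit values
            default:
                throw new IllegalArgumentException("Unsupported type - " + t);
        }
    }
}
```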
Related PR: #16703