[Feature Request] Request grid_sample 5D support 🌟 #21382
Comments
I completely agree with @juntaosun.
I completely agree with @cleardusk.
@liqunfu, is there a plan to add this support in the 1.20 release? If not, I suggest that other people who are interested in it continue from your work and submit a pull request. What do you think?
Agreed. Looking forward to the ONNX team supporting and optimizing the 4D/5D grid_sample op on GPU. Thanks!
I hope you can pay attention to it. More and more models are being used, but …
I added/updated the GridSample CPU implementation when the op was added/updated in ONNX, as part of the ONNX integration with ORT. The implementation was inherited from an existing contrib op. I do not see a quick way to improve its performance by dozens of times. Usually GridSample is preceded by an AffineGrid; in that case the two ops can be fused, and the implementation can be greatly improved. I wonder if this is the use case?
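To illustrate the fusion idea mentioned above: the grid produced by AffineGrid is just an affine function of the output pixel coordinates, so a fused kernel can compute each source coordinate on the fly instead of materializing the whole grid tensor first. Below is a minimal numpy sketch of that pattern for the 2D case with nearest-neighbor sampling and the align_corners=True convention; the helper names are hypothetical and this is not the ORT implementation.

```python
import numpy as np

def affine_grid_2d(theta, H, W):
    """Materialize a normalized (H, W, 2) sampling grid from a 2x3 affine
    matrix, analogous to what an AffineGrid op would produce."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3)
    return coords @ theta.T                                  # (H, W, 2)

def fused_sample_nearest(img, theta, H, W):
    """Fused AffineGrid + GridSample: derive each source coordinate directly
    from the output index, never allocating the intermediate grid."""
    out = np.zeros((H, W), dtype=img.dtype)
    h_in, w_in = img.shape
    for i in range(H):
        for j in range(W):
            yn = -1 + 2 * i / (H - 1)          # normalized output coords
            xn = -1 + 2 * j / (W - 1)
            x, y = theta @ np.array([xn, yn, 1.0])
            # un-normalize (align_corners=True) and sample nearest neighbor
            xi = int(round((x + 1) * (w_in - 1) / 2))
            yi = int(round((y + 1) * (h_in - 1) / 2))
            if 0 <= xi < w_in and 0 <= yi < h_in:
                out[i, j] = img[yi, xi]
    return out
```

With an identity affine matrix, the fused path reproduces the input image exactly, matching what the two separate ops would produce; the saving in the fused form is the skipped grid allocation and the single pass over the output.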
Describe the feature request
Many models now use grid_sample with 5D inputs, but ONNX export does not seem to support it yet.
It currently runs only on the CPU,
which makes inference very slow compared to the original torch.nn.functional.grid_sample.
Searching the issue tracker shows this has been raised many times in the past. As of 2024-07-17, the latest onnxruntime still does not support it.
In addition, I have seen an implementation in a branch:
7c0ae44
I hope it can be supported as soon as possible; I think it would help many developers.
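For clarity on what the requested op computes: 5D GridSample takes a volume of shape (N, C, D, H, W) and a grid of normalized (x, y, z) coordinates in [-1, 1], and resamples the volume at those locations. The numpy sketch below shows the semantics with nearest-neighbor sampling and align_corners=True; it is a simplified illustration (real implementations use trilinear interpolation and vectorized or GPU kernels, which is exactly why a CUDA path matters), not the onnxruntime code.

```python
import numpy as np

def grid_sample_5d_nearest(vol, grid):
    """Volumetric (5D) grid sampling, nearest-neighbor, align_corners=True.

    vol:  (N, C, D, H, W) input volume
    grid: (N, D_out, H_out, W_out, 3) normalized (x, y, z) coords in [-1, 1]
    returns: (N, C, D_out, H_out, W_out)
    """
    N, C, D, H, W = vol.shape
    _, Do, Ho, Wo, _ = grid.shape
    out = np.zeros((N, C, Do, Ho, Wo), dtype=vol.dtype)
    for n in range(N):
        for d in range(Do):
            for h in range(Ho):
                for w in range(Wo):
                    x, y, z = grid[n, d, h, w]
                    # un-normalize from [-1, 1] to voxel indices
                    xi = int(round((x + 1) * (W - 1) / 2))
                    yi = int(round((y + 1) * (H - 1) / 2))
                    zi = int(round((z + 1) * (D - 1) / 2))
                    # out-of-range coordinates are left at zero padding
                    if 0 <= xi < W and 0 <= yi < H and 0 <= zi < D:
                        out[n, :, d, h, w] = vol[n, :, zi, yi, xi]
    return out
```

The four nested Python loops make the cost of a naive scalar implementation obvious: every output voxel does its own coordinate transform and gather, which is the kind of work a CUDA kernel parallelizes trivially.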
Describe scenario use case
I believe that many people need it (on CUDA). Thank you for your efforts and excellent work. ❤️