[torchlib] Add missing ops (im2col) #1757
Possible to use Slice, which is faster?
I think we can use Slice; however, the indices would need to be transformed into starts/ends format, adding extra Reshape and Split nodes.
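For reference, a minimal numpy sketch (toy shapes, not the merged implementation) of the equivalence behind this suggestion: gathering a contiguous run of indices along an axis is the same as slicing, so a Gather whose indices form a step-1 range could be replaced by a Slice with the corresponding starts/ends:

```python
import numpy as np

# Toy input: one 4x4 channel.
x = np.arange(16, dtype=np.float32).reshape(4, 4)

# Gather-based extraction: explicit row indices for a 2-row window
# starting at row 1 (the kind of index tensor the current impl builds).
indices = np.array([1, 2])
patch_gather = np.take(x, indices, axis=0)

# Slice-based equivalent: the contiguous run [1, 2] collapses to
# starts=1, ends=3 on the same axis.
patch_slice = x[1:3, :]

assert np.array_equal(patch_gather, patch_slice)
```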
I see. Then this lgtm. Thanks for explaining!
But the extra operations can be done at export time, is that correct? That is, they depend only on export-time values (torch parameters == onnx attributes), not on run-time values. If so, there is no need to encode them using onnx ops, since it can all be done in Python. In other words, using Slice should be doable in trace mode without any extra cost?
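A rough numpy sketch of that idea (hypothetical helper, no-padding case, not the merged implementation): every slice bound below is a plain Python int derived from kernel_size/stride/dilation, so a traced ONNX export could bake them in as constants and emit only the Slice and Concat/Reshape nodes:

```python
import numpy as np

def im2col_slices(x, kernel_size, stride=(1, 1), dilation=(1, 1)):
    # All bounds here are export-time Python ints; a traced ONNX
    # version would emit one static Slice per (i, j) kernel offset.
    n, c, h, w = x.shape
    kh, kw = kernel_size
    out_h = (h - dilation[0] * (kh - 1) - 1) // stride[0] + 1
    out_w = (w - dilation[1] * (kw - 1) - 1) // stride[1] + 1
    blocks = []
    for i in range(kh):
        for j in range(kw):
            h0, w0 = i * dilation[0], j * dilation[1]
            h1 = h0 + (out_h - 1) * stride[0] + 1
            w1 = w0 + (out_w - 1) * stride[1] + 1
            blocks.append(x[:, :, h0:h1:stride[0], w0:w1:stride[1]])
    # (N, C*kh*kw, out_h*out_w), matching torch.nn.functional.unfold.
    return np.stack(blocks, axis=2).reshape(n, c * kh * kw, out_h * out_w)
```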
I think this is worth thinking through.
But what if the entire model consists of a single op-function call? I wasn't necessarily looking for something visual; just knowing that impl1 takes X time and impl2 takes Y time would be fine. The starting point would be a test case for an op like im2col: we run its onnxscript impl, exported to ORT, as a model.
I wonder how well a tiny benchmark correlates with the e2e performance? Hopefully closely?
Good question. We will need to avoid overheads (like copying tensors, e.g. due to conversion), not count session creation (which should be easy), and maybe even warm up. Should be doable.
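Something like this minimal sketch, perhaps (model path and shapes hypothetical): create the session once, reuse the same input feed to avoid per-iteration conversion overhead, warm up, and time only the runs:

```python
import time

import numpy as np
import onnxruntime as ort

# "im2col.onnx": a single-op model exported from the impl under test.
sess = ort.InferenceSession("im2col.onnx")  # session creation not timed
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
feeds = {sess.get_inputs()[0].name: x}

for _ in range(10):               # warm-up runs, excluded from timing
    sess.run(None, feeds)

n_iters = 100
start = time.perf_counter()
for _ in range(n_iters):
    sess.run(None, feeds)
mean_ms = (time.perf_counter() - start) / n_iters * 1e3
print(f"mean latency: {mean_ms:.3f} ms")
```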
Hi @shubhambhokare1: I see this has been merged. I am concerned that the strategy used here might not be good, for the reasons discussed above. Any thoughts on that? Thanks!
Hi @gramalingam,
Agreed with the point about the case with a large number of elements: creating these indices and using Gather might be inefficient. I must have missed this comment thread pre-merge. Slice might be a better option.
I will add a PR on top of this to remedy it, replacing the Gather ops with Slice; models using im2col should be unblocked for now.
Regarding the second point, it might be a good idea to create a single-op-based evaluator for kernel performance. I will experiment and add that as part of the new PR.