[torchlib] Implement missing operators uncovered by torch.onnx tests #1644
Labels: `contribution welcome` (We welcome code contributions for this), `topic: torch_lib` (Related to the torch/aten function lib in development)

Comments
justinchuby added the `topic: torch_lib` label on Jun 21, 2024
justinchuby added the `contribution welcome` label on Jun 22, 2024
58 tasks
#1750
shubhambhokare1 added a commit that referenced this issue on Jul 17, 2024
Implement missing operators uncovered by torch.onnx tests as per #1644

- [x] Implement `<OpOverload(op='aten.fmod', overload='Scalar')>`
- [x] Implement `<OpOverload(op='aten.fmod', overload='Tensor')>`
- [x] Implement `<OpOverload(op='aten.glu', overload='default')>` @shubhambhokare1
- [x] Implement `<OpOverload(op='aten.le', overload='Scalar')>`
- [x] Implement `<OpOverload(op='aten.lerp', overload='Scalar')>`
- [x] Implement `<OpOverload(op='aten.linalg_cross', overload='default')>`
- [x] Implement `<OpOverload(op='aten.mv', overload='default')>`
- [x] Implement `<OpOverload(op='aten.pow', overload='Scalar')>`
- [x] Implement `<OpOverload(op='aten.remainder', overload='Scalar')>`
- [x] Implement `<OpOverload(op='aten.remainder', overload='Tensor')>`
- [x] Implement `<OpOverload(op='aten.silu', overload='default')>`
- [x] Implement `<OpOverload(op='aten.unsafe_split', overload='Tensor')>`

[**NOT PART OF THIS PR**] Requires adding implementation functions in torchlib eventually (not currently high in priority):

- [ ] Implement `<OpOverload(op='aten.__rshift__', overload='Scalar')>`
- [ ] Implement `<OpOverload(op='aten._linalg_det', overload='default')>`
- [ ] Implement `<OpOverload(op='aten._linalg_slogdet', overload='default')>`
- [ ] Implement `<OpOverload(op='aten._prelu_kernel', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.add', overload='Scalar')>`
- [ ] Implement `<OpOverload(op='aten.add', overload='Tensor')>`
- [ ] Implement `<OpOverload(op='aten.affine_grid_generator', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.aminmax', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.binary_cross_entropy_with_logits', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.bitwise_and', overload='Tensor')>`
- [ ] Implement `<OpOverload(op='aten.bucketize', overload='Tensor')>`
- [ ] Implement `<OpOverload(op='aten.conv_tbc', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.fake_quantize_per_tensor_affine_cachemask', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.fill', overload='Scalar')>`
- [ ] Implement `<OpOverload(op='aten.index_add', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.index_copy', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.index_fill', overload='int_Scalar')>`
- [ ] Implement `<OpOverload(op='aten.index_put', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.masked_scatter', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.masked_select', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.prod', overload='dim_int')>`
- [ ] Implement `<OpOverload(op='aten.rsub', overload='Tensor')>`
- [ ] Implement `<OpOverload(op='aten.scatter', overload='src')>`
- [ ] Implement `<OpOverload(op='aten.scatter', overload='value')>`
- [ ] Implement `<OpOverload(op='aten.sort', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.std', overload='correction')>`
- [ ] Implement `<OpOverload(op='aten.std_mean', overload='correction')>`
- [ ] Implement `<OpOverload(op='aten.sym_size', overload='int')>`
- [ ] Implement `<OpOverload(op='aten.take', overload='default')>`
- Implement `<OpOverload(op='aten._adaptive_avg_pool2d', overload='default')>`
- Implement `<OpOverload(op='aten._cdist_forward', overload='default')>`
- Implement `<OpOverload(op='aten._convolution', overload='default')>`
- Implement `<OpOverload(op='aten._fake_quantize_per_tensor_affine_cachemask_tensor_qparams', overload='default')>`
- Implement `<OpOverload(op='aten.grid_sampler_3d', overload='default')>`
- Implement `<OpOverload(op='aten.hann_window', overload='default')>`
- Implement `<OpOverload(op='aten.im2col', overload='default')>`
- Implement `<OpOverload(op='aten.repeat_interleave', overload='Tensor')>`
- Implement `<OpOverload(op='torchvision.nms', overload='default')>`
- Implement `<OpOverload(op='torchvision.roi_align', overload='default')>`
- Implement `<OpOverload(op='torchvision.roi_pool', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.nan_to_num', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.nll_loss2d_forward', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.nll_loss_forward', overload='default')>`
- [ ] Implement `<OpOverload(op='aten.norm', overload='ScalarOpt_dim_dtype')>`
- [ ] Implement `<OpOverload(op='aten.pixel_unshuffle', overload='default')>`

Add operator registration:

- [ ] aten::empty
- [ ] aten::fill
- [ ] aten::getitem
- [ ] aten::normal
- [ ] aten::rsub
- [ ] aten::scatter_reduce
- [ ] aten::select
- [ ] aten::slice
- [ ] aten::softmax
- [ ] aten::subtract
- [ ] aten::transpose
- [ ] aten::unbind
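For contributors picking up one of the items above, it can help to pin down the op's exact semantics before writing the torchlib function. As an illustrative sketch (not part of this issue), here is a plain NumPy reference for `aten.glu`, which splits the input in half along a dimension and gates the first half with the sigmoid of the second; the function name is hypothetical.

```python
import numpy as np

def glu_reference(x: np.ndarray, dim: int = -1) -> np.ndarray:
    """Reference semantics for aten.glu: split x into two equal halves
    along `dim`, then return first_half * sigmoid(second_half)."""
    a, b = np.split(x, 2, axis=dim)
    return a * (1.0 / (1.0 + np.exp(-b)))

x = np.array([[1.0, 2.0, 0.0, 0.0]])
# sigmoid(0) == 0.5, so the gated result is [[0.5, 1.0]]
print(glu_reference(x, dim=-1))
```

A reference like this doubles as an oracle when checking the ONNX-script implementation against PyTorch's output in the op tests.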
justinchuby (Collaborator, Author) assigned @shubhambhokare1
justinchuby added a commit that referenced this issue on Jul 22, 2024
Fix more registration issues #1644
@justinchuby Do we still need this after the decomp is fixed?

nll_loss_forward is created with cross_entropy_loss. It is needed because, if decomposed, there is a sum() step that may overflow for float16. Use NegativeLogLikelihoodLoss instead.

Maybe not. Thanks!
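The overflow concern in the exchange above is easy to reproduce: float16 tops out at 65504, so accumulating a reduction in float16 (as a naive decomposition of nll_loss into elementwise terms plus `sum()` would) can saturate to infinity even when every individual term is small. A minimal NumPy demonstration:

```python
import numpy as np

# float16 can represent at most 65504, so a long sum of moderate
# per-element losses overflows when accumulated in float16.
losses = np.full(1000, 100.0, dtype=np.float16)

print(np.sum(losses, dtype=np.float16))  # overflows to inf (100 * 1000 > 65504)
print(np.sum(losses, dtype=np.float32))  # 100000.0 with a wider accumulator
```

This is why mapping directly to ONNX `NegativeLogLikelihoodLoss`, which performs the reduction internally, is preferable to emitting the decomposed graph for float16 inputs.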
NOTE: Some of these may just be incorrectly registered. In that case, update the `@torch_op` decorator call in the corresponding functions to register them correctly. It is usually just a matter of correcting the overload names; the "default" overload can be omitted. E.g.:

- `aten.add.Scalar` -> `@torch_op("aten::add.Scalar")`
- `aten.glu.default` -> `@torch_op("aten::glu")`
- `<OpOverload(op='aten.__rshift__', overload='Scalar')>`
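The renaming rule in the note can be captured mechanically. The helper below is purely illustrative (it is not part of torchlib): it converts a qualified ATen overload name into the string a `@torch_op` registration would use, dropping a trailing `default` overload.

```python
def torch_op_name(qualified: str) -> str:
    """Hypothetical helper illustrating the naming rule from the note:
    'aten.add.Scalar' -> 'aten::add.Scalar', while the 'default'
    overload is omitted ('aten.glu.default' -> 'aten::glu')."""
    namespace, op, *overload = qualified.split(".", 2)
    name = f"{namespace}::{op}"
    if overload and overload[0] != "default":
        name += f".{overload[0]}"
    return name

print(torch_op_name("aten.add.Scalar"))         # aten::add.Scalar
print(torch_op_name("aten.glu.default"))        # aten::glu
print(torch_op_name("aten.__rshift__.Scalar"))  # aten::__rshift__.Scalar
```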
[torchlib] Fix linspace and full #1742