
[torchlib] Implement missing operators uncovered by torch.onnx tests #1644

Closed · 24 of 58 tasks
justinchuby opened this issue Jun 21, 2024 · 15 comments
Labels: contribution welcome · topic: torch_lib
justinchuby (Collaborator) commented Jun 21, 2024

NOTE: Some of these may just be incorrectly registered. If so, update the @torch_op decorator call in the corresponding function to register it correctly; usually this only requires correcting the overload name. The "default" overload can be omitted from the name. E.g.:

aten.add.Scalar -> @torch_op("aten::add.Scalar")
aten.glu.default -> @torch_op("aten::glu")

  • Implement <OpOverload(op='aten.__rshift__', overload='Scalar')> [torchlib] Fix linspace and full #1742
  • Implement <OpOverload(op='aten._adaptive_avg_pool2d', overload='default')>
  • Implement <OpOverload(op='aten._cdist_forward', overload='default')>
  • Implement <OpOverload(op='aten._convolution', overload='default')>
  • Implement <OpOverload(op='aten._fake_quantize_per_tensor_affine_cachemask_tensor_qparams', overload='default')>
  • Implement <OpOverload(op='aten._linalg_det', overload='default')>
  • Implement <OpOverload(op='aten._linalg_slogdet', overload='default')>
  • Implement <OpOverload(op='aten._prelu_kernel', overload='default')>
  • Implement <OpOverload(op='aten.add', overload='Scalar')>
  • Implement <OpOverload(op='aten.add', overload='Tensor')>
  • Implement <OpOverload(op='aten.affine_grid_generator', overload='default')>
  • Implement <OpOverload(op='aten.aminmax', overload='default')>
  • Implement <OpOverload(op='aten.binary_cross_entropy_with_logits', overload='default')>
  • Implement <OpOverload(op='aten.bitwise_and', overload='Tensor')>
  • Implement <OpOverload(op='aten.bucketize', overload='Tensor')>
  • Implement <OpOverload(op='aten.conv_tbc', overload='default')>
  • Implement <OpOverload(op='aten.fake_quantize_per_tensor_affine_cachemask', overload='default')>
  • Implement <OpOverload(op='aten.fill', overload='Scalar')>
  • Implement <OpOverload(op='aten.fmod', overload='Scalar')>
  • Implement <OpOverload(op='aten.fmod', overload='Tensor')>
  • Implement <OpOverload(op='aten.glu', overload='default')> @shubhambhokare1
  • Implement <OpOverload(op='aten.grid_sampler_3d', overload='default')>
  • Implement <OpOverload(op='aten.hann_window', overload='default')>
  • Implement <OpOverload(op='aten.im2col', overload='default')>
  • Implement <OpOverload(op='aten.index_add', overload='default')>
  • Implement <OpOverload(op='aten.index_copy', overload='default')>
  • Implement <OpOverload(op='aten.index_fill', overload='int_Scalar')>
  • Implement <OpOverload(op='aten.index_put', overload='default')>
  • Implement <OpOverload(op='aten.le', overload='Scalar')>
  • Implement <OpOverload(op='aten.lerp', overload='Scalar')>
  • Implement <OpOverload(op='aten.linalg_cross', overload='default')>
  • Implement <OpOverload(op='aten.masked_scatter', overload='default')>
  • Implement <OpOverload(op='aten.masked_select', overload='default')>
  • Implement <OpOverload(op='aten.mv', overload='default')>
  • Implement <OpOverload(op='aten.nan_to_num', overload='default')>
  • Implement <OpOverload(op='aten.nll_loss2d_forward', overload='default')>
  • Implement <OpOverload(op='aten.nll_loss_forward', overload='default')>
  • Implement <OpOverload(op='aten.norm', overload='ScalarOpt_dim_dtype')>
  • Implement <OpOverload(op='aten.pixel_unshuffle', overload='default')>
  • Implement <OpOverload(op='aten.pow', overload='Scalar')>
  • Implement <OpOverload(op='aten.prod', overload='dim_int')>
  • Implement <OpOverload(op='aten.remainder', overload='Scalar')>
  • Implement <OpOverload(op='aten.remainder', overload='Tensor')>
  • Implement <OpOverload(op='aten.repeat_interleave', overload='Tensor')>
  • Implement <OpOverload(op='aten.rsub', overload='Tensor')>
  • Implement <OpOverload(op='aten.scatter', overload='src')>
  • Implement <OpOverload(op='aten.scatter', overload='value')>
  • Implement <OpOverload(op='aten.silu', overload='default')>
  • Implement <OpOverload(op='aten.sort', overload='default')>
  • Implement <OpOverload(op='aten.std', overload='correction')> Add op (std, std.dim, std.correction) | feat(torchlib) #1747
  • Implement <OpOverload(op='aten.std_mean', overload='correction')> Add op (std_mean, std_mean.dim, std_mean.correction) | feat(torchlib) #1748 (comment)
  • Implement <OpOverload(op='aten.sym_size', overload='int')>
  • Implement <OpOverload(op='aten.take', overload='default')>
  • Implement <OpOverload(op='aten.unsafe_split', overload='Tensor')>
  • Implement <OpOverload(op='torchvision.nms', overload='default')>
  • Implement <OpOverload(op='torchvision.roi_align', overload='default')>
  • Implement <OpOverload(op='torchvision.roi_pool', overload='default')>
  • Implement <OpOverload(op='aten.group_norm', overload='default')> Add Op (group_norm) | feat(torchlib) #1750
@justinchuby added the topic: torch_lib label Jun 21, 2024
@justinchuby added the contribution welcome label Jun 22, 2024
@shubhambhokare1 shubhambhokare1 self-assigned this Jun 24, 2024
justinchuby (Collaborator) commented Jul 8, 2024

<OpOverload(op='aten.group_norm', overload='default')>

#1750
assigned @titaiwangms

shubhambhokare1 added a commit that referenced this issue Jul 17, 2024
Implement missing operators uncovered by torch.onnx tests as per #1644

- [x] Implement <OpOverload(op='aten.fmod', overload='Scalar')>
- [x] Implement <OpOverload(op='aten.fmod', overload='Tensor')>
- [x] Implement <OpOverload(op='aten.glu', overload='default')> @shubhambhokare1
- [x] Implement <OpOverload(op='aten.le', overload='Scalar')>
- [x] Implement <OpOverload(op='aten.lerp', overload='Scalar')>
- [x] Implement <OpOverload(op='aten.linalg_cross', overload='default')>
- [x] Implement <OpOverload(op='aten.mv', overload='default')>
- [x] Implement <OpOverload(op='aten.pow', overload='Scalar')>

- [x] Implement <OpOverload(op='aten.remainder', overload='Scalar')>
- [x] Implement <OpOverload(op='aten.remainder', overload='Tensor')>
- [x] Implement <OpOverload(op='aten.silu', overload='default')>
- [x] Implement <OpOverload(op='aten.unsafe_split', overload='Tensor')>

[**NOT PART OF THIS PR**] Requires adding implementation functions in torchlib eventually (not currently high priority):

- [ ] Implement `<OpOverload(op='aten.__rshift__', overload='Scalar')>`
- [ ] Implement <OpOverload(op='aten._linalg_det', overload='default')>
- [ ] Implement <OpOverload(op='aten._linalg_slogdet', overload='default')>
- [ ] Implement <OpOverload(op='aten._prelu_kernel', overload='default')>
- [ ] Implement <OpOverload(op='aten.add', overload='Scalar')>
- [ ] Implement <OpOverload(op='aten.add', overload='Tensor')>
- [ ] Implement <OpOverload(op='aten.affine_grid_generator', overload='default')>
- [ ] Implement <OpOverload(op='aten.aminmax', overload='default')>
- [ ] Implement <OpOverload(op='aten.binary_cross_entropy_with_logits', overload='default')>
- [ ] Implement <OpOverload(op='aten.bitwise_and', overload='Tensor')>
- [ ] Implement <OpOverload(op='aten.bucketize', overload='Tensor')>
- [ ] Implement <OpOverload(op='aten.conv_tbc', overload='default')>
- [ ] Implement <OpOverload(op='aten.fake_quantize_per_tensor_affine_cachemask', overload='default')>
- [ ] Implement <OpOverload(op='aten.fill', overload='Scalar')>
- [ ] Implement <OpOverload(op='aten.index_add', overload='default')>
- [ ] Implement <OpOverload(op='aten.index_copy', overload='default')>
- [ ] Implement <OpOverload(op='aten.index_fill', overload='int_Scalar')>
- [ ] Implement <OpOverload(op='aten.index_put', overload='default')>
- [ ] Implement <OpOverload(op='aten.masked_scatter', overload='default')>
- [ ] Implement <OpOverload(op='aten.masked_select', overload='default')>
- [ ] Implement <OpOverload(op='aten.prod', overload='dim_int')>
- [ ] Implement <OpOverload(op='aten.rsub', overload='Tensor')>
- [ ] Implement <OpOverload(op='aten.scatter', overload='src')>
- [ ] Implement <OpOverload(op='aten.scatter', overload='value')>
- [ ] Implement <OpOverload(op='aten.sort', overload='default')>
- [ ] Implement <OpOverload(op='aten.std', overload='correction')>
- [ ] Implement <OpOverload(op='aten.std_mean', overload='correction')>
- [ ] Implement <OpOverload(op='aten.sym_size', overload='int')>
- [ ] Implement <OpOverload(op='aten.take', overload='default')>
- [ ] Implement <OpOverload(op='aten._adaptive_avg_pool2d', overload='default')>
- [ ] Implement <OpOverload(op='aten._cdist_forward', overload='default')>
- [ ] Implement <OpOverload(op='aten._convolution', overload='default')>
- [ ] Implement <OpOverload(op='aten._fake_quantize_per_tensor_affine_cachemask_tensor_qparams', overload='default')>
- [ ] Implement <OpOverload(op='aten.grid_sampler_3d', overload='default')>
- [ ] Implement <OpOverload(op='aten.hann_window', overload='default')>
- [ ] Implement <OpOverload(op='aten.im2col', overload='default')>
- [ ] Implement <OpOverload(op='aten.repeat_interleave', overload='Tensor')>
- [ ] Implement <OpOverload(op='torchvision.nms', overload='default')>
- [ ] Implement <OpOverload(op='torchvision.roi_align', overload='default')>
- [ ] Implement <OpOverload(op='torchvision.roi_pool', overload='default')>
- [ ] Implement <OpOverload(op='aten.nan_to_num', overload='default')>
- [ ] Implement <OpOverload(op='aten.nll_loss2d_forward', overload='default')>
- [ ] Implement <OpOverload(op='aten.nll_loss_forward', overload='default')>
- [ ] Implement <OpOverload(op='aten.norm', overload='ScalarOpt_dim_dtype')>
- [ ] Implement <OpOverload(op='aten.pixel_unshuffle', overload='default')>

Add operator registration

- [ ] aten::empty
- [ ] aten::fill
- [ ] aten::getitem
- [ ] aten::normal
- [ ] aten::rsub
- [ ] aten::scatter_reduce
- [ ] aten::select
- [ ] aten::slice
- [ ] aten::softmax
- [ ] aten::subtract
- [ ] aten::transpose
- [ ] aten::unbind
justinchuby (Collaborator) commented Jul 17, 2024

#1757

  • aten.im2col.default: No decompositions registered for the real-valued input. Example node: %im2col : [num_users=1] = call_function[target=torch.ops.aten.im2col.default](args = (%permute_1, [3, 3], [1, 1], [1, 1], [2, 2]), kwargs = {}). All nodes: [im2col, im2col_1, im2col_2, im2col_3]

assigned @shubhambhokare1

justinchuby (Collaborator) commented Jul 17, 2024

  • aten.max.other: All overloads did not match the node %max_1 : [num_users=1] = call_function[target=torch.ops.aten.max.other](args = (%add_3, %detach_1), kwargs = {}).
    • Failed to match overload OnnxFunction(<function aten_maximum at 0x7f13e81e4040>): Parameter type not compatible with argument: param=other: TReal, assigned_types={'TReal': Tensor(FLOAT16)}, arg=FLOAT
    • Failed to match overload OnnxFunction(<function aten_maximum_bool at 0x7f13e81e5bc0>): Parameter type not compatible with argument: param=self: T_self, assigned_types={}, arg=FLOAT16. Example node: %max_1 : [num_users=1] = call_function[target=torch.ops.aten.max.other](args = (%add_3, %detach_1), kwargs = {}). All nodes: [max_1, max_2, max_3, max_4, max_5, max_6, max_7, max_8, max_9, max_10, max_11, max_12, max_13, max_14, max_15, max_16, max_17, max_18, max_19, max_20, max_21, max_22, max_23, max_24]
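The failure in the log above is a type-variable binding conflict: once `TReal` is bound to `FLOAT16` by one argument, a later `FLOAT` argument no longer matches. A hedged sketch of that check (the function name and data shapes here are illustrative, not the actual dispatcher code):

```python
# Illustrative sketch of an overload matcher's type-variable binding check.
# Once a type variable (e.g. "TReal") is bound by one argument, every later
# argument annotated with the same variable must carry the same dtype.
def bind_type_vars(params, args):
    """params: [(param_name, type_var)]; args: [dtype_name]."""
    assigned = {}
    for (name, type_var), arg_dtype in zip(params, args):
        bound = assigned.setdefault(type_var, arg_dtype)
        if bound != arg_dtype:
            raise TypeError(
                f"Parameter type not compatible with argument: "
                f"param={name}: {type_var}, assigned_types={assigned}, arg={arg_dtype}"
            )
    return assigned
```

For example, `bind_type_vars([("self", "TReal"), ("other", "TReal")], ["FLOAT16", "FLOAT"])` fails the same way the log reports, while two `FLOAT16` arguments bind cleanly.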

justinchuby (Collaborator) commented:

  • aten.cudnn_batch_norm.default: No decompositions registered for the real-valued input. Example node: %cudnn_batch_norm : [num_users=1] = call_function[target=torch.ops.aten.cudnn_batch_norm.default](args = (%conv2d, %p_patch_embed_proj_0_1_weight, %p_patch_embed_proj_0_1_bias, %b_patch_embed_proj_0_1_running_mean, %b_patch_embed_proj_0_1_running_var, False, 0.1, 1e-05), kwargs = {}). All nodes: [cudnn_batch_norm, cudnn_batch_norm_1, cudnn_batch_norm_2, cudnn_batch_norm_3, cudnn_batch_norm_4, cudnn_batch_norm_5, cudnn_batch_norm_6, cudnn_batch_norm_7, cudnn_batch_norm_8, cudnn_batch_norm_9, cudnn_batch_norm_10, cudnn_batch_norm_11, cudnn_batch_norm_12, cudnn_batch_norm_13, cudnn_batch_norm_14, cudnn_batch_norm_15, cudnn_batch_norm_16, cudnn_batch_norm_17, cudnn_batch_norm_18, cudnn_batch_norm_19, cudnn_batch_norm_20, cudnn_batch_norm_21, cudnn_batch_norm_22, cudnn_batch_norm_23, cudnn_batch_norm_24, cudnn_batch_norm_25, cudnn_batch_norm_26]

assigned @titaiwangms

titaiwangms (Contributor) commented:

@justinchuby Do we still need this after the decomp is fixed?

justinchuby (Collaborator) commented Aug 4, 2024

nll_loss_forward

Created alongside cross_entropy_loss. A dedicated implementation is needed because the decomposition introduces a sum() step that may overflow for float16.

Use the ONNX NegativeLogLikelihoodLoss operator.
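A small NumPy illustration of the float16 overflow concern (the values here are made up; the point is only that a float16-accumulated sum can overflow even when every addend is representable, since float16 maxes out around 65504):

```python
import numpy as np

# np.sum keeps the input dtype for accumulation, so summing per-element
# losses in float16 can overflow to inf even though each loss fits in fp16.
losses = np.full(100, 1000.0, dtype=np.float16)  # each element is exact in fp16
naive = losses.sum()                  # fp16 accumulator: overflows to inf
acc32 = losses.sum(dtype=np.float32)  # fp32 accumulator: 100000.0
```

This is why a fused operator like NegativeLogLikelihoodLoss, which handles the reduction internally, is preferable to a decomposition that materializes the float16 sum.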

justinchuby (Collaborator) commented:

> @justinchuby Do we still need this after the decomp is fixed?

Maybe not. Thanks!
