
[js/webgpu] Support where #17544

Merged Oct 3, 2023 (13 commits)

Conversation

axinging (Contributor)

Supported types: float, int32_t, uint32_t, bool.
Case where_broadcast.jsonc is not enabled due to #17405.

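For context, the ONNX `Where` operator selects elementwise between two inputs based on a boolean condition. A minimal TypeScript sketch of its semantics, ignoring broadcasting (this is an illustration only, not the ort-web implementation, which generates a WGSL shader):

```typescript
// Illustrative sketch of ONNX Where semantics without broadcasting:
// output[i] = condition[i] ? x[i] : y[i]
function where<T>(condition: boolean[], x: T[], y: T[]): T[] {
  if (condition.length !== x.length || x.length !== y.length) {
    throw new Error('inputs must have equal length (broadcasting not handled here)');
  }
  return condition.map((c, i) => (c ? x[i] : y[i]));
}

console.log(where([true, false, true], [1, 2, 3], [10, 20, 30])); // logs [1, 20, 3]
```

The actual WebGPU kernel applies the same per-element selection inside a compute shader, one invocation per output element.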

@axinging (Contributor, Author)

@qjia7 @xhcao @hujiajie @gyagp PTAL

@guschmue guschmue added the ep:WebGPU ort-web webgpu provider label Sep 15, 2023
@axinging axinging marked this pull request as draft September 21, 2023 03:01
@axinging axinging force-pushed the webgpu_where branch 2 times, most recently from 758b963 to cdb5a52, on September 21, 2023 07:41
@axinging axinging marked this pull request as ready for review September 21, 2023 07:44
@axinging (Contributor, Author)

@qjia7, @fs-eire Thanks for your great comments. Please take another look.

fs-eire (Contributor) commented Sep 22, 2023

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

fs-eire (Contributor) commented Sep 22, 2023

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline

@azure-pipelines

Azure Pipelines successfully started running 2 pipeline(s).

1 similar comment

fs-eire (Contributor) commented Sep 25, 2023

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

fs-eire (Contributor) commented Sep 25, 2023

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline

@azure-pipelines

Azure Pipelines successfully started running 2 pipeline(s).

1 similar comment

fs-eire (Contributor) commented Sep 25, 2023

Can the broadcast helper be used in binary operators?

@axinging (Contributor, Author)

> Can the broadcast helper be used in binary operators?

Yes, I am going to do this in another PR.
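For context, the core of broadcasting is mapping an output index vector to a flat offset in an input whose size-1 dimensions are repeated. A hypothetical TypeScript sketch of that mapping (illustrative only; the ort-web helpers generate equivalent WGSL rather than running this in JS):

```typescript
// Hypothetical sketch: map an output index vector to a flat offset in a
// (possibly lower-rank) broadcast input, NumPy-style.
function broadcastOffset(outputIndices: number[], inputShape: number[]): number {
  const rankDiff = outputIndices.length - inputShape.length;
  let offset = 0;
  let stride = 1;
  // Walk dimensions from innermost to outermost, accumulating strides.
  for (let i = inputShape.length - 1; i >= 0; i--) {
    const dim = inputShape[i];
    // A broadcast dimension of size 1 always reads index 0.
    const idx = dim === 1 ? 0 : outputIndices[i + rankDiff];
    offset += idx * stride;
    stride *= dim;
  }
  return offset;
}

// A [2,3] output reading from a [1,3] input: the row index is dropped.
console.log(broadcastOffset([1, 2], [1, 3])); // logs 2
```

Reusing one such helper across `Where` and the binary operators avoids duplicating this stride logic in every shader generator.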

fs-eire (Contributor) commented Sep 25, 2023

There are some concerns about the BroadcastHelper.

The design of BroadcastHelper relies on implicit conventions instead of parameters. For example, BroadcastHelper lets the user pass the names of the 3 inputs, but assumes their first letters are distinct. The design also assumes users know what content broadcastIndicesToOffset() generates: "broadcastIndicesToOffset" + (uppercase of the input's first letter).

There is only one usage of BroadcastHelper, and that code creates an instance only to call broadcastIndicesToOffset() immediately.

The current IndicesHelper takes a different approach: it always requires the user to call functions to generate the corresponding code snippets. If BroadcastHelper used the same (or a similar) design, it would need:

  • an impl() function or impl property that outputs the WGSL code snippet for the function implementations
  • a broadcastIndicesToOffset(indicesHelper) function that generates the WGSL code snippet for a u32 expression representing the offset

There are pros and cons to each design, and no design is best in all respects. A big benefit of the existing IndicesHelper is that it hides implementation details, allowing possible future changes at minimal cost (refactor friendly). The downside is that the calling code looks a little awkward, or at least less straightforward than the alternative. However, since it is important to keep one style in a code base, it is recommended to align BroadcastHelper with the IndicesHelper design, as suggested above.
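The function-based design suggested above could look roughly like the following TypeScript sketch. All names here (`BroadcastHelperSketch`, `IndicesHelperLike`, the generated WGSL function name) are hypothetical illustrations, not the actual ort-web API:

```typescript
// Hypothetical sketch of an IndicesHelper-style BroadcastHelper: WGSL snippets
// are produced by explicit function calls, not by naming conventions.
interface IndicesHelperLike {
  name: string;     // variable name of the input
  shape: number[];  // shape of the input
}

class BroadcastHelperSketch {
  constructor(private readonly input: IndicesHelperLike,
              private readonly outputRank: number) {}

  // Emits the WGSL function implementation snippet (body elided here).
  impl(): string {
    return `fn broadcastOffset_${this.input.name}(indices: array<u32, ${this.outputRank}>) -> u32 { /* ... */ return 0u; }`;
  }

  // Emits a u32 expression representing the offset for a given indices expression.
  broadcastIndicesToOffset(indicesExpr: string): string {
    return `broadcastOffset_${this.input.name}(${indicesExpr})`;
  }
}

const helper = new BroadcastHelperSketch({ name: 'a', shape: [1, 3] }, 2);
console.log(helper.broadcastIndicesToOffset('outputIndices'));
// logs broadcastOffset_a(outputIndices)
```

Because callers only ever see `impl()` and `broadcastIndicesToOffset()`, the generated function names and bodies can change later without touching any call site.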

Supported types: float, int32_t, uint32_t, bool.
Case where_broadcast.jsonc is not enabled due to microsoft#17405.
fs-eire (Contributor) commented Sep 27, 2023

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline

fs-eire (Contributor) commented Sep 27, 2023

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 2 pipeline(s).

1 similar comment

fs-eire (Contributor) commented Sep 27, 2023

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

fs-eire (Contributor) commented Sep 27, 2023

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline

@azure-pipelines

Azure Pipelines successfully started running 2 pipeline(s).

1 similar comment

fs-eire (Contributor) commented Sep 30, 2023

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

fs-eire (Contributor) commented Sep 30, 2023

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

1 similar comment

fs-eire previously approved these changes Sep 30, 2023
fs-eire (Contributor) commented Oct 2, 2023

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

fs-eire (Contributor) commented Oct 2, 2023

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

1 similar comment

@fs-eire fs-eire merged commit 992f3e4 into microsoft:main Oct 3, 2023
kleiti pushed a commit to kleiti/onnxruntime that referenced this pull request Mar 22, 2024
siweic0 pushed a commit to siweic0/onnxruntime-web that referenced this pull request May 9, 2024
Labels: ep:WebGPU ort-web webgpu provider
Projects: none yet
5 participants