
[Image Classification] Failed to execute 'dispatch' on 'MLContext': Invalid inputs #46

Closed
ibelem opened this issue Oct 18, 2024 · 8 comments · Fixed by #55

Comments

@ibelem
Contributor

ibelem commented Oct 18, 2024

Failed to execute 'dispatch' on 'MLContext' after landing #45

Repro Steps A

  1. Enable WebNN in the browser
  2. Visit https://microsoft.github.io/webnn-developer-preview/demos/image-classification/?provider=webnn&devicetype=gpu&model=mobilenet-v2&run=1
  3. Click the "Classify" button
  4. After the performance results appear, click the "Classify" button again

Repro Steps B

  1. Enable WebNN in the browser
  2. Visit https://microsoft.github.io/webnn-developer-preview/demos/image-classification/?provider=webnn&devicetype=gpu&model=mobilenet-v2&run=1
  3. Click the "Classify" button
  4. After the performance results appear, click the "ResNet50" button
  5. Click the "Classify" button

Actual Result

Failed to execute 'dispatch' on 'MLContext': Invalid inputs: The context of MLGraph doesn't match the context of the MLTensor with name "pixel_values".

[Screenshot of the error]

@egalli do you have any clue how to fix this issue? Thanks!

CC @Honry @fdwr @huningxin

Note

This issue didn't occur when using compute() in ORT Web before.

How to build the Transformers.js dists and apply them to the developer preview image classification demo

  1. Clone https://github.com/xenova/transformers.js/tree/v3 (uses "onnxruntime-web": "1.20.0-dev.20241016-2b8fc5529b")
  2. npm install
  3. Apply the patch https://github.com/ibelem/transformers.js/blob/v3-webnn-2/0001-v3-perf-code.patch or follow the code change in ibelem/transformers.js@7cb760f
  4. npm run build
  5. Copy the dists from /transformers.js/dist to webnn-developer-preview/assets/dist_transformers/dynamic-runs-1.19-dev/
egalli added a commit to egalli/onnxruntime that referenced this issue Oct 19, 2024
### Description
This change adds a cache of `MLContext`s keyed by their options to the `WebNNBackend`. This makes it so that multiple `InferenceSession`s created with the same options will share the same context.

### Motivation and Context
Since `MLTensor`s are tied to `MLContext`s, developers can't easily share tensors between `InferenceSession`s (outside of manually creating an `MLContext` and specifying it via the `context` option). This leads to strange behaviors such as:
```js
const sessionA = await ort.InferenceSession.create(urlA, {
  executionProviders: ["webnn"],
  preferredOutputLocation: "ml-buffer",
});
const sessionB = await ort.InferenceSession.create(urlB, {
  executionProviders: ["webnn"],
});
const temp = await sessionA.run({/* arguments */});
const result = await sessionB.run({"input": temp["output"]}); // ERROR: Failed to execute 'dispatch' on 'MLContext': Invalid inputs: The context of MLGraph doesn't match the context of the MLTensor with name "input".
```
We encountered this behavior when updating the transformers.js version in the developer preview demos. microsoft/webnn-developer-preview#46
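The manual workaround mentioned in the description (creating one `MLContext` yourself and handing it to every session) could look roughly like the sketch below. This is an untested, browser-only sketch: the `context` execution-provider option and the `navigator.ml.createContext` call are assumptions about the API shape, not something confirmed in this thread.

```javascript
// Untested browser-only sketch: share one manually created MLContext across
// sessions so tensors produced by sessionA are valid inputs for sessionB.
const mlContext = await navigator.ml.createContext({ deviceType: "gpu" });
const options = {
  executionProviders: [{ name: "webnn", context: mlContext }],
  preferredOutputLocation: "ml-buffer",
};
const sessionA = await ort.InferenceSession.create(urlA, options);
const sessionB = await ort.InferenceSession.create(urlB, options);
const temp = await sessionA.run({ /* arguments */ });
const result = await sessionB.run({ input: temp["output"] });
```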
@egalli

egalli commented Oct 19, 2024

It seems to be related to how we created a new MLContext per session. I have created a PR that adds a cache for MLContexts so that InferenceSessions created with the same parameters will use the same MLContext.
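The cache described here can be illustrated with a minimal, self-contained sketch. `getOrCreateContext` and `stableKey` are hypothetical names for illustration, not the actual ORT `WebNNBackend` code:

```javascript
// Minimal sketch of an options-keyed context cache (hypothetical names,
// not the actual WebNNBackend implementation).
const contextCache = new Map();

// Serialize options with sorted keys so property order does not
// create distinct cache entries for equal options.
function stableKey(options) {
  return JSON.stringify(options, Object.keys(options).sort());
}

// Return the cached context for these options, creating it on first use.
async function getOrCreateContext(options, createContextFn) {
  const key = stableKey(options);
  if (!contextCache.has(key)) {
    contextCache.set(key, await createContextFn(options));
  }
  return contextCache.get(key);
}
```

Caching the pending promise rather than the resolved context would additionally deduplicate concurrent creations, which is a common refinement of this pattern.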

FYI, while testing the change to ORT, I noticed that transformers.js defaults to downloading the .wasm files from a CDN. If we want to use the version in this repo, we'll need to set that variable:

transformers.env.backends.onnx.wasm.wasmPaths = "../../assets/dist_transformers/dynamic-runs-1.20-dev/";
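In context, this override goes before any pipeline is created. The import path below is an assumption based on step 5 of the build instructions above (adjust it to wherever the dist was actually copied):

```javascript
// Assumed path: the locally built dist copied in the build steps above.
import { env } from "../../assets/dist_transformers/dynamic-runs-1.20-dev/transformers.js";

// Override the CDN default so ORT Web loads the locally built .wasm files.
env.backends.onnx.wasm.wasmPaths = "../../assets/dist_transformers/dynamic-runs-1.20-dev/";
```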

@ibelem
Contributor Author

ibelem commented Oct 21, 2024

Thank you @egalli, I will rebuild the transformers.js dist once microsoft/onnxruntime#22510 is merged, and also set `transformers.env.backends.onnx.wasm.wasmPaths`.

fdwr pushed a commit to microsoft/onnxruntime that referenced this issue Oct 30, 2024
@fdwr
Collaborator

fdwr commented Oct 30, 2024

> Thank you @egalli, I will rebuild the transformers.js dist once microsoft/onnxruntime#22510 is merged, and also set `transformers.env.backends.onnx.wasm.wasmPaths`.

@ibelem - 'Tis merged.

@ibelem
Contributor Author

ibelem commented Oct 31, 2024

Hold the dists update: microsoft/onnxruntime#22278 and microsoft/onnxruntime#22556 cause demo regressions.

@fdwr
Collaborator

fdwr commented Nov 1, 2024

> Hold the dists update: microsoft/onnxruntime#22278 and microsoft/onnxruntime#22556 cause demo regressions.

Merged the revert "[WebNN] Fallback the node when its output doesn't have shape info": https://github.com/microsoft/onnxruntime/pull/22669.

@ibelem
Contributor Author

ibelem commented Nov 5, 2024

@fdwr

We need to wait for the PRs below to be merged and integrated into a new development build of the ORT distributions; otherwise more models may fail to run.

@fdwr
Collaborator

fdwr commented Nov 5, 2024

> We need to wait for the PRs below to be merged and integrated into a new development build of the ORT distributions; otherwise more models may fail to run.

@ibelem - I hadn't seen this one (not tagged). Thanks - looking at.

@ibelem
Contributor Author

ibelem commented Nov 7, 2024

microsoft/onnxruntime#22701 "Fix issues with MLTensor caching" has landed; waiting for the next dev version of the onnxruntime-web dist later than 1.21.0-dev.20241106 (https://www.npmjs.com/package/onnxruntime-web?activeTab=versions).

fdwr closed this as completed in #55 on Nov 11, 2024
ishwar-raut1 pushed a commit to ishwar-raut1/onnxruntime that referenced this issue Nov 19, 2024
ankitm3k pushed a commit to intel/onnxruntime that referenced this issue Dec 11, 2024
ankitm3k pushed a commit to intel/onnxruntime that referenced this issue Dec 11, 2024
ankitm3k pushed a commit to intel/onnxruntime that referenced this issue Dec 11, 2024