[Web] executionProviders chain for webnn fallback does not work on init error #20729
Comments
@Honry Given that the fallback to the next EP works when the WebNN API is not available at all in the browser, would it be consistent to treat this error case (of not being able to initialize the WebNN backend) as a fallback too?
At the current stage, you can try the Edge browser dev channel, which supports WebNN on Windows 10.
@fdwr, the following is the code snippet where we check the availability of the WebNN API. I drafted a PR to attempt to check the creation of WebNN:

```ts
if (epName === 'webnn') {
  // perform WebNN availability check
  if (typeof navigator === 'undefined' || !(navigator as unknown as {ml: unknown}).ml) {
    throw new Error('WebNN is not supported in current environment');
  }
  await initJsep('webnn', getInstance(), env);
}
```

@fs-eire, could you please take a look at the PR? Do you have any other good ideas? Thanks!
Considering recent issues/PRs about WebNN, here is a summary of requirements:
It is not straightforward to implement those features. The major reason is that the existing design was based on some old assumptions that are no longer true for the new scenarios:
How about we let ORT just accept an instance of MLContext?
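A minimal sketch of what that could look like from the user's side (the `context` option name is the one eventually proposed in #20816):

```js
// Sketch: the user creates the MLContext and hands it to ORT directly.
const myMlContext = await navigator.ml.createContext({ deviceType: 'gpu' });
const session = await ort.InferenceSession.create('./model.onnx', {
  executionProviders: [{ name: 'webnn', context: myMlContext, deviceType: 'gpu' }],
});
```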
I for one would very much like that. Otherwise the checks would be redundant (one throwaway init just for checking available devices). TBH I don't think this approach is all that good either, but at least we get some control of the init process. Maybe the supported device type listing could be part of the core WebNN API.
Thanks @fs-eire, good proposal; thus we can get the global MLContext from ort.env.webnn.context for a webnn session, as well as make it a shareable context for I/O binding purposes. One more concern, though: users have to learn how to create a WebNN MLContext, and if they create different sessions with different WebNN MLContexts, they have to reset ort.env.webnn.context each time. @huningxin, @egalli, what's your opinion?
There may be a difference between the proposed
This is true - no matter whether we use navigator.ml.createContext(options)
This is a valid scenario. For example, developers may want to run the encoder on a GPU MLContext and the decoder on an NPU MLContext.
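A sketch of that scenario, assuming the proposed `context` session option:

```js
// Sketch: one context per device type, shared with the matching session.
const gpuContext = await navigator.ml.createContext({ deviceType: 'gpu' });
const npuContext = await navigator.ml.createContext({ deviceType: 'npu' });

const encoder = await ort.InferenceSession.create('./encoder.onnx', {
  executionProviders: [{ name: 'webnn', context: gpuContext, deviceType: 'gpu' }],
});
const decoder = await ort.InferenceSession.create('./decoder.onnx', {
  executionProviders: [{ name: 'webnn', context: npuContext, deviceType: 'npu' }],
});
```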
Then I would prefer to just let users pass their MLContext instance in the session options. By doing this we don't need to offer a way to get the context, because users created it and should hold the reference themselves.
Is there a way to pass an MLContext instance as sessionOptions to the WebNN EP (C++)? It looks like we need an additional Wasm module for exposing the MLContext JS instance to the WebNN EP.
If we pass the MLContext through the session options, my understanding is that we'll need a way to store the MLContexts in JS, pass an id to ORT, and have the WebNN EP ask for the MLContext using the id (since OrtAddSessionConfigEntry only supports strings). Does this sound correct?
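A sketch of that id-based handoff (all names here are hypothetical):

```js
// JS side: registry of MLContext instances, keyed by an id that can
// travel through string-only session config entries.
const mlContextRegistry = new Map();
let nextMlContextId = 1;

function registerMLContext(context) {
  const id = String(nextMlContextId++);
  mlContextRegistry.set(id, context);
  return id; // passed to ORT, e.g. via a session config entry
}

// Called back from the WebNN EP with the id it received.
function getMLContextById(id) {
  return mlContextRegistry.get(id);
}
```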
That's true. However, I suppose the concern is that this is a breaking change. Could we make MLContext an option for advanced usage (I/O binding, WebGPU interop, etc.) while keeping the existing options? If the MLContext option is present, the other options are ignored.
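A sketch of that precedence rule (a hypothetical helper; option names follow the discussion above):

```js
// Sketch: an explicit MLContext wins; otherwise fall back to the legacy options.
async function resolveWebnnContext(webnnOptions) {
  if (webnnOptions.context) {
    // Advanced path: user-supplied context; other options are ignored.
    return webnnOptions.context;
  }
  // Backward-compatible path: create a context from the existing options,
  // throwing (and allowing EP fallback) if creation is not possible.
  return navigator.ml.createContext({
    deviceType: webnnOptions.deviceType,
    powerPreference: webnnOptions.powerPreference,
  });
}
```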
Yes, I think this is a good idea. It allows us to preserve backward compatibility.
That sounds correct to me. I can only think of one easier way to do it: if we restrict things so that only one session is being initialized at a time, we can just use a "currentContext" instead of a map. Then all the other parts should be similar (using the
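A sketch of the simpler single-slot variant (names hypothetical):

```js
// Sketch: a module-level slot instead of a map; only valid while exactly
// one session is being initialized at a time.
let currentContext = null;

async function createSessionWithContext(modelPath, context, sessionOptions) {
  currentContext = context; // read by the WebNN EP during initialization
  try {
    return await ort.InferenceSession.create(modelPath, sessionOptions);
  } finally {
    currentContext = null; // release the slot for the next initialization
  }
}
```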
Thanks @fs-eire, @huningxin, good idea; @egalli could continue his PR then. Now there is only one remaining issue: my concern is that passing specific WebNN options to backend initialization does not make much sense.
If we need to pass the MLContext from JS to C++ anyway, there is no need to pass the options to C++.
Even if we pass the MLContext, we would still need to pass the deviceType to the C++ code as it is used to select between NCHW and NHWC.
It would be nice if WebNN provided a way to query the preferred layout from the MLContext.
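For reference, the selection can be sketched like this (logic only; the actual implementation lives in the C++ WebNN EP):

```js
// Sketch: without a layout query on MLContext, deviceType drives the choice.
function preferredLayout(deviceType) {
  // The Chromium CPU (XNNPACK) backend historically required NHWC;
  // other device types use NCHW (see the comment below on TFLite).
  return deviceType === 'cpu' ? 'NHWC' : 'NCHW';
}
```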
So currently there is no way to do that? Does that mean that if we want to let users pass the MLContext, we still also need to pass the device type?
The WebNN spec supports both layouts. This workaround is for a limitation of the previous Chromium WebNN XNNPACK implementation for CPU. Chromium now uses a TFLite implementation for CPU that supports NCHW. This workaround will no longer be necessary once that is fixed.
There is an ongoing discussion in the Working Group about allowing checking whether operators/types are supported for a backend before creating a graph. I think querying the preferred layout is a good use case. Feel free to chime in and share your input.
As we discussed, we want to keep the current options and add an additional option for the MLContext. If the user uses the current options, this is still an issue. E.g. with the options below, if creating the WebNN GPU context fails, it will throw from the WebNN EP (C++) and will not fall back to the webgpu EP. If we need to check this early, we have to pass the WebNN options to the backend initialization.

```js
const mySession = await ort.InferenceSession.create("./model.onnx", {
  executionProviders: [
    {
      name: "webnn",
      deviceType: "gpu",
    },
    "webgpu",
  ],
});
```
Yes, but the implementation can be different. Actually we can do this in JS:
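A sketch of the idea (the `webnnOptions` object and the extra `initJsep` argument are hypothetical):

```js
// Sketch: create the MLContext in JS, so a failure throws a catchable
// JS error and session creation can fall back to the next EP.
if (epName === 'webnn') {
  if (typeof navigator === 'undefined' || !navigator.ml) {
    throw new Error('WebNN is not supported in current environment');
  }
  const mlContext = await navigator.ml.createContext(webnnOptions);
  await initJsep('webnn', getInstance(), env, mlContext);
}
```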
This should not cause the C++ exception. EDIT: I had a comment in #20600 suggesting creating the MLContext in C++, but that was before this issue was created. Now I think it is better to create it in JS (or let the user create it, if using the new interface) so that we can avoid the problem.
Thus we still need to pass the WebNN options to the backend initialization. And this depends on the WebNN spec exposing the device type on the corresponding MLContext: inside the WebNN EP, we need the device type to filter the supported ops, data types, layouts, etc., whose support status currently differs among device types.
I understand the part about the C++ code needing the WebNN options. The one thing I don't understand is: if an MLContext instance is already created, is it still possible for the WebNN EP to fail EP initialization?
No, it isn't. But it will throw where the MLContext creation fails with the current WebNN options, and then the session creation will be rejected, just as @sansmoraxz encountered in his use case. If we check context creation early in:

```ts
if (epName === 'webnn') {
  // perform WebNN availability check
  if (typeof navigator === 'undefined' || !(navigator as unknown as {ml: unknown}).ml) {
    throw new Error('WebNN is not supported in current environment');
  }
  await initJsep('webnn', getInstance(), env);
}
```
I think before WebNN allows getting the options from an MLContext, we still need to keep those options in the session options. If the MLContext is not specified, they are used to create the MLContext (or fail if it is not available).
I don't think this is a clean solution. A better solution would be exposing properties for getting that metadata from the MLContext object, which may require a spec review process. The reason is not only that the options are redundant info (because the process of creating the MLContext implicitly includes it), but also that they may cause inconsistency: a user may use 'cpu' to create the MLContext but pass that MLContext to the session options together with deviceType: 'gpu'. For now, we can ask users to pass that info in the session options and eventually deprecate those options in favor of passing the MLContext only, which may take several versions after the spec allows it (if that ever happens).
With all the discussion above, I created #20816, which tries to make an API update for the WebNN EP options.
Agreed, I think MLContext should expose that metadata.
### Description

This PR is an API-only change to address the requirements being discussed in #20729.

There are multiple ways that users may create an ORT session by specifying the session options differently. All the code snippets below will use the variable `webnnOptions` like this:

```js
const myWebnnSession = await ort.InferenceSession.create('./model.onnx', {
  executionProviders: [ webnnOptions ]
});
```

### The old way (backward compatibility)

```js
// all-default, name only
const webnnOptions_0 = 'webnn';

// all-default, properties omitted
const webnnOptions_1 = { name: 'webnn' };

// partial
const webnnOptions_2 = { name: 'webnn', deviceType: 'cpu' };

// full
const webnnOptions_3 = {
  name: 'webnn',
  deviceType: 'gpu',
  numThreads: 1,
  powerPreference: 'high-performance'
};
```

### The new way (specify with MLContext)

```js
// options to create MLContext
const options = { deviceType: 'gpu', powerPreference: 'high-performance' };
const myMlContext = await navigator.ml.createContext(options);

// options for session options
const webnnOptions = { name: 'webnn', context: myMlContext, ...options };
```

This should throw (because no deviceType is specified):

```js
const myMlContext = await navigator.ml.createContext({ ... });
const webnnOptions = { name: 'webnn', context: myMlContext };
```

### Interop with WebGPU

```js
// get WebGPU device
const adaptor = await navigator.gpu.requestAdapter({ ... });
const device = await adaptor.requestDevice({ ... });

// set WebGPU adaptor and device
ort.env.webgpu.adaptor = adaptor;
ort.env.webgpu.device = device;

const myMlContext = await navigator.ml.createContext(device);
const webnnOptions = { name: 'webnn', context: myMlContext, gpuDevice: device };
```

This should throw (because one cannot specify both the gpu device and the MLContext option at the same time):

```js
const webnnOptions = { name: 'webnn', context: myMlContext, gpuDevice: device, deviceType: 'gpu' };
```
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Describe the issue
In case of any error during init with webnn, the execution provider blocks and does not fall back to the next provider in the chain. For example, a DirectML API init failure on Windows 10 with GPU initialization will not fall back to the others. This only occurs if the WebNN API is enabled in the browser flags.
To reproduce
Try on Windows 10, in a browser with the API enabled, with the following code fragment:
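A minimal repro sketch, using the same options discussed in the comments above:

```js
// Sketch: webnn listed first with deviceType 'gpu'; the expected behavior
// is a fallback to webgpu when WebNN initialization fails.
const session = await ort.InferenceSession.create('./model.onnx', {
  executionProviders: [
    { name: 'webnn', deviceType: 'gpu' },
    'webgpu',
  ],
});
```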
Results in an error:
The fallback works if the WebNN API is not available.
Urgency
NA. Just doing some PoCs.
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.18.0
Execution Provider
Other / Unknown