
Model cannot be used because the inference status is rejected #19624

Open
orhan-akarsu opened this issue Feb 23, 2024 · 8 comments
Labels
platform:web (issues related to ONNX Runtime web; typically submitted using template) · stale (issues that have not been addressed in a while; categorized by a bot)

Comments


orhan-akarsu commented Feb 23, 2024

Describe the issue

I have a YOLOv8 nano ONNX model that I want to use on the web. After making the necessary changes, it works as desired on the web with React, but it does not work in the Chromium-based React Windows application.

[screenshot: model-loading code]

As you can see, this is the code I am using. It works correctly in the browser, but in the Windows application:

[screenshot: rejected promise in the Windows app]

the model cannot be loaded because the promise status is rejected. When I inspect the network tab in the Windows application, it does not download the required WASM file.

Upgrading the onnxruntime-web version from 1.14 to 1.17 did not solve the problem; no matter what I try, it persists.
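A common cause of exactly this symptom in embedded Chromium environments is that onnxruntime-web cannot fetch its `.wasm` files from the default location. As a hedged configuration sketch (assuming the `.wasm` artifacts are copied into the app's static assets; `/static/ort-wasm/` is a hypothetical path in your bundle), the fetch location can be overridden before the first session is created:

```typescript
import * as ort from 'onnxruntime-web';

// Point the runtime at self-hosted WASM binaries instead of the default
// location, so the embedded browser does not need to reach a CDN.
ort.env.wasm.wasmPaths = '/static/ort-wasm/';

// Optional: single-threaded WASM avoids the cross-origin-isolation
// requirements (COOP/COEP headers) that embedded shells often do not satisfy.
ort.env.wasm.numThreads = 1;
```

Whether this applies here depends on why the Windows shell fails to download the WASM file, which the network trace should reveal.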

To reproduce

Urgency

I need to deliver the project within 1 week. I'm already past the deadline.

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.17.0

Execution Provider

'wasm'/'cpu' (WebAssembly CPU)

orhan-akarsu added the platform:web label Feb 23, 2024
@skottmckay (Contributor) commented:

Isn't the `create` method part of `InferenceSessionFactory`, not `InferenceSession`?

```typescript
export interface InferenceSessionFactory {
  // #region create()
  /**
   * Create a new inference session and load model asynchronously from an ONNX model file.
   *
   * @param uri - The URI or file path of the model to load.
   * @param options - specify configuration for creating a new inference session.
   * @returns A promise that resolves to an InferenceSession object.
   */
  create(uri: string, options?: InferenceSession.SessionOptions): Promise<InferenceSession>;
  /**
   * Create a new inference session and load model asynchronously from an array buffer.
   *
   * @param buffer - An ArrayBuffer representation of an ONNX model.
   * @param options - specify configuration for creating a new inference session.
   * @returns A promise that resolves to an InferenceSession object.
   */
  create(buffer: ArrayBufferLike, options?: InferenceSession.SessionOptions): Promise<InferenceSession>;
  /**
   * Create a new inference session and load model asynchronously from a segment of an array buffer.
   *
   * @param buffer - An ArrayBuffer representation of an ONNX model.
   * @param byteOffset - The beginning of the specified portion of the array buffer.
   * @param byteLength - The length in bytes of the array buffer.
   * @param options - specify configuration for creating a new inference session.
   * @returns A promise that resolves to an InferenceSession object.
   */
  create(buffer: ArrayBufferLike, byteOffset: number, byteLength?: number, options?: InferenceSession.SessionOptions):
      Promise<InferenceSession>;
  /**
   * Create a new inference session and load model asynchronously from a Uint8Array.
   *
   * @param buffer - A Uint8Array representation of an ONNX model.
   * @param options - specify configuration for creating a new inference session.
   * @returns A promise that resolves to an InferenceSession object.
   */
  create(buffer: Uint8Array, options?: InferenceSession.SessionOptions): Promise<InferenceSession>;
  // #endregion
}
```
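For context, in TypeScript an exported value can implement a factory interface while sharing its name with a separate session interface (declaration merging), which would make `InferenceSession.create(...)` valid at runtime even though the `InferenceSession` interface itself declares no `create`. A minimal self-contained sketch of that pattern, using toy types rather than the real library:

```typescript
interface Session {
  run(input: number[]): number[];
}

interface SessionFactory {
  create(uri: string): Promise<Session>;
}

// The exported *value* implements the factory interface; callers write
// `SessionImpl.create(...)` even though the Session *interface* has no create.
const SessionImpl: SessionFactory = {
  async create(_uri: string): Promise<Session> {
    // Toy stand-in for model loading: the "model" doubles each input value.
    return { run: (input) => input.map((x) => x * 2) };
  },
};

async function main(): Promise<void> {
  const session = await SessionImpl.create('./model.onnx');
  console.log(session.run([1, 2, 3])); // [ 2, 4, 6 ]
}

main();
```

This is only an illustration of why a `create` call can work against a value whose interface lacks it; whether onnxruntime-web is structured exactly this way is a separate question.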


orhan-akarsu commented Feb 24, 2024

@skottmckay The usage patterns are the same, but I didn't see any inheritance. When I look at the ort.min.js file using the https://sokra.github.io/source-map-visualization/ tool:

[screenshot: source-map view of ort.min.js]

I don't see any inheritance.

@skottmckay (Contributor) commented:

I'm not following what you mean by "the usage patterns are the same". I'm not an expert on the JS API, but I don't see a `create` method on the `InferenceSession` class, so your code looks wrong; i.e. the error is not that the model can't be loaded, it's that you're calling something that doesn't exist, as far as I can see. It would be the same as calling `Inference.doMagic(...)` to load the model, unless something adds a `create` method to `InferenceSession` outside the definition of that class (maybe that's possible in JS).

What happens if you change InferenceSession.create to InferenceSessionFactory.create?

```typescript
export interface InferenceSession {
  // #region run()
  /**
   * Execute the model asynchronously with the given feeds and options.
   *
   * @param feeds - Representation of the model input. See type description of `InferenceSession.InputType` for detail.
   * @param options - Optional. A set of options that controls the behavior of model inference.
   * @returns A promise that resolves to a map, which uses output names as keys and OnnxValue as corresponding values.
   */
  run(feeds: InferenceSession.FeedsType, options?: InferenceSession.RunOptions): Promise<InferenceSession.ReturnType>;
  /**
   * Execute the model asynchronously with the given feeds, fetches and options.
   *
   * @param feeds - Representation of the model input. See type description of `InferenceSession.InputType` for detail.
   * @param fetches - Representation of the model output. See type description of `InferenceSession.OutputType` for
   * detail.
   * @param options - Optional. A set of options that controls the behavior of model inference.
   * @returns A promise that resolves to a map, which uses output names as keys and OnnxValue as corresponding values.
   */
  run(feeds: InferenceSession.FeedsType, fetches: InferenceSession.FetchesType,
      options?: InferenceSession.RunOptions): Promise<InferenceSession.ReturnType>;
  // #endregion
  // #region release()
  /**
   * Release the inference session and the underlying resources.
   */
  release(): Promise<void>;
  // #endregion
  // #region profiling
  /**
   * Start profiling.
   */
  startProfiling(): void;
  /**
   * End profiling.
   */
  endProfiling(): void;
  // #endregion
  // #region metadata
  /**
   * Get input names of the loaded model.
   */
  readonly inputNames: readonly string[];
  /**
   * Get output names of the loaded model.
   */
  readonly outputNames: readonly string[];
  // /**
  //  * Get input metadata of the loaded model.
  //  */
  // readonly inputMetadata: ReadonlyArray<Readonly<InferenceSession.ValueMetadata>>;
  // /**
  //  * Get output metadata of the loaded model.
  //  */
  // readonly outputMetadata: ReadonlyArray<Readonly<InferenceSession.ValueMetadata>>;
  // #endregion
}
```


orhan-akarsu commented Feb 25, 2024

@skottmckay When I used InferenceSession, it worked on the web but not in the React Windows application.

Web:
[screenshot: successful load in the browser]

Windows app:
[screenshot: rejected promise in the Windows app]

The code is identical on the web and in the application because it is served from the same server.

But if I use InferenceSessionFactory:
[screenshot: result when using InferenceSessionFactory]

@skottmckay (Contributor) commented:

@fs-eire might be able to help

@orhan-akarsu (Author) commented:
@skottmckay
```
Error: no available backend found. ERR: [cpu] TypeError: v.dirname is not a function
    at Ln (backend-impl.ts:101:9)
    at async a.create (inference-session-impl.ts:201:21)
    at async Crate.__loadModels (crate2.js?v1708949341097:71:17)
    at async Crate.initialize (crate2.js?v1708949341097:49:7)
    at async mainService (workerNiz.js?v${timestamp}:25:7)
```

I encountered this error when I did not use the Promise structure.


fs-eire commented Feb 26, 2024

I think the usage of Promise is not the problem. I need more information (error messages, a stack trace, or anything else) to understand why it does not work in the Windows app; just knowing that a Promise is rejected is not enough to find the root cause.
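One way to gather the information requested here is to wrap the loading call so a rejection surfaces its message and stack trace instead of only a "rejected" status. A hedged, self-contained sketch (the `loader` callback stands in for something like `() => InferenceSession.create(modelUrl, options)`, which is assumed, not shown in the thread):

```typescript
// Wrap any async loader: on failure, log the underlying error and its
// stack trace, then return null so the caller can handle the absence.
async function loadWithDiagnostics<T>(loader: () => Promise<T>): Promise<T | null> {
  try {
    return await loader();
  } catch (err) {
    const e = err instanceof Error ? err : new Error(String(err));
    console.error('model load failed:', e.message);
    console.error(e.stack);
    return null;
  }
}
```

Calling it as `await loadWithDiagnostics(() => InferenceSession.create(url))` would have printed the same kind of stack trace the reporter later posted, which is exactly the detail needed to diagnose the rejection.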

@github-actions (bot) commented:

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions bot added the stale label Mar 28, 2024
3 participants