Describe the issue
Error:

```
Error: no available backend found. ERR: [wasm] RuntimeError: Aborted(both async and sync fetching of the wasm failed). Build with -sASSERTIONS for more info.
```
I have already confirmed that the path to the "ort-wasm.wasm" file resolves correctly, but I still cannot figure out why this error occurs. When I load and run the model in https://github.com/microsoft/onnxruntime-nextjs-template, it works correctly with "wasm" as the backend. I changed all the dependencies and next.config in my project to match the onnxruntime template, but it made no difference.
To reproduce
Loading the model in modelHelper.ts as:

Method 1:

```typescript
import * as ort from "onnxruntime-web";

const session = await ort.InferenceSession.create(
  "./_next/static/chunks/pages/modelname.onnx",
  { executionProviders: ["wasm"], graphOptimizationLevel: "all" }
);
```

Method 2:

```typescript
import { InferenceSession } from "onnxruntime-web";

const modelUrl = "./modelname.onnx";
const session = await InferenceSession.create(modelUrl, {
  executionProviders: ["wasm"],
});
```
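As a possible diagnostic (not part of the original report): onnxruntime-web lets you override where the `.wasm` binaries are fetched from via `ort.env.wasm.wasmPaths`, set before the session is created, which rules out relative-path resolution problems. The CDN URL below is only an example; a local prefix such as the directory your build copies the binaries to would work equally well.

```typescript
import * as ort from "onnxruntime-web";

// Point the runtime at the directory containing ort-wasm*.wasm explicitly,
// instead of relying on the default path relative to the bundled script.
// Example location only; substitute your own hosted path if preferred.
ort.env.wasm.wasmPaths = "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/";

const session = await ort.InferenceSession.create("./modelname.onnx", {
  executionProviders: ["wasm"],
});
```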
Next config file:

```javascript
/** @type {import('next').NextConfig} */
const NodePolyfillPlugin = require("node-polyfill-webpack-plugin");
const CopyPlugin = require("copy-webpack-plugin");

module.exports = {
  reactStrictMode: true,
  // distDir: 'build',
  webpack: (config, {}) => {
    // Note: the webpack hook must return the config object.
    return config;
  },
};
```
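For comparison, the onnxruntime-nextjs-template copies the WebAssembly binaries into the build output from its webpack hook. A minimal sketch of that approach follows; the target path is an assumption and should be adjusted to wherever the session-creation code expects to fetch the files from.

```javascript
/** @type {import('next').NextConfig} */
const CopyPlugin = require("copy-webpack-plugin");

module.exports = {
  reactStrictMode: true,
  webpack: (config) => {
    config.plugins.push(
      new CopyPlugin({
        patterns: [
          // Copy ort-wasm*.wasm next to the page chunks so the relative
          // fetch issued by onnxruntime-web can find them at runtime.
          // Destination path is an assumption; match it to your setup.
          {
            from: "./node_modules/onnxruntime-web/dist/*.wasm",
            to: "static/chunks/pages/[name][ext]",
          },
        ],
      })
    );
    return config;
  },
};
```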
Urgency
This project is for a client and needs to be submitted before 10/11/2023.
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
^1.16.1
Execution Provider
'wasm'/'cpu' (WebAssembly CPU)