
Converted ONNX model takes much longer to load and uses much more memory #18975

Closed
NofAssiri opened this issue Jan 2, 2024 · 1 comment
Labels
platform:windows issues related to the Windows platform

Comments

NofAssiri commented Jan 2, 2024

Describe the issue

I have an ML.NET model that is 47 MB zipped, loads in about 1 second, and uses roughly 100–150 MB of memory. After converting it to ONNX format with the ConvertToOnnx method and an empty list for the input data type:

- the model size grows to 112 MB,
- it loads in about 6 seconds with onnxruntime 1.15.1 or earlier, but takes about 9 minutes with any newer version,
- memory usage (RAM) spikes to about 1,500 MB, then settles around 700 MB.

Notes: I tried the Python optimizer and simplifier libraries; the model size grew from 112 MB to 150 MB, while load time and memory usage stayed the same. I also removed some unused inputs and outputs.
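To make load-time comparisons across onnxruntime versions reproducible, a small timing harness helps. A minimal sketch in Python (illustrative only; the `loader` callable is a stand-in, and in practice it would construct an onnxruntime `InferenceSession` for the converted model):

```python
import time

def measure_load(loader, repeats=3):
    """Call loader() `repeats` times and return a list of wall-clock
    durations in seconds, e.g. to compare onnxruntime versions."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        loader()  # e.g. lambda: ort.InferenceSession("model.onnx")
        timings.append(time.perf_counter() - start)
    return timings
```

Running the same harness against each onnxruntime version isolates session-creation time from the rest of the pipeline.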

To reproduce

conversion code

var ctx = new MLContext();
var transformer = ctx.Model.Load(@"C:\models\mlModel.zip", out var schema);
// NOTE: the generic type argument was stripped by the issue formatting;
// "InputData" is a placeholder for the actual input row class.
var dummy = new List<InputData>();
var dummyData = ctx.Data.LoadFromEnumerable(dummy);
using (var onnx = File.Open(@"C:\models\onnxModel.onnx", FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
    ctx.Model.ConvertToOnnx(transformer, dummyData, onnx);
}

load code

var ctx = new MLContext();
var pipeline = ctx.Transforms.ApplyOnnxModel(@"C:\models\onnxModel.onnx");

Urgency

Yes, it is urgent.

Platform

Windows

OS Version

11 Pro

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

Tried versions 1.10 through 1.16.

ONNX Runtime API

C#

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

Model File

Sorry, but I cannot share it.

Is this a quantized model?

No

@github-actions github-actions bot added the platform:windows issues related to the Windows platform label Jan 2, 2024
wschin (Contributor) commented Jan 3, 2024

ML.NET is not part of onnxruntime. For ML.NET problems, please open an issue at https://github.com/dotnet/machinelearning. If you just want to run an ONNX model with C#, please refer to onnxruntime's C# documentation: https://onnxruntime.ai/docs/get-started/with-csharp.html. I will close this issue since it is unrelated to this repo. Please feel free to reopen it if you see the same problem with onnxruntime itself.

@wschin wschin closed this as completed Jan 3, 2024