
[Performance] TensorrtEP bad allocation #18887

Open
Liuqh12 opened this issue Dec 20, 2023 · 1 comment
Labels
ep:CUDA - issues related to the CUDA execution provider
ep:TensorRT - issues related to the TensorRT execution provider
platform:windows - issues related to the Windows platform
quantization - issues related to quantization
stale - issues that have not been addressed in a while; categorized by a bot

Comments


Liuqh12 commented Dec 20, 2023

Describe the issue

This is my session code:

import onnxruntime as rt

# model_name: path to the quantized ONNX model
sess_options = rt.SessionOptions()
sess_options.enable_profiling = True
sess = rt.InferenceSession(model_name, sess_options=sess_options, providers=['TensorrtExecutionProvider'])

The error is as follows:

2023-12-20 15:48:33.5261320 [E:onnxruntime:, inference_session.cc:1799 onnxruntime::InferenceSession::Initialize::<lambda_197d3b7975b9bacd9690b0adb4064ca2>::operator ()] Exception during initialization: bad allocation
Traceback (most recent call last):
  File "d:\lqh12\mlort\run.py", line 24, in <module>
    sess = rt.InferenceSession(model_name, sess_options=sess_options, providers=['TensorrtExecutionProvider'])
  File "C:\Users\10-20\anaconda3\envs\ort\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\10-20\anaconda3\envs\ort\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 463, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: bad allocation

Task Manager snip:
[Screenshot 2023-12-20 160042: Task Manager]
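For reference, a quick way to check whether the failure is specific to the TensorRT EP is to let the session fall back to CUDA or CPU. This fallback list is a diagnostic sketch, not what was originally run:

import onnxruntime as rt

# If TensorRT cannot initialize, ORT falls back to the next provider in the list.
sess = rt.InferenceSession(
    model_name,
    sess_options=sess_options,
    providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'],
)
print(sess.get_providers())  # shows which providers were actually assigned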

To reproduce

First, run the pre-processing step from the onnxruntime inference sample:

python -m onnxruntime.quantization.preprocess --input my_model.onnx --output my_model_p.onnx

Second, run quantize_static:

from onnxruntime.quantization import QuantFormat, QuantType, quantize_static

# dr is a CalibrationDataReader over representative inputs (a sketch follows below)
quantize_static(
    'my_model_p.onnx',
    'nmi8.onnx',
    dr,
    quant_format=QuantFormat.QDQ,
    per_channel=False,
    weight_type=QuantType.QInt8
)
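The calibration data reader dr is not shown in the report; for completeness, a minimal sketch of one is below. The input name 'input' and the 1x3x512x512 shape are assumptions for illustration, not taken from the actual model:

import numpy as np
from onnxruntime.quantization import CalibrationDataReader

class GfpganDataReader(CalibrationDataReader):
    # Hypothetical reader feeding representative inputs for calibration.
    def __init__(self, samples):
        self._iter = iter(samples)

    def get_next(self):
        batch = next(self._iter, None)
        # Return {input_name: array} per batch, then None when exhausted.
        return None if batch is None else {'input': batch}

dr = GfpganDataReader([np.random.rand(1, 3, 512, 512).astype(np.float32)])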

Next, create the session to run inference:

sess = rt.InferenceSession('nmi8.onnx', providers=['TensorrtExecutionProvider'])
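A "bad allocation" during Initialize() is typically an out-of-memory condition while TensorRT builds its engine. A possible mitigation sketch is to cap the builder workspace and cache the built engine; the 2 GB cap and the cache path below are assumed values, not settings verified against this model:

import onnxruntime as rt

trt_options = {
    'trt_max_workspace_size': 2 * 1024 * 1024 * 1024,  # cap builder memory; assumed 2 GB
    'trt_engine_cache_enable': True,                    # reuse built engines across runs
    'trt_engine_cache_path': './trt_cache',
    'trt_int8_enable': True,                            # model is QDQ INT8
}
sess = rt.InferenceSession('nmi8.onnx',
                           providers=[('TensorrtExecutionProvider', trt_options)])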

onnxruntime was installed with: pip install onnxruntime-gpu

Windows version, from winver: 22H2, build 22621.2861, family

Packages in the conda environment:

certifi            2023.11.17
charset-normalizer 3.3.2
coloredlogs        15.0.1
flatbuffers        23.5.26
graphsurgeon       0.4.6
humanfriendly      10.0
idna               3.6
mpmath             1.3.0
numpy              1.26.2
onnx               1.15.0
onnx-graphsurgeon  0.3.12
onnxruntime-gpu    1.16.3
opencv-python      4.8.1.78
packaging          23.2
Pillow             10.1.0
pip                23.3.1
polygraphy         0.49.0
protobuf           4.25.1
pyreadline3        3.4.1
requests           2.31.0
setuptools         68.0.0
sympy              1.12
tensorrt           8.6.0
torch              1.13.1
torchaudio         0.13.1
torchvision        0.14.1
typing_extensions  4.8.0
uff                0.6.9
urllib3            2.1.0
wheel              0.41.2

Urgency

No response

Platform

Windows

OS Version

win11

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.16.3

ONNX Runtime API

Python

Architecture

X64

Execution Provider

TensorRT

Execution Provider Library Version

CUDA 11.8, cudnn-windows-x86_64-8.9.7.29_cuda11-archive, TensorRT-8.6.0.12.for.cuda-11.8

Model File

ONNX model exported from GFPGAN

Is this a quantized model?

Yes

@github-actions github-actions bot added ep:CUDA issues related to the CUDA execution provider ep:TensorRT issues related to TensorRT execution provider platform:windows issues related to the Windows platform quantization issues related to quantization labels Dec 20, 2023
github-actions bot commented:
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions github-actions bot added the stale issues that have not been addressed in a while; categorized by a bot label Jan 19, 2024