Mismatch in results for TensorRT session and CUDA session #20986
Comments
I haven't used Polygraphy before, but it looks to me like the comparisons aren't exactly apples to apples here?
Thanks for replying. Let me explain this in a bit more detail. Polygraphy uses its own CPU ONNX Runtime session to compare outputs with TensorRT. The input shape we provide is also used to generate the same random input for the CPU ONNX Runtime session, and that comparison passes for any input shape we choose.

However, when I create the TensorRT session through ONNX Runtime and compare its output with the ONNX Runtime CPU session, the test does not pass. The same happens when I compare the ONNX Runtime CUDA session against the CPU session. I suspect some optimization applied while the ONNX model runs on the GPU is costing accuracy. I also tested this with Polygraphy itself: I switched Polygraphy's ONNX Runtime session from the CPU provider to the GPU provider and compared it with Polygraphy's TensorRT runner, and that comparison failed, while it passed with the CPU provider.

Additionally, I supplied dynamic-shape profiling while building the engine through ONNX Runtime, but it did not help. I also compared the outputs with Torch; the closest match was between the ONNX Runtime CPU session and Polygraphy's TensorRT runner. Attached are the benchmarks I did.
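A minimal sketch of passing explicit dynamic-shape profiles to the TensorRT execution provider, as mentioned above. This assumes the trt_profile_* provider options available in recent ONNX Runtime releases; the model path and input layout are taken from the repro below, and the min/opt/max ranges here are placeholder values for illustration only:

```python
import numpy as np
import onnxruntime as ort

MODEL = "onnx_test_model/voxceleb_resnet293_LM.onnx"  # path from the repro below

# Explicit optimization-profile shapes for the TensorRT EP.
# The ranges below are placeholders; set them to the shapes you actually serve.
trt_options = {
    "trt_profile_min_shapes": "feats:1x200x80",
    "trt_profile_opt_shapes": "feats:1x800x80",
    "trt_profile_max_shapes": "feats:1x2000x80",
}

sess = ort.InferenceSession(
    MODEL,
    providers=[
        ("TensorrtExecutionProvider", trt_options),
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)

feats = np.random.randn(1, 800, 80).astype(np.float32)
out = sess.run(None, {"feats": feats})[0]
print(out.shape)
```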
Thanks for the explanation. I'm going to reopen this.
Hi @jywu-msft, any update on this?
@akmalmasud96 I tried the repro script and it passed (comparison_result True for all 4 dimensions). Since the atol and rtol values are set to 1e-05, this is the expected behavior as far as I understand. Please confirm.
Describe the issue
I am creating an ONNX Runtime session with the TensorRT execution provider. When evaluating the model's outputs, the CPU, CUDA, and TensorRT sessions only agree at tolerance values (atol and rtol) of 1e-3 at best; tighter tolerances fail.
However, when I test the same model with NVIDIA's Polygraphy tool, the comparison passes with both atol and rtol set to 1e-5.
To reproduce
The following code is used for the inference:
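A minimal sketch of the comparison described here, standing in for the original script. It assumes the model path, input name feats, and shape [1, 800, 80] from the Polygraphy command below; the tolerance values are the ones reported above:

```python
import numpy as np
import onnxruntime as ort

MODEL = "onnx_test_model/voxceleb_resnet293_LM.onnx"

# The same random input is fed to every session.
feats = np.random.randn(1, 800, 80).astype(np.float32)

provider_sets = {
    "cpu": ["CPUExecutionProvider"],
    "cuda": ["CUDAExecutionProvider", "CPUExecutionProvider"],
    "trt": ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
}

# Run the model once per execution provider and keep the first output.
outputs = {}
for name, providers in provider_sets.items():
    sess = ort.InferenceSession(MODEL, providers=providers)
    outputs[name] = sess.run(None, {"feats": feats})[0]

# Reported behavior: these checks pass at rtol=atol=1e-3 but not at 1e-5.
for name in ("cuda", "trt"):
    for tol in (1e-3, 1e-5):
        ok = np.allclose(outputs["cpu"], outputs[name], rtol=tol, atol=tol)
        print(f"cpu vs {name} @ tol={tol}: {ok}")
```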
The model can be downloaded from: https://huggingface.co/Wespeaker/wespeaker-voxceleb-resnet293-LM/blob/main/voxceleb_resnet293_LM.onnx
The Polygraphy command is as follows:
polygraphy run onnx_test_model/voxceleb_resnet293_LM.onnx --trt --onnxrt --atol 1e-5 --rtol 1e-5 --input-shapes feats:[1,800,80]
I am using the Docker image: nvcr.io/nvidia/tensorrt:24.05-py3
Urgency
I need to resolve this as early as possible; my deadline is this Friday.
Platform
Linux
OS Version
Ubuntu 22.04.4 LTS
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.18
ONNX Runtime API
Python
Architecture
X86
Execution Provider
Default CPU, CUDA, TensorRT
Execution Provider Library Version
cuda_12.4.r12.4, TensorRT-10.0.1.6