
Mismatch in results for TensorRT session and CUDA session #20986

Open
akmalmasud96 opened this issue Jun 10, 2024 · 5 comments

@akmalmasud96

Describe the issue

I am creating an onnxruntime session with the TensorRT execution provider. When evaluating the model's output, the comparison between the CUDA, CPU, and TensorRT sessions only passes with tolerance levels (atol and rtol) of at most 1e-3.

However, when I tested the model with Nvidia's Polygraphy tool, the comparison passes with both atol and rtol set to 1e-5.

To reproduce

The following code is used for inference:

import numpy as np
import onnxruntime as ort


def main():
    # CUDA-only session
    providers = [
        ('CUDAExecutionProvider', {
            'device_id': 0,
            'cudnn_conv_algo_search': 'DEFAULT',
        })
    ]
    # TensorRT session (falls back to CUDA for unsupported nodes)
    providers_2 = [
        ('TensorrtExecutionProvider', {
            'device_id': 0,                       # Select GPU to execute
            'trt_engine_cache_enable': True,
            'trt_engine_cache_path': 'trt_models/',
        }),
        ('CUDAExecutionProvider', {
            'device_id': 0,
            'cudnn_conv_algo_search': 'DEFAULT',
        })
    ]

    model_path = "./onnx_test_model/voxceleb_resnet293_LM.onnx"
    sess_options = ort.SessionOptions()
    sess_options.inter_op_num_threads = 1
    sess_options.intra_op_num_threads = 1
    session_1 = ort.InferenceSession(model_path, sess_options=sess_options, providers=providers)
    session_2 = ort.InferenceSession(model_path, sess_options=sess_options, providers=providers_2)

    # Compare the two sessions on random inputs of several lengths
    for shape in range(860, 900, 10):
        test = np.random.randn(1, shape, 80).astype(np.float32)
        embedding_1 = session_1.run(output_names=["embs"], input_feed={"feats": test})[0][0]
        embedding_2 = session_2.run(output_names=["embs"], input_feed={"feats": test})[0][0]
        comparison_result = np.allclose(embedding_1, embedding_2, rtol=1e-05, atol=1e-05)
        print("comparison_result", comparison_result)


if __name__ == "__main__":
    main()

The model can be downloaded from: https://huggingface.co/Wespeaker/wespeaker-voxceleb-resnet293-LM/blob/main/voxceleb_resnet293_LM.onnx

The Polygraphy command is as follows:
polygraphy run onnx_test_model/voxceleb_resnet293_LM.onnx --trt --onnxrt --atol 1e-5 --rtol 1e-5 --input-shapes feats:[1,800,80]

I am using the docker image : nvcr.io/nvidia/tensorrt:24.05-py3

Urgency

I need to complete this as early as possible; my deadline is this Friday.

Platform

Linux

OS Version

Ubuntu 22.04.4 LTS

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.18

ONNX Runtime API

Python

Architecture

X86

Execution Provider

Default CPU, CUDA, TensorRT

Execution Provider Library Version

cuda_12.4.r12.4, TensorRT-10.0.1.6

@github-actions bot added labels ep:CUDA (issues related to the CUDA execution provider), ep:TensorRT (issues related to the TensorRT execution provider), and model:transformer (issues related to a transformer model: BERT, GPT2, Hugging Face, Longformer, T5, etc.) on Jun 10, 2024
@jywu-msft
Member

I haven't used polygraphy before, but it looks to me like the comparisons aren't exactly apples to apples here?
for OnnxRuntime, you're comparing TensorRT EP vs CUDA EP with a range of shapes [1, 860, 80], [1, 870, 80], etc. with random data.
for Polygraphy, you're comparing OnnxRuntime CPU EP vs TensorRT with the explicit shape [1, 800, 80] and random data.
In theory, if you feed in the same exact input data/shape (not random) to OnnxRuntime TensorRT EP and polygraphy with TensorRT backend, they should both return the same output (assuming they are using the same TensorRT version). Can you confirm that is the case?
+@kevinch-nv for any advice he can provide with using polygraphy.
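
A minimal sketch of such a fixed-input comparison on the ONNX Runtime side (not from the thread; the model path and tensor names are taken from the repro above, and the input is saved so the identical data can be reused with other tools instead of fresh random data):

import numpy as np
import onnxruntime as ort

model_path = "./onnx_test_model/voxceleb_resnet293_LM.onnx"

# Generate one fixed input and save it to disk so every backend sees the same data.
rng = np.random.default_rng(0)
feats = rng.standard_normal((1, 800, 80)).astype(np.float32)
np.save("feats.npy", feats)

trt_session = ort.InferenceSession(
    model_path,
    providers=[("TensorrtExecutionProvider", {"device_id": 0}),
               ("CUDAExecutionProvider", {"device_id": 0})],
)
cpu_session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

trt_out = trt_session.run(["embs"], {"feats": feats})[0]
cpu_out = cpu_session.run(["embs"], {"feats": feats})[0]
print("max abs diff:", np.abs(trt_out - cpu_out).max())
print("allclose at 1e-5:", np.allclose(trt_out, cpu_out, rtol=1e-5, atol=1e-5))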

@akmalmasud96
Author

@jywu-msft

Thanks for replying. Let me explain in a bit more detail. Polygraphy uses its own CPU ONNXRuntime session to compare the output with TensorRT. The input shape we provide is used to generate the same random input for both runners, and that comparison passes the test. We can provide any input shape we want.

When I used ONNXRuntime to create the TensorRT session, the test did not pass when comparing the output with the ONNXRuntime CPU session. The same goes for the ONNXRuntime CUDA session compared with the CPU session; that test did not pass either. I think some optimization happens while the ONNX model runs on the GPU, which reduces the accuracy.

I also tested this with Polygraphy: I changed the ONNXRuntime session from the CPU provider to the GPU provider in Polygraphy and compared it with Polygraphy's TensorRT runner. That comparison failed, while it passed when I used the CPU provider.

Additionally, I provided dynamic shape profiles while building the engine through ONNXRuntime, but it did not help.
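
For reference, dynamic-shape profiles can be passed to the TensorRT EP through provider options; a minimal sketch, assuming an ONNX Runtime version that supports the trt_profile_*_shapes options (the shape bounds below are illustrative placeholders, not the values used in these benchmarks):

trt_options = {
    "device_id": 0,
    "trt_engine_cache_enable": True,
    "trt_engine_cache_path": "trt_models/",
    # Explicit optimization profile for the dynamic time axis of "feats"
    # (min/opt/max bounds are illustrative placeholders)
    "trt_profile_min_shapes": "feats:1x200x80",
    "trt_profile_opt_shapes": "feats:1x800x80",
    "trt_profile_max_shapes": "feats:1x2000x80",
}
providers_2 = [("TensorrtExecutionProvider", trt_options),
               ("CUDAExecutionProvider", {"device_id": 0})]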

I also compared the output with Torch. The most similar results were between the ONNXRuntime CPU session and Polygraphy's TensorRT runner. The benchmarks I ran are attached.

[attached image: benchmark results]

@jywu-msft
Member

Thanks for the explanation. I'm going to reopen this.
It's probably worth taking a closer look to see where the difference is coming from, e.g. whether there's some optimization pass in ORT that changes the graph. +@chilo-ms, can you take a look when you have time?
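
One quick way to check that hypothesis (a sketch, assuming the same model and repro script as above): disable ORT graph-level optimizations for the GPU session and rerun the comparison, which separates graph rewrites from kernel-level numeric differences.

import onnxruntime as ort

sess_options = ort.SessionOptions()
# Turn off all ORT graph-level optimization passes for this session.
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL
session = ort.InferenceSession(
    "./onnx_test_model/voxceleb_resnet293_LM.onnx",
    sess_options=sess_options,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)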

@jywu-msft jywu-msft reopened this Jun 18, 2024
@akmalmasud96
Author

Hi @jywu-msft, any update on this?

@jingyanwangms
Contributor

@akmalmasud96 I tried the repro script and it passed (comparison_result is True for all four shapes). Since atol and rtol are set to 1e-05, this is the expected behavior as far as I understand. Please confirm.
I'm using the same Docker image with the latest onnxruntime-gpu (1.19.2). Please give it a try; the issue may have been fixed in the latest release.
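
To try the suggested upgrade inside the same container (assuming a pip-based install), something like:

pip install --upgrade onnxruntime-gpu
python -c "import onnxruntime; print(onnxruntime.__version__)"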
