
Could not build solution #19890

Closed
hy846130226 opened this issue Mar 13, 2024 · 8 comments
Labels: build (build issues; typically submitted using template) · ep:TensorRT (issues related to TensorRT execution provider) · platform:windows (issues related to the Windows platform)

Comments

@hy846130226

Describe the issue

I pulled the v1.16.3 code and used the following command:

.\build.bat --cudnn_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" --use_tensorrt --tensorrt_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\TensorRT-8.6.1.6" --cmake_generator "Visual Studio 17 2022"

but once I got the solution, I could not build the nvonnxparser_static project:

[screenshot: Visual Studio build error]

Urgency

No response

Target platform

C++

Build script

.\build.bat --cudnn_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" --use_tensorrt --tensorrt_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\TensorRT-8.6.1.6" --cmake_generator "Visual Studio 17 2022"

Error / output

C2664: "bool google::protobuf::TextFormat::Parse(google::protobuf::io::ZeroCopyInputStream *, google::protobuf::Message *)": cannot convert "onnx::ModelProto *" to "google::protobuf::Message *" (project nvonnxparser_static, file D:\onnxruntime\onnxruntime\build\Windows\Debug\_deps\onnx_tensorrt-src\ModelImporter.cpp, line 358)

Visual Studio Version

No response

GCC / Compiler Version

No response

hy846130226 added the build label on Mar 13, 2024
github-actions bot added the ep:CUDA, ep:TensorRT, and platform:windows labels on Mar 13, 2024
@jywu-msft
Member

jywu-msft commented Mar 13, 2024

Please build the RelWithDebInfo or Release flavor.
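
For example, the reporter's command with a release-flavored configuration selected (a sketch; --config is build.bat's flag for choosing the build flavor, and the paths are copied from the original report):

.\build.bat --config RelWithDebInfo --cudnn_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" --use_tensorrt --tensorrt_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\TensorRT-8.6.1.6" --cmake_generator "Visual Studio 17 2022"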

jywu-msft removed the ep:CUDA label on Mar 13, 2024
@jywu-msft
Member

@yf711 can you check if Debug build has new issues?

@hy846130226
Author

Thanks for your help, @jywu-msft!
The Release version builds successfully.
But I also have a question:

I use the NuGet package to load onnxruntime-gpu version 1.16.3, and I can see folders like this:
[screenshot: NuGet package folder contents]

My question is:
How can I build the 1.16.3 onnxruntime.dll and onnxruntime.lib myself?

@yf711
Contributor

yf711 commented Mar 14, 2024

@hy846130226 You can build ORT with --build_shared_lib to generate onnxruntime.dll/lib

Actually, I tried your command to build ORT in debug mode and it passed on my side.
One thing to call out: if you would like to use the open-source onnx-tensorrt parser, please add --use_tensorrt_oss_parser to your ORT build command. Without this arg, the default is the built-in parser from your TensorRT 8.6 binary.
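
A sketch of the reporter's command with the shared-library flag added (--build_shared_lib is the flag named above; everything else is copied from the original report):

.\build.bat --build_shared_lib --cudnn_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" --use_tensorrt --tensorrt_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\TensorRT-8.6.1.6" --cmake_generator "Visual Studio 17 2022"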

@hy846130226
Author

> @hy846130226 You can build ORT with --build_shared_lib to generate onnxruntime.dll/lib
>
> Actually, I tried your command to build ORT in debug mode and it passed on my side. One thing to call out: if you would like to use the open-source onnx-tensorrt parser, please add --use_tensorrt_oss_parser to your ORT build command. Without this arg, the default is the built-in parser from your TensorRT 8.6 binary.

Hi @yf711,
Thank you very much!

I don't understand "You can build ORT with --build_shared_lib to generate onnxruntime.dll/lib". Could you give me more details on what I should do?

.\build.bat --cudnn_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" --use_tensorrt --tensorrt_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\TensorRT-8.6.1.6" --cmake_generator "Visual Studio 17 2022" --build_shared_lib

Is this what you mean?

@hy846130226
Author

> @hy846130226 You can build ORT with --build_shared_lib to generate onnxruntime.dll/lib
>
> Actually, I tried your command to build ORT in debug mode and it passed on my side. One thing to call out: if you would like to use the open-source onnx-tensorrt parser, please add --use_tensorrt_oss_parser to your ORT build command. Without this arg, the default is the built-in parser from your TensorRT 8.6 binary.

My TensorRT version is 8.6.1.6, and according to the official ONNX Runtime website, "The TensorRT execution provider for ONNX Runtime is built and tested with TensorRT 8.6."

So I don't think I need the --use_tensorrt_oss_parser flag.

@hy846130226
Author

Here is a snapshot of the folder after I built ALL_BUILD:
[screenshot: build output folder listing]
There is no onnxruntime.dll or onnxruntime.lib?
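
A hedged aside: onnxruntime.dll and onnxruntime.lib are only generated when --build_shared_lib is actually passed, and with the Visual Studio generator the outputs normally land in a per-configuration subfolder. One way to check, assuming the usual multi-config layout (the build\Windows\Release\Release path is an assumption, not something stated in this thread):

dir build\Windows\Release\Release\onnxruntime.dll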

@hy846130226
Author

Hi @yf711,

Thanks for your help.
I have resolved the problem by following your advice!
