Error Code 9: API Usage Error (Target GPU SM 70 is not supported by this TensorRT release.) #2400
Comments
Wow, OK... Is there any way to get the old version's wheel? Or, which commit is right before the removal of SM 70, so I can build from source?
You may try this commit: f14d1d4.
You mean https://github.com/NVIDIA/TensorRT-LLM/tree/3c46c2794e7f6df48250a68de6240994a77a26a7? I see that most of the code changes came after this.
Another related question: is it possible to build the previous commit with TensorRT 10.5 instead of 10.4?
Yes, we release the code on a weekly basis, so there are lots of changes.
I haven't tried that, but I wouldn't recommend trying it, since it may raise unknown issues.
Thank you.
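(Editor's note, not from the thread: a minimal sketch of building that older commit for Volta, assuming the commit hash linked above is the intended one and reusing the build_wheel.py flags quoted later in this issue; the TensorRT install path is illustrative.)
git clone https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
git checkout 3c46c2794e7f6df48250a68de6240994a77a26a7   # commit linked in the discussion above
git submodule update --init --recursive
git lfs pull
# Build only for Volta (SM 70), pointing --trt_root at a TensorRT 10.4 install as recommended above
python3 ./scripts/build_wheel.py --trt_root /path/to/TensorRT-10.4 --cuda_architectures "70-real"
pip install ./build/tensorrt_llm-*.whl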
I had the same problem. |
System Info
TensorRT-LLM version: 0.15.0.dev2024102900
Using Tesla V100 SXM2 16GB.
Following the official instructions and using the official wheel.
Building BLIP2-OPT failed with the error in the title: Error Code 9: API Usage Error (Target GPU SM 70 is not supported by this TensorRT release).
Tried to build from source, specifying "70-real", with:
python3 ./scripts/build_wheel.py --trt_root /home/ubuntu/TensorRT-LLM/TensorRT-10.5.0.18 --cuda_architectures "70-real;75-real"
The produced wheel is 900 MB; still the same error.
The same wheel and command work on a T4.
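(A quick sanity check, not part of the original report: confirm the GPU's compute capability before building; a V100 reports 7.0, which is the SM 70 target named in the error.)
nvidia-smi --query-gpu=name,compute_cap --format=csv
# or, via PyTorch:
python3 -c "import torch; print(torch.cuda.get_device_capability(0))"   # (7, 0) on a V100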
Who can help?
No response
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
Building BLIP2-OPT failed with the official command shown in the example README, plus some of my own args.
Expected behavior
Works on V100
Actual behavior
Fails on V100 with Error Code 9: Target GPU SM 70 is not supported by this TensorRT release.
Additional notes
NIL