
[Build] #19062

Closed · panbo-bridge opened this issue Jan 9, 2024 · 0 comments

Labels: build (build issues; typically submitted using template), ep:CUDA (issues related to the CUDA execution provider), platform:windows (issues related to the Windows platform)

@panbo-bridge

Describe the issue

I want to use onnxruntime-gpu to accelerate model inference, and I need to support Chinese-language environments. I have found that when onnxruntime-gpu is installed in a path containing Chinese characters, onnxruntime-provider-cuda cannot be loaded. Is there a way to solve this problem?
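
(Not part of the original report: a minimal reproduction sketch, assuming a placeholder model file model.onnx, showing how the failure surfaces. When the CUDA provider DLL cannot be loaded, the session falls back to the CPU provider, which the active provider list reveals.)

# Minimal reproduction sketch (illustration only; model.onnx is a placeholder).
import onnxruntime as ort

print("onnxruntime installed at:", ort.__file__)
print("available providers:", ort.get_available_providers())

sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# If onnxruntime_providers_cuda.dll fails to load, onnxruntime logs an error
# and falls back to CPU; the active provider list makes that visible.
print("providers in use:", sess.get_providers())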

Urgency

No response

Target platform

windows

Build script

pip install onnxruntime_gpu-1.16.3-cp38-cp38-win_amd64.whl
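
(Not part of the build script above: a quick diagnostic sketch, an assumption rather than an official fix, to confirm whether the wheel landed in a path containing non-ASCII characters. Creating a virtual environment under an ASCII-only path and reinstalling the wheel there is a common interim workaround.)

# Diagnostic sketch: flag non-ASCII characters in the install path,
# which is the condition described in this issue.
import pathlib
import onnxruntime as ort

install_dir = pathlib.Path(ort.__file__).parent
non_ascii = [ch for ch in str(install_dir) if ord(ch) > 127]

if non_ascii:
    print("onnxruntime-gpu is installed under a non-ASCII path:", install_dir)
    print("Interim workaround: create a virtual environment in an ASCII-only "
          "path (e.g. C:\\venvs\\ort) and reinstall the wheel there.")
else:
    print("Install path is ASCII-only:", install_dir)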

Error / output

(screenshot of the error output attached in the original issue; text not recoverable from this copy)

Visual Studio Version

No response

GCC / Compiler Version

No response

@panbo-bridge added the build label on Jan 9, 2024
@github-actions bot added the ep:CUDA and platform:windows labels on Jan 9, 2024