DirectML EP gives wrong results on both Integrated and Discrete GPUs #19837
Comments
@fdwr can you take a look?
@smk2007 @martinb35 (who are focused on this more so than I am now)
Without the model, it will be a challenge to isolate, and so we can only offer ideas on how you might be able to find it. Some approaches I take include:
(updated 2024-03-20)
Hi @fdwr. Thanks for the reply! How can I unregister ops in OperatorRegistration.cpp? Do I just set …
Easiest is just to comment out the potential lines.
I'm trying to run inference on a model in C++, but I get completely wrong results when I run it with the DirectML EP; running it on the CPU works just fine.
Sample outputs:
CPU (Correct):
7.9641705378890038e-03
5.1060058176517487e-03
1.6217185184359550e-03
3.2245237380266190e-03
1.9704271107912064e-03
-2.6769749820232391e-03
-3.4305416047573090e-03
-5.9413574635982513e-03
-8.1969611346721649e-03
-5.6667793542146683e-03
-4.0454901754856110e-03
-4.5984257012605667e-03
-2.4918280541896820e-03
-5.0726905465126038e-04
-1.3493187725543976e-03
GPU, DirectML EP (Incorrect):
7.63121
2.10706
-15.587
-8.67914
-20.8112
-37.6199
-15.6217
-13.4909
45.5337
82.2263
95.6195
45.5667
124.225
188.378
167.306
142.281
Init code for reference:
When I run inference, it throws this warning:
I logged the run with verbose logging, and some nodes are assigned to the CPU:
Additional Info:
CPU: Intel i9-12900H
Integrated GPU: Intel(R) Iris(R) Xe Graphics
Discrete GPU: RTX 3080Ti 16GB
To reproduce
Unfortunately I can't reveal the model. :(
Urgency
It is fairly urgent due to a project deadline.
Platform
Windows
OS Version
11
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.17.0
ONNX Runtime API
C++
Architecture
X64
Execution Provider
DirectML
Execution Provider Library Version
1.13.1
Model File
:(
Is this a quantized model?
No