Describe the issue
I found that the computation order can significantly affect the computation result when using onnxruntime. Consider this graph (as built by the script below): Ceil(x) feeds both a Cast-to-float32 node producing o2 and an Abs -> Max(y, ...) chain producing o1.
The result should be the same regardless of whether o1 or o2 is computed first. However, I got totally different results. A reproducible script is attached below.
To reproduce
```python
import onnxruntime as ort
import onnx
import numpy as np
from numpy import testing
import torch


class Model0(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, y):
        a = torch.ceil(x)
        o2 = a.to(dtype=torch.float32)
        b = torch.abs(a)
        o1 = torch.max(y, b)
        return (o1, o2)


class Model1(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, y):
        a = torch.ceil(x)
        b = torch.abs(a)
        o1 = torch.max(y, b)
        o2 = a.to(dtype=torch.float32)
        return (o1, o2)


model_0 = Model0()
model_1 = Model1()

input_data_0 = np.float16(np.random.rand(1))
input_data_1 = np.float16(np.random.rand(23, 1, 3))  # if this has another shape, the two models pass the test
input_dict = {'i1': input_data_0, 'i2': input_data_1}
inputs = tuple(torch.from_numpy(v).to('cpu') for _, v in input_dict.items())

torch.onnx.export(model_0, inputs, '0.onnx', verbose=False, input_names=['i1', 'i2'],
                  output_names=['o1', 'o2'], opset_version=14, do_constant_folding=False)
torch.onnx.export(model_1, inputs, '1.onnx', verbose=False, input_names=['i1', 'i2'],
                  output_names=['o1', 'o2'], opset_version=14, do_constant_folding=False)

sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

sess_0 = ort.InferenceSession('0.onnx', providers=['CPUExecutionProvider'], sess_options=sess_options)
sess_res_0 = sess_0.run(["o1", "o2"], input_dict)
sess_1 = ort.InferenceSession('1.onnx', providers=['CPUExecutionProvider'], sess_options=sess_options)
sess_res_1 = sess_1.run(["o1", "o2"], input_dict)

try:
    for i in range(2):
        testing.assert_allclose(sess_res_0[i], sess_res_1[i])
    print("passed the test")
except AssertionError as e:
    print("did not pass the test")
    print(e)
```
The execution result is:
```
did not pass the test

Not equal to tolerance rtol=1e-07, atol=0

Mismatched elements: 1 / 1 (100%)
Max absolute difference: 0.9921732
Max relative difference: 0.9921732
 x: array([0.007827], dtype=float32)
 y: array([1.], dtype=float32)
```
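One way to narrow this down is to rerun both sessions with graph optimizations disabled and to dump the optimized graphs for inspection; if the outputs agree at ORT_DISABLE_ALL, an optimizer pass is the likely culprit. A minimal sketch, reusing `input_dict` and the exported `0.onnx`/`1.onnx` files from the script above (the `.optimized.onnx` filenames are just my own choice):

```python
import onnxruntime as ort

# Rerun both models with every graph optimization disabled. If the two
# outputs now match, the divergence comes from a transform pass rather
# than from the kernels themselves.
opts = ort.SessionOptions()
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL
for path in ('0.onnx', '1.onnx'):
    sess = ort.InferenceSession(path, providers=['CPUExecutionProvider'], sess_options=opts)
    print(path, sess.run(["o1", "o2"], input_dict))

# Write the fully optimized graphs to disk so they can be diffed against
# each other and against the original exports (e.g. in Netron).
for path in ('0.onnx', '1.onnx'):
    opts2 = ort.SessionOptions()
    opts2.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
    opts2.optimized_model_filepath = path.replace('.onnx', '.optimized.onnx')
    ort.InferenceSession(path, providers=['CPUExecutionProvider'], sess_options=opts2)
```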
I also found that if the input shape is changed, the two models produce the same result, so the mismatch appears to be shape-dependent. That is very strange; I believe this is likely a bug. Can anyone help?
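To see which of the two sessions is actually wrong, each can be compared against the PyTorch eager outputs; since the two models compute identical values (only the statement order differs), both references should match. A sketch reusing `model_0`, `model_1`, `inputs`, `sess_res_0`, and `sess_res_1` from the repro script; the tolerances are my own loose choice for float16:

```python
# Compute the PyTorch eager reference outputs for both models.
with torch.no_grad():
    ref_0 = [t.numpy() for t in model_0(*inputs)]
    ref_1 = [t.numpy() for t in model_1(*inputs)]

# Report which ONNX Runtime result deviates from its reference.
for name, ref, res in (("model_0", ref_0, sess_res_0), ("model_1", ref_1, sess_res_1)):
    for i, out in enumerate(["o1", "o2"]):
        ok = np.allclose(ref[i], res[i], rtol=1e-3, atol=1e-3)
        print(f"{name} {out}: {'matches' if ok else 'DIFFERS from'} PyTorch")
```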
Urgency
It is very urgent.
Platform
Linux
OS Version
Ubuntu 22.04
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.15.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response