Some thoughts on defining ULP tolerance of exp op #288
Comments
Note: Appendix A has since moved to chapter 48.
Are the float64 input values cast to float32 before being passed through the baseline reference? If not, that can contribute additional differences (not enough on its own to account for the variance, but a little), since DML would receive a rounded float32 input tensor whereas the baseline would receive the original full-precision value. I ask because some of the values don't appear to be directly representable in float32, but rather fall between representable float32 values, e.g.
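As a minimal sketch of that effect (using NumPy and an arbitrary example value, not the actual test inputs), rounding the input to float32 before computing the reference shifts the baseline by a small but nonzero amount:

```python
import numpy as np

# 0.1 is not exactly representable in binary floating point, so its
# float32 rounding differs from the float64 value.
x64 = np.float64(0.1)
x32 = np.float32(x64)  # what a float32 backend would receive

# Compute both references in float64 to isolate the input rounding:
ref_full = np.exp(x64)              # baseline from the original input
ref_cast = np.exp(np.float64(x32))  # baseline from the rounded input

print(ref_full - ref_cast)  # small but nonzero
```

Here the discrepancy comes purely from input rounding, before any difference in how the backend evaluates exp itself.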
Closing this, as @fdwr has already provided the tolerance criteria on #265 (comment).
Using the ULP-based comparison method, we ran exp op float32 tests on the WebNN Native DirectML backend and observed fluctuating ULP distances across different inputs. Referring to Appendix A: Vulkan Environment for SPIR-V / Precision and Operation of SPIR-V Instructions, a single-precision tolerance is defined for the exp instruction (see Table 85, Precision of GLSL.std.450 Instructions).
Then we could have this ULP distance information:

According to the above observations, I don't think a fixed-value ULP tolerance applies to the exp op. @wchao1115 PTAL, thanks.
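Since the Vulkan-style bound scales with the input, the check can be written with an input-dependent tolerance rather than a constant. The sketch below assumes the exp bound in the cited table is 3 + 2·|x| ULP (verify against the table itself before relying on it) and uses NumPy's float32 spacing as the ULP size near the reference value:

```python
import numpy as np

def exp_tolerance_ulps(x):
    # Assumed single-precision bound for exp from the SPIR-V precision
    # table: 3 + 2 * |x| ULP. Check the spec table before relying on it.
    return 3.0 + 2.0 * abs(float(x))

def exp_within_tolerance(x, actual, reference):
    # np.spacing gives the gap to the next float32, i.e. 1 ULP near `reference`.
    one_ulp = np.spacing(np.float32(abs(reference)))
    diff = abs(np.float32(actual) - np.float32(reference))
    return diff <= exp_tolerance_ulps(x) * one_ulp

# A fixed threshold tuned near x = 1 (5 ULPs) would be far too tight
# near x = 10 (23 ULPs), consistent with the fluctuation observed above.
print(exp_tolerance_ulps(1.0), exp_tolerance_ulps(10.0))  # 5.0 23.0
```

This makes the observed input-dependent variance expected behavior rather than a test failure, which is the substance of the argument against a single fixed ULP value.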