🐛 Describe the bug

Failed dtypes: float32, float16, and bfloat16. AMP passed.

python benchmarks/dynamo/huggingface.py --accuracy --float32 -d xpu -n10 --training --only DebertaV2ForQuestionAnswering --backend=inductor

xpu train DebertaV2ForQuestionAnswering
E1220 16:43:35.601000 756971 site-packages/torch/_dynamo/utils.py:2307] RMSE (res-fp64): 0.53515, (ref-fp64): 0.01636 and shape=torch.Size([]). res.dtype: torch.float32, multiplier: 3.000000, tol: 0.010000, use_larger_multiplier_for_smaller_tensor: 0
fail_accuracy

Versions

env:
python: 3.10
XPU_OPS: 9ed0a1a
TRITON_COMMIT_ID: e98b6fcb8df5b44eb0d0addb6767c573d37ba024
TORCH_COMMIT_ID: 4f8b7c4272db521f7ffc4070ce1bdece513d1183
TRANSFORMERS_VERSION: 243e186efbf7fb93328dd6b34927a4e8c8f24395
DRIVER_VERSION: 1.23.10.49.231129.50
KERNEL_VERSION: 5.15.0-73-generic #80-Ubuntu SMP Mon May 15 15:18:26 UTC 2023
BUNDLE_VERSION: 2025.0.1.20241113
OS_PRETTY_NAME: Ubuntu 22.04.2 LTS
GCC_VERSION: 11
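For context on the fail_accuracy result in the log above: the benchmark compares both the eager and the compiled (inductor) outputs against an fp64 reference and only passes if the compiled RMSE stays within a multiplier of the eager RMSE. The sketch below is a simplified approximation of that idea; the helper names are illustrative, and the real check in torch/_dynamo/utils.py has additional branches and per-dtype tolerances.

import torch

def rmse(ref: torch.Tensor, res: torch.Tensor) -> float:
    # Root-mean-square error against the fp64 reference.
    return torch.sqrt(torch.mean((ref.double() - res.double()) ** 2)).item()

def passes_accuracy(fp64_ref, eager_res, compiled_res, multiplier=3.0, tol=0.01):
    # Simplified form of the comparison that reports "fail_accuracy":
    # the compiled error may exceed the eager error only by `multiplier`,
    # plus a small absolute tolerance.
    res_error = rmse(fp64_ref, compiled_res)   # 0.53515 in the log above
    ref_error = rmse(fp64_ref, eager_res)      # 0.01636 in the log above
    return res_error <= multiplier * ref_error + tol

# With the logged values: 0.53515 > 3.0 * 0.01636 + 0.01 ≈ 0.059,
# so float32 training fails the accuracy check for this model.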
Caused by pytorch/pytorch@2980aed. BTW, upgrading transformers to the latest version fixes this.
Skip DebertaV2ForQuestionAnswering training accuracy check (#1239), commit f0819d2
Issue: #1216
Passed in https://github.com/intel/torch-xpu-ops/actions/runs/12600446562
Update weekly accuracy reference (#1223), commit ababdb4
Last reference update: 20240709. Related issues:
- [x] #1216
- [x] #1217
- [x] #1219
- [x] #1220
- [ ] #1221
- [x] #1222
- [ ] #1256
- [ ] #1260
- [ ] #1261
- [ ] #1262
- [ ] #1263
- [ ] #1264
- [ ] #1273
- [ ] #1274
- [ ] #1275
- [ ] #1276
- [ ] #1277
- [ ] #1278
- [ ] #508
- [ ] #509
- [ ] #510
e9b933e