Add microbenchmark for layer normalization and improve latency #22223
Azure Pipelines / ONNX Runtime Web CI Pipeline
succeeded
Oct 14, 2024 in 37m 45s
Build #20241014.16 had test failures
- Failed: 1 (0.01%)
- Passed: 8,407 (99.77%)
- Other: 18 (0.21%)
- Total: 8,426
Annotations
Check failure on line 1 in LayerNorm_Scale_Bias_Float16InputScaleBiasOutput
azure-pipelines / ONNX Runtime Web CI Pipeline
LayerNorm_Scale_Bias_Float16InputScaleBiasOutput
/mnt/vss/_work/1/s/onnxruntime/test/providers/checkers.cc:437
The difference between f_expected[i] and f_actual[i] is inf, which exceeds tolerance, where
f_expected[i] evaluates to -0.051788330078125,
f_actual[i] evaluates to inf, and
tolerance evaluates to 0.0025517882313579321.
i:2
Google Test trace:
/mnt/vss/_work/1/s/onnxruntime/test/providers/checkers.cc:568: provider type: CPUExecutionProvider
/mnt/vss/_work/1/s/onnxruntime/test/providers/base_tester.cc:830: registered execution providers: CPUExecutionProvider