
Fix Pad's quantization #17807

Merged
merged 1 commit on Oct 9, 2023

Fix Pad's quantization and add a test

83a55d4
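The PR itself gives few details, but the core pitfall when quantizing a Pad node is easy to illustrate: Pad's float `constant_value` must be mapped into the quantized domain using the *input tensor's* scale and zero-point, so that the padded region dequantizes to the intended float value. A minimal NumPy sketch of that idea (the helper names and values here are illustrative, not ONNX Runtime's actual implementation):

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Affine quantization to uint8: q = clip(round(x / scale) + zp, 0, 255)
    return np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    # Inverse mapping back to float: x = (q - zp) * scale
    return (q.astype(np.float32) - zero_point) * scale

scale, zp = 0.1, 128
x = np.array([1.0, 2.0], dtype=np.float32)
qx = quantize(x, scale, zp)

# The float pad value must be quantized with the input's scale/zero-point
# before padding the integer tensor; padding with the raw float (or with 0)
# would dequantize to the wrong value.
pad_value = 0.5
q_pad = quantize(np.array([pad_value]), scale, zp)[0]
q_out = np.pad(qx, (1, 1), constant_values=q_pad)

# Dequantizing the padded tensor recovers the padded float tensor.
assert np.allclose(dequantize(q_out, scale, zp), [0.5, 1.0, 2.0, 0.5])
```

A quantizer that forgets this step emits a Pad whose border region decodes to `(constant_value - zp * scale)` instead of `constant_value`, which is exactly the kind of mismatch a regression test like the one added here would catch.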
Azure Pipelines / Linux GPU CI Pipeline succeeded Oct 6, 2023 in 1h 13m 7s

Build #20231006.18 had test failures


Tests

  • Failed: 2 (0.01%)
  • Passed: 18,926 (99.80%)
  • Other: 36 (0.19%)
  • Total: 18,964

Annotations

Check failure in Run/cuda__models_zoo_opset7_ResNet101_DUC_HDC_ResNet101DUC7 (azure-pipelines / Linux GPU CI Pipeline)

/onnxruntime_src/onnxruntime/test/providers/cpu/model_tests.cc:800
Failed
Non-zero status code returned while running Conv node. Name:'conv2_3_1x1_increase' Status Message: /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:114 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDA failure 2: out of memory ; GPU=0 ; hostname=2021fa00773e ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_allocator.cc ; line=47 ; expr=cudaMalloc((void**)&p, size); 


Check failure in Run/cuda__models_opset7_keras_lotus_resnet3D_keras_lotus_resnet3D (azure-pipelines / Linux GPU CI Pipeline)

/onnxruntime_src/onnxruntime/test/providers/cpu/model_tests.cc:800
Failed
Non-zero status code returned while running FusedConv node. Name:'Convolution14479' Status Message: /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:114 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDA failure 2: out of memory ; GPU=0 ; hostname=2021fa00773e ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_allocator.cc ; line=47 ; expr=cudaMalloc((void**)&p, size); 

