Enable Address Sanitizer in CI #19073

Merged Jan 12, 2024 (62 commits).

Commits
23e3954 update (snnn, Dec 20, 2023)
3aa4f3f update (snnn, Jan 5, 2024)
8051b80 update (snnn, Jan 5, 2024)
b982118 Update win-gpu-reduce-op-ci-pipeline.yml for Azure Pipelines (snnn, Jan 5, 2024)
c3173a0 Update win-gpu-tensorrt-ci-pipeline.yml for Azure Pipelines (snnn, Jan 5, 2024)
410a7e8 update (snnn, Jan 5, 2024)
15b37d9 update (snnn, Jan 5, 2024)
6eb32b2 Update linux-ci-pipeline.yml for Azure Pipelines (snnn, Jan 5, 2024)
b92f782 update (snnn, Jan 5, 2024)
b745717 update (snnn, Jan 5, 2024)
5bb03f7 update (snnn, Jan 5, 2024)
3d099ee update (snnn, Jan 5, 2024)
4e09bf4 update (snnn, Jan 5, 2024)
473cb24 update (snnn, Jan 5, 2024)
2d97666 update (snnn, Jan 5, 2024)
096bae5 update (snnn, Jan 5, 2024)
d73b412 update (snnn, Jan 5, 2024)
18e20f5 update (snnn, Jan 5, 2024)
cc9e5f7 update (snnn, Jan 5, 2024)
66462ed update (snnn, Jan 5, 2024)
d625ea2 update (snnn, Jan 5, 2024)
776296c update (snnn, Jan 6, 2024)
e1f2331 update (snnn, Jan 8, 2024)
bb49d6d update (snnn, Jan 8, 2024)
8b8bb85 update (snnn, Jan 8, 2024)
3bc6154 update (snnn, Jan 9, 2024)
42ace4a Merge remote-tracking branch 'upstream/snnn/santi' into snnn/santi (snnn, Jan 9, 2024)
92f5bde update (snnn, Jan 9, 2024)
538b81e update (snnn, Jan 9, 2024)
de6b96d update (snnn, Jan 9, 2024)
d42959e update (snnn, Jan 9, 2024)
3c3a751 update (snnn, Jan 9, 2024)
a386834 update (snnn, Jan 9, 2024)
3b2a7c6 update (snnn, Jan 9, 2024)
ea7fb81 update (snnn, Jan 9, 2024)
e42ed89 update (snnn, Jan 9, 2024)
ad49d05 update (snnn, Jan 10, 2024)
9096b54 update (snnn, Jan 10, 2024)
dca4c62 update (snnn, Jan 10, 2024)
a72435a update (snnn, Jan 10, 2024)
4af254b Merge remote-tracking branch 'origin/main' into snnn/santi (snnn, Jan 10, 2024)
d3454c0 update (snnn, Jan 10, 2024)
775c0a3 update (snnn, Jan 10, 2024)
346afee upgrade xcode (snnn, Jan 10, 2024)
3124328 upgrade xcode (snnn, Jan 10, 2024)
9693860 update (snnn, Jan 10, 2024)
14b0dfc update (snnn, Jan 10, 2024)
73c3479 update (snnn, Jan 10, 2024)
fd35dca Merge remote-tracking branch 'origin/snnn/upgrade_xcode' into snnn/santi (snnn, Jan 10, 2024)
6b6bea8 Merge remote-tracking branch 'origin/main' into snnn/santi (snnn, Jan 10, 2024)
e3d4fa1 format code (snnn, Jan 10, 2024)
5d4b3da update (snnn, Jan 11, 2024)
149fad6 update (snnn, Jan 11, 2024)
1e784bf Merge branch 'snnn/santi' of https://github.com/Microsoft/onnxruntime… (snnn, Jan 11, 2024)
da14f26 Merge remote-tracking branch 'origin/main' into snnn/santi (snnn, Jan 11, 2024)
261d488 update (snnn, Jan 11, 2024)
ff3257c update nuget package pipeline (snnn, Jan 11, 2024)
3add433 update (snnn, Jan 11, 2024)
5b1a2e3 update (snnn, Jan 11, 2024)
91908a1 update (snnn, Jan 11, 2024)
a0cbb21 update (snnn, Jan 11, 2024)
dbefbbe set xcode version (snnn, Jan 11, 2024)
2 changes: 1 addition & 1 deletion .pipelines/windowsai-steps.yml
@@ -84,7 +84,7 @@ jobs:
7z x cmake-3.26.3-windows-x86_64.zip
set PYTHONHOME=$(Build.BinariesDirectory)\${{ parameters.PythonPackageName }}.3.9.7\tools
set PYTHONPATH=$(Build.BinariesDirectory)\${{ parameters.PythonPackageName }}.3.9.7\tools
$(Build.BinariesDirectory)\${{ parameters.PythonPackageName }}.3.9.7\tools\python.exe "$(Build.SourcesDirectory)\tools\ci_build\build.py" --build_dir $(Build.BinariesDirectory) --build_shared_lib --enable_onnx_tests --ms_experimental --use_dml --use_winml --cmake_generator "Visual Studio 17 2022" --update --config RelWithDebInfo --enable_lto --use_telemetry --disable_rtti --enable_wcos $(BuildFlags) --cmake_extra_defines "CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO=/PROFILE" "CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO=/PROFILE" CMAKE_SYSTEM_VERSION=10.0.19041.0 --cmake_path $(Build.BinariesDirectory)\cmake-3.26.3-windows-x86_64\bin\cmake.exe --ctest_path $(Build.BinariesDirectory)\cmake-3.26.3-windows-x86_64\bin\ctest.exe
$(Build.BinariesDirectory)\${{ parameters.PythonPackageName }}.3.9.7\tools\python.exe "$(Build.SourcesDirectory)\tools\ci_build\build.py" --build_dir $(Build.BinariesDirectory) --build_shared_lib --enable_onnx_tests --ms_experimental --use_dml --use_winml --cmake_generator "Visual Studio 17 2022" --update --config RelWithDebInfo --enable_qspectre --enable_lto --use_telemetry --disable_rtti --enable_wcos $(BuildFlags) --cmake_extra_defines "CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO=/PROFILE" "CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO=/PROFILE" CMAKE_SYSTEM_VERSION=10.0.19041.0 --cmake_path $(Build.BinariesDirectory)\cmake-3.26.3-windows-x86_64\bin\cmake.exe --ctest_path $(Build.BinariesDirectory)\cmake-3.26.3-windows-x86_64\bin\ctest.exe
workingDirectory: '$(Build.BinariesDirectory)'
displayName: 'Generate cmake config'

42 changes: 2 additions & 40 deletions cmake/adjust_global_compile_flags.cmake
@@ -74,11 +74,6 @@ if (onnxruntime_MINIMAL_BUILD)
endif()

if (MSVC)
# turn on LTO (which adds some compiler flags and turns on LTCG) unless it's a Debug build to minimize binary size
if (NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
set(onnxruntime_ENABLE_LTO ON)
endif()

# undocumented internal flag to allow analysis of a minimal build binary size
if (ADD_DEBUG_INFO_TO_MINIMAL_BUILD)
string(APPEND CMAKE_CXX_FLAGS " /Zi")
@@ -267,37 +262,11 @@ if (MSVC)
string(APPEND CMAKE_C_FLAGS " /arch:AVX512")
endif()

if (NOT GDK_PLATFORM)
add_compile_definitions(WINAPI_FAMILY=100) # Desktop app
message("Building ONNX Runtime for Windows 10 and newer")
add_compile_definitions(WINVER=0x0A00 _WIN32_WINNT=0x0A00 NTDDI_VERSION=0x0A000000)
endif()
if (onnxruntime_ENABLE_LTO AND NOT onnxruntime_USE_CUDA)
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /Gw /GL")
set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} /Gw /GL")
set(CMAKE_CXX_FLAGS_MINSIZEREL "${CMAKE_CXX_FLAGS_MINSIZEREL} /Gw /GL")
endif()

# The WinML build tool chain builds ARM/ARM64, and the internal tool chain does not have folders for spectre mitigation libs.
# WinML performs spectre mitigation differently.
if (NOT DEFINED onnxruntime_DISABLE_QSPECTRE_CHECK)
check_cxx_compiler_flag(-Qspectre HAS_QSPECTRE)
if (HAS_QSPECTRE)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /Qspectre")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Qspectre")
endif()
endif()
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /DYNAMICBASE")
check_cxx_compiler_flag(-guard:cf HAS_GUARD_CF)
if (HAS_GUARD_CF)
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /guard:cf")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /guard:cf")
set(CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO} /guard:cf")
set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} /guard:cf")
set(CMAKE_C_FLAGS_MINSIZEREL "${CMAKE_C_FLAGS_MINSIZEREL} /guard:cf")
set(CMAKE_CXX_FLAGS_MINSIZEREL "${CMAKE_CXX_FLAGS_MINSIZEREL} /guard:cf")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /guard:cf")
endif()
else()
if (NOT APPLE)
#XXX: Sometimes the value of CMAKE_SYSTEM_PROCESSOR is set but it's wrong. For example, if you run an armv7 docker
@@ -378,16 +347,9 @@ else()

endif()

if (${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
#For Mac compliance
message("Adding flags for Mac builds")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fstack-protector-strong")
elseif (WIN32)
# parallel build
# These compiler options cannot be forwarded to NVCC, so cannot use add_compile_options
string(APPEND CMAKE_CXX_FLAGS " /MP")
if (WIN32)
# required to be set explicitly to enable Eigen-Unsupported SpecialFunctions
string(APPEND CMAKE_CXX_FLAGS " -DEIGEN_HAS_C99_MATH")
else()
elseif(LINUX)
add_compile_definitions("_GNU_SOURCE")
endif()
4 changes: 4 additions & 0 deletions onnxruntime/test/framework/bfc_arena_test.cc
@@ -1,6 +1,7 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

#include <absl/base/config.h>
#include "core/framework/bfc_arena.h"
#include "core/framework/allocator_utils.h"
#include "gtest/gtest.h"
@@ -164,6 +165,8 @@ void TestCustomMemoryLimit_ProcessException(const OnnxRuntimeException& ex) {
#endif // #ifdef GTEST_USES_POSIX_RE
}

// Address Sanitizer would report allocation-size-too-big if we don't disable this test.
#ifndef ABSL_HAVE_ADDRESS_SANITIZER
TEST(BFCArenaTest, TestCustomMemoryLimit) {
{
// Configure a 1 MiB limit
@@ -214,6 +217,7 @@ TEST(BFCArenaTest, TestCustomMemoryLimit) {
b.Free(first_ptr);
}
}
#endif

TEST(BFCArenaTest, AllocationsAndDeallocationsWithGrowth) {
// Max of 2GiB, but starts out small.
3 changes: 3 additions & 0 deletions onnxruntime/test/framework/tunable_op_test.cc
@@ -459,6 +459,8 @@ class TunableVecAddSelectFastestIfSupported : public TunableOp<VecAddParamsRecor
}
};

// We run Android tests in a simulator, so the timing result might differ; skip this test there.
#if !defined(__ANDROID__) && defined(NDEBUG)
TEST(TunableOp, SelectFastestIfSupported) {
#ifdef ORT_NO_RTTI
GTEST_SKIP() << "TunableOp needs RTTI to work correctly";
@@ -483,6 +485,7 @@ TEST(TunableOp, SelectFastestIfSupported) {
ASSERT_EQ(last_run, "FastestNarrow");
#endif
}
#endif

TEST(TunableOp, DisabledWithManualSelection) {
#ifdef ORT_NO_RTTI
12 changes: 11 additions & 1 deletion onnxruntime/test/logging_apis/test_logging_apis.cc
@@ -12,7 +12,7 @@
#pragma GCC diagnostic pop
#endif
#endif

#include <absl/base/config.h>
#include "gtest/gtest.h"

// Manually initialize the Ort API object for every test.
@@ -167,7 +167,13 @@ TEST_F(RealCAPITestsFixture, CApiLoggerLogMessage) {
ORT_FILE, line_num, static_cast<const char*>(__FUNCTION__)));
}

// The code below that tests for a formatting error performs an out-of-bounds memory access, so we disable it
// when Address Sanitizer is enabled.
#ifdef ABSL_HAVE_ADDRESS_SANITIZER
TEST_F(RealCAPITestsFixture, DISABLED_CppApiORTCXXLOG) {
#else
TEST_F(RealCAPITestsFixture, CppApiORTCXXLOG) {
#endif
// Tests the output and filtering of the ORT_CXX_LOG and ORT_CXX_LOG_NOEXCEPT macros in the C++ API.
// The first two calls go through, but the last two calls are filtered out due to an insufficient severity.

@@ -203,7 +209,11 @@
ORT_CXX_LOG_NOEXCEPT(cpp_ort_logger, OrtLoggingLevel::ORT_LOGGING_LEVEL_INFO, "Ignored2");
}

#ifdef ABSL_HAVE_ADDRESS_SANITIZER
TEST_F(RealCAPITestsFixture, DISABLED_CppApiORTCXXLOGF) {
#else
TEST_F(RealCAPITestsFixture, CppApiORTCXXLOGF) {
#endif
// Tests the output and filtering of the ORT_CXX_LOGF and ORT_CXX_LOGF_NOEXCEPT macros in the C++ API.
// The first set of calls go through. The next set of calls are filtered out due to an insufficient severity.
// The last calls have a formatting error and we expect an exception depending on which macro is used.
7 changes: 6 additions & 1 deletion onnxruntime/test/shared_lib/test_inference.cc
@@ -11,6 +11,7 @@
#include <algorithm>
#include <thread>

#include <absl/base/config.h>

[cpplint build/include_order] test_inference.cc:14: Found C system header after C++ system header. Should be: test_inference.h, c system, c++ system, other.
#include "gtest/gtest.h"
#include "gmock/gmock.h"

@@ -402,6 +403,8 @@
#endif // DISABLE_CONTRIB_OPS
#endif // !defined(DISABLE_SPARSE_TENSORS)

// This test leaks memory, so it is disabled under Address Sanitizer.
#ifndef ABSL_HAVE_ADDRESS_SANITIZER
TEST(CApiTest, custom_op_handler) {
std::cout << "Running custom op inference" << std::endl;

@@ -435,6 +438,7 @@
custom_op_domain, nullptr);
#endif
}
#endif

#ifdef USE_CUDA
TEST(CApiTest, custom_op_set_input_memory_type) {
@@ -1452,9 +1456,10 @@
#endif
}

#if defined(__ANDROID__)
// Has memory leak
#if defined(__ANDROID__) || defined(ABSL_HAVE_ADDRESS_SANITIZER)
TEST(CApiTest, DISABLED_test_custom_op_shape_infer_attr) {
// To accommodate a reduced op build pipeline
#elif defined(REDUCED_OPS_BUILD) && defined(USE_CUDA)
TEST(CApiTest, DISABLED_test_custom_op_shape_infer_attr) {
#else
15 changes: 8 additions & 7 deletions onnxruntime/test/shared_lib/test_ort_format_models.cc
@@ -3,7 +3,7 @@

// custom ops are only supported in a minimal build if explicitly enabled
#if !defined(ORT_MINIMAL_BUILD) || defined(ORT_MINIMAL_BUILD_CUSTOM_OPS)

#include <absl/base/config.h>
#include "core/common/common.h"
#include "core/graph/constants.h"
#include "core/session/onnxruntime_cxx_api.h"
@@ -16,17 +16,18 @@

extern std::unique_ptr<Ort::Env> ort_env;

static void TestInference(Ort::Env& env, const std::basic_string<ORTCHAR_T>& model_uri,
const std::vector<Input>& inputs, const char* output_name,
const std::vector<int64_t>& expected_dims_y, const std::vector<float>& expected_values_y,
Ort::CustomOpDomain& custom_op_domain, void* cuda_compute_stream = nullptr) {
[[maybe_unused]] static void TestInference(Ort::Env& env, const std::basic_string<ORTCHAR_T>& model_uri,
const std::vector<Input>& inputs, const char* output_name,
const std::vector<int64_t>& expected_dims_y, const std::vector<float>& expected_values_y,

[cpplint whitespace/line_length] test_ort_format_models.cc:21: Lines should be <= 120 characters long.
Ort::CustomOpDomain& custom_op_domain, void* cuda_compute_stream = nullptr) {
Ort::SessionOptions session_options;
session_options.Add(custom_op_domain);

#ifdef USE_CUDA
auto cuda_options = CreateDefaultOrtCudaProviderOptionsWithCustomStream(cuda_compute_stream);
session_options.AppendExecutionProvider_CUDA(cuda_options);
#else
session_options.DisableCpuMemArena();
ORT_UNUSED_PARAMETER(cuda_compute_stream);
#endif
Ort::Session session(env, model_uri.c_str(), session_options);
@@ -65,7 +66,7 @@
}
}

#if !defined(ORT_MINIMAL_BUILD)
#if !defined(ORT_MINIMAL_BUILD) && !defined(ABSL_HAVE_ADDRESS_SANITIZER)
TEST(OrtFormatCustomOpTests, ConvertOnnxModelToOrt) {
const std::basic_string<ORTCHAR_T> onnx_file = ORT_TSTR("testdata/foo_1.onnx");
const std::basic_string<ORTCHAR_T> ort_file = ORT_TSTR("testdata/foo_1.onnx.test_output.ort");
@@ -120,7 +121,7 @@

// the saved ORT format model has the CPU EP assigned to the custom op node, so we only test if we're not using the
// CUDA EP for the test.
#ifndef USE_CUDA
#if !defined(USE_CUDA) && !defined(ABSL_HAVE_ADDRESS_SANITIZER)
TEST(OrtFormatCustomOpTests, LoadOrtModel) {
const std::basic_string<ORTCHAR_T> ort_file = ORT_TSTR("testdata/foo_1.onnx.ort");
