[WIP] EP interface #18090

Draft · wants to merge 36 commits into main

Changes from 13 commits
cdb9676
draft interface hierarchy
RandyShuai Oct 25, 2023
2ddea77
commence interface namespace
RandyShuai Oct 25, 2023
5fffc82
add common folder
RandyShuai Oct 25, 2023
ff689b1
port code from PR 16718
jslhcl Oct 25, 2023
6c92fc7
Merge https://github.com/microsoft/onnxruntime into rashuai/EpInterface
jslhcl Oct 25, 2023
4794e3d
Merge branch 'rashuai/EpInterface' of https://github.com/microsoft/on…
jslhcl Oct 25, 2023
0d4d3d3
test_intreeEp.py works
jslhcl Oct 25, 2023
858f259
windows fixes
RandyShuai Oct 26, 2023
dbb9871
tune folder name
RandyShuai Oct 26, 2023
a732a85
tune kernel hierarchy
RandyShuai Oct 26, 2023
251a58b
define ITensor
RandyShuai Oct 27, 2023
e247fd8
add builder interface
RandyShuai Oct 27, 2023
021405f
postpone kernel creation
RandyShuai Oct 27, 2023
e433f5d
fix to make customEp2 running
RandyShuai Oct 30, 2023
4819293
add testing model
RandyShuai Oct 30, 2023
a2c247a
customize GetCapability() and Compile() in custom_ep2 and delete ort_…
jslhcl Nov 2, 2023
9b0f3b1
test for GetCapability and Compile for custom_ep2
jslhcl Nov 2, 2023
95e1fe6
Subgraph API support
jslhcl Nov 6, 2023
74c457d
add both in-tree and out-tree UT
RandyShuai Nov 6, 2023
4168d03
Merge branch 'rashuai/EpInterface' of https://github.com/microsoft/on…
RandyShuai Nov 6, 2023
3586734
drop dep on ort lib for custom ep
RandyShuai Nov 9, 2023
e440c88
use new graph API in openvino
jslhcl Nov 15, 2023
60161f2
Merge branch 'rashuai/EpInterface' of https://github.com/microsoft/on…
jslhcl Nov 15, 2023
572d5f3
align build args between ort and out-tree custom ep dll
RandyShuai Nov 16, 2023
8d0c804
Merge branch 'rashuai/EpInterface' of https://github.com/microsoft/on…
RandyShuai Nov 16, 2023
48dcc85
fix wide char path for win
RandyShuai Nov 17, 2023
7defe5d
add tensorshape interface
RandyShuai Nov 17, 2023
c04c1f2
align tensor hierarchy
RandyShuai Nov 20, 2023
839103a
support tensor sequence
RandyShuai Nov 23, 2023
1b1c069
move SubGraphDef into namespace interface and create OpenVINO EP from…
jslhcl Nov 28, 2023
5396642
move openvino out-tree EP
jslhcl Dec 9, 2023
539d679
update eigen checksum
RandyShuai Dec 12, 2023
5d450cb
openvino can be loaded and run as out-tree ep
jslhcl Dec 16, 2023
f8d6a70
Merge branch 'rashuai/EpInterface' of https://github.com/microsoft/on…
jslhcl Dec 16, 2023
970a3f2
openVINO as out-tree EP works on Detection model
jslhcl Dec 19, 2023
33a60f5
new member functions for EPv2
jslhcl Dec 28, 2023
6 changes: 6 additions & 0 deletions cmake/CMakeLists.txt
Original file line number Diff line number Diff line change
@@ -142,6 +142,7 @@ option(onnxruntime_TVM_CUDA_RUNTIME "Build TVM with CUDA support" OFF)
option(onnxruntime_TVM_USE_LLVM "Build TVM with LLVM. Set customized path to llvm-config.exe here if need" OFF)
option(onnxruntime_TVM_USE_HASH "Build ipp-crypto library for support hash algorithm. It is defined for TVM only")
option(onnxruntime_USE_XNNPACK "Build with XNNPACK support. Provides an alternative math library on ARM, WebAssembly and x86." OFF)
option(onnxruntime_USE_INTREE "InTree EP support" OFF)
option(onnxruntime_USE_WEBNN "Build with WebNN support. Enable hardware acceleration in web browsers." OFF)

# Options related to reducing the binary size produced by the build
@@ -800,6 +801,11 @@ if (onnxruntime_USE_XNNPACK)
list(APPEND ORT_PROVIDER_CMAKE_FLAGS -Donnxruntime_USE_XNNPACK=1)
list(APPEND ONNXRUNTIME_PROVIDER_NAMES xnnpack)
endif()
if (onnxruntime_USE_INTREE)
list(APPEND ORT_PROVIDER_FLAGS -DUSE_INTREE=1)
list(APPEND ORT_PROVIDER_CMAKE_FLAGS -Donnxruntime_USE_INTREE=1)
list(APPEND ONNXRUNTIME_PROVIDER_NAMES intree)
endif()
if (onnxruntime_USE_WEBNN)
list(APPEND ORT_PROVIDER_FLAGS -DUSE_WEBNN=1)
list(APPEND ORT_PROVIDER_CMAKE_FLAGS -Donnxruntime_USE_WEBNN=1)
7 changes: 7 additions & 0 deletions cmake/onnxruntime_providers.cmake
@@ -108,6 +108,9 @@ endif()
if (onnxruntime_USE_XNNPACK)
set(PROVIDERS_XNNPACK onnxruntime_providers_xnnpack)
endif()
if (onnxruntime_USE_INTREE)
set(PROVIDERS_INTREE onnxruntime_providers_intree)
endif()
if(onnxruntime_USE_WEBNN)
set(PROVIDERS_WEBNN onnxruntime_providers_webnn)
endif()
@@ -196,6 +199,10 @@ if (onnxruntime_USE_XNNPACK)
include(onnxruntime_providers_xnnpack.cmake)
endif()

if (onnxruntime_USE_INTREE)
include(onnxruntime_providers_intree.cmake)
endif()

if (onnxruntime_USE_CANN)
include(onnxruntime_providers_cann.cmake)
endif()
30 changes: 30 additions & 0 deletions cmake/onnxruntime_providers_intree.cmake
@@ -0,0 +1,30 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.

add_compile_definitions(USE_INTREE=1)

file(GLOB_RECURSE onnxruntime_providers_intree_cc_srcs CONFIGURE_DEPENDS
"${ONNXRUNTIME_INCLUDE_DIR}/core/providers/intree/*.h"
"${ONNXRUNTIME_ROOT}/core/providers/intree/*.h"
"${ONNXRUNTIME_ROOT}/core/providers/intree/*.cc"
)

source_group(TREE ${REPO_ROOT} FILES ${onnxruntime_providers_intree_cc_srcs})
onnxruntime_add_static_library(onnxruntime_providers_intree ${onnxruntime_providers_intree_cc_srcs})
onnxruntime_add_include_to_target(onnxruntime_providers_intree
onnxruntime_common onnxruntime_framework onnx pthreadpool Boost::mp11 safeint_interface
)

add_dependencies(onnxruntime_providers_intree onnx ${onnxruntime_EXTERNAL_DEPENDENCIES})
set_target_properties(onnxruntime_providers_intree PROPERTIES FOLDER "ONNXRuntime")

set_target_properties(onnxruntime_providers_intree PROPERTIES LINKER_LANGUAGE CXX)
#target_include_directories(onnxruntime_providers_intree PUBLIC "/bert_ort/leca/code/onnxruntime2/include/onnxruntime")

if (NOT onnxruntime_BUILD_SHARED_LIB)
install(TARGETS onnxruntime_providers_intree
ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}
FRAMEWORK DESTINATION ${CMAKE_INSTALL_BINDIR})
endif()
7 changes: 4 additions & 3 deletions cmake/onnxruntime_python.cmake
@@ -71,9 +71,9 @@ onnxruntime_add_shared_library_module(onnxruntime_pybind11_state ${onnxruntime_p

if(MSVC)
target_compile_options(onnxruntime_pybind11_state PRIVATE "$<$<COMPILE_LANGUAGE:CUDA>:SHELL:--compiler-options /utf-8>" "$<$<NOT:$<COMPILE_LANGUAGE:CUDA>>:/utf-8>")
if(onnxruntime_ENABLE_TRAINING)
target_compile_options(onnxruntime_pybind11_state PRIVATE "/bigobj")
endif()
#if(onnxruntime_ENABLE_TRAINING)
target_compile_options(onnxruntime_pybind11_state PRIVATE "/bigobj")
#endif()
endif()
if(HAS_CAST_FUNCTION_TYPE)
target_compile_options(onnxruntime_pybind11_state PRIVATE "-Wno-cast-function-type")
@@ -179,6 +179,7 @@ target_link_libraries(onnxruntime_pybind11_state PRIVATE
${PROVIDERS_ACL}
${PROVIDERS_ARMNN}
${PROVIDERS_XNNPACK}
${PROVIDERS_INTREE}
${PROVIDERS_AZURE}
${PROVIDERS_QNN}
onnxruntime_optimizer
46 changes: 45 additions & 1 deletion include/onnxruntime/core/framework/allocator.h
@@ -4,11 +4,12 @@
#pragma once

#include "core/common/common.h"
#include "core/framework/allocator_stats.h"
#include "core/session/onnxruntime_c_api.h"
#include "ortdevice.h"
#include "ortmemoryinfo.h"
#include <map>
#include <string>

[cpplint warning, allocator.h:11] Found C++ system header after other header. Should be: allocator.h, c system, c++ system, other. [build/include_order]
#include <sstream>

[cpplint warning, allocator.h:12] Found C++ system header after other header. Should be: allocator.h, c system, c++ system, other. [build/include_order]

// This configures the arena based allocator used by ORT
// See docs/C_API.md for details on what these mean and how to choose these values
@@ -38,6 +39,49 @@
};

namespace onnxruntime {
// Runtime statistics collected by an allocator.
struct AllocatorStats {
int64_t num_allocs; // Number of allocations.
int64_t num_reserves; // Number of reserves. (Number of calls to Reserve() in arena-based allocators)
int64_t num_arena_extensions; // Number of arena extensions (Relevant only for arena based allocators)
int64_t num_arena_shrinkages; // Number of arena shrinkages (Relevant only for arena based allocators)
int64_t bytes_in_use; // Number of bytes in use.
int64_t total_allocated_bytes; // The total number of allocated bytes by the allocator.
int64_t max_bytes_in_use; // The maximum bytes in use.
int64_t max_alloc_size; // The max single allocation seen.
// The upper limit what the allocator can allocate, if such a limit
// is known. Certain allocator may return 0 to indicate the limit is
// unknown.
int64_t bytes_limit;

AllocatorStats() { Clear(); }

void Clear() {
this->num_allocs = 0;
this->num_reserves = 0;
this->num_arena_extensions = 0;
this->num_arena_shrinkages = 0;
this->bytes_in_use = 0;
this->max_bytes_in_use = 0;
this->max_alloc_size = 0;
this->bytes_limit = 0;
this->total_allocated_bytes = 0;
}

std::string DebugString() const {
std::ostringstream ss;
ss << "Limit: " << this->bytes_limit << "\n"
<< "InUse: " << this->bytes_in_use << "\n"
<< "TotalAllocated: " << this->total_allocated_bytes << "\n"
<< "MaxInUse: " << this->max_bytes_in_use << "\n"
<< "NumAllocs: " << this->num_allocs << "\n"
<< "NumReserves: " << this->num_reserves << "\n"
<< "NumArenaExtensions: " << this->num_arena_extensions << "\n"
<< "NumArenaShrinkages: " << this->num_arena_shrinkages << "\n"
<< "MaxAllocSize: " << this->max_alloc_size << "\n";
return ss.str();
}
};
constexpr const char* CPU = "Cpu";
constexpr const char* CUDA = "Cuda";
constexpr const char* CUDA_PINNED = "CudaPinned";
18 changes: 1 addition & 17 deletions include/onnxruntime/core/framework/execution_provider.h
@@ -27,30 +27,14 @@ class Node;
#include "core/common/basic_types.h"
#include "core/common/profiler_common.h"
#include "core/framework/allocator_utils.h"
#include "core/framework/func_api.h"
#include "core/framework/node_compute_info.h"
#include "core/framework/provider_options.h"
#include "core/framework/framework_provider_common.h"
#include "core/framework/stream_handles.h"
#include "core/framework/tuning_context.h"

namespace onnxruntime {

/**
Logical device representation.
*/

// if we are export the fused function to dll, the function will still in the same binary as onnxruntime
// use std function to give execution provider some chance to capture some state.
using CreateFunctionStateFunc = std::function<int(ComputeContext*, FunctionState*)>;
using ComputeFunc = std::function<Status(FunctionState, const OrtApi*, OrtKernelContext*)>;
using DestroyFunctionStateFunc = std::function<void(FunctionState)>;

struct NodeComputeInfo {
CreateFunctionStateFunc create_state_func;
ComputeFunc compute_func;
DestroyFunctionStateFunc release_state_func;
};

enum class DataLayout {
NCHW,
NHWC,
4 changes: 4 additions & 0 deletions include/onnxruntime/core/framework/kernel_def_builder.h
@@ -56,6 +56,10 @@ class KernelDef {
return type_constraints_;
}

std::unordered_map<std::string, std::vector<MLDataType>>& TypeConstraints() {
return type_constraints_;
}

const std::vector<std::pair<int, int>>& MayInplace() const {
return inplace_map_;
}
4 changes: 4 additions & 0 deletions include/onnxruntime/core/framework/kernel_registry.h
@@ -78,6 +78,10 @@ class KernelRegistry {
return kernel_creator_fn_map_;
}

KernelCreateMap& GetKernelCreateMap() {
return kernel_creator_fn_map_;
}

private:
// TryFindKernel implementation. Either kernel_type_str_resolver or type_constraints is provided.
Status TryFindKernelImpl(const Node& node, ProviderType exec_provider,
17 changes: 17 additions & 0 deletions include/onnxruntime/core/framework/node_compute_info.h
@@ -0,0 +1,17 @@
#pragma once

[lintrunner warning] CLANGFORMAT/format: run `lintrunner -a` to apply the suggested formatting (see https://clang.llvm.org/docs/ClangFormat.html).

#include "core/framework/func_api.h"

namespace onnxruntime {
// if we are export the fused function to dll, the function will still in the same binary as onnxruntime
// use std function to give execution provider some chance to capture some state.
using CreateFunctionStateFunc = std::function<int(ComputeContext*, FunctionState*)>;
using ComputeFunc = std::function<Status(FunctionState, const OrtApi*, OrtKernelContext*)>;
using DestroyFunctionStateFunc = std::function<void(FunctionState)>;

struct NodeComputeInfo {
CreateFunctionStateFunc create_state_func;
ComputeFunc compute_func;
DestroyFunctionStateFunc release_state_func;
};
}

[cpplint warning, node_compute_info.h:17] Namespace should be terminated with "// namespace onnxruntime" [readability/namespace]
6 changes: 5 additions & 1 deletion include/onnxruntime/core/framework/op_kernel.h
@@ -26,11 +26,15 @@
#include "core/graph/graph_viewer.h"
#include "core/graph/onnx_protobuf.h"
#include "core/common/gsl.h"

namespace onnxruntime {
class OpKernelContext;
}
#endif

#include "interface/framework/kernel.h"
Review comment (Contributor): No need to include kernel.h here? I don't see any class declared in kernel.h in the changes of this file

Review comment (Member): we probably shouldn't include interface/kernel here, as this file is for the internal kernel impl

namespace onnxruntime {

std::unique_ptr<OpKernelInfo> CopyOpKernelInfo(const OpKernelInfo& info);
@@ -39,7 +43,7 @@ class OpKernel {
public:
using DoneCallback = std::function<void()>;

explicit OpKernel(const OpKernelInfo& info) : op_kernel_info_(CopyOpKernelInfo(info)) {}
explicit OpKernel(const OpKernelInfo& info): op_kernel_info_(CopyOpKernelInfo(info)) {}
virtual ~OpKernel() = default;

const onnxruntime::Node& Node() const;
26 changes: 25 additions & 1 deletion include/onnxruntime/core/framework/op_kernel_context.h
@@ -1,14 +1,16 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

#include "interface/framework/kernel.h"

namespace onnxruntime {
class IExecutionFrame;
class Stream;
namespace concurrency {
class ThreadPool;
}

class OpKernelContext {
class OpKernelContext : public interface::IKernelContext {
public:
using ArgMap = std::unordered_map<std::string, size_t>;

@@ -43,7 +45,15 @@
}
}

const void* InputData(int index) const override {
//todo - check tensor type

[cpplint warning, op_kernel_context.h:49] Should have a space between // and comment [whitespace/comments]
const auto* tensor = Input<onnxruntime::Tensor>(index);
return tensor->DataRaw();
}

// Fetch a required input, enforcing that it is present.
template <typename T>
const T& RequiredInput(int index) const {
const T* input_ptr = Input<T>(index);
@@ -68,6 +78,20 @@
Tensor* Output(int index, const std::vector<int64_t>& shape);
Tensor* Output(int index, const std::initializer_list<int64_t>& shape);

void* AllocateOutput(int index, const interface::TensorShape& shape) override {
auto* tensor = Output(index, shape);
ORT_ENFORCE(tensor);
return tensor->MutableDataRaw();
}

const int64_t* InputShape(int index, size_t* num_dims) const override {
const auto* tensor = Input<onnxruntime::Tensor>(index);
const auto& shape = tensor->Shape();
auto dims = shape.GetDims();
*num_dims = dims.size();
return dims.data();
};

// Fetch a required tensor output, enforcing that it is present.
Tensor& RequiredOutput(int index, const TensorShape& shape) {
Tensor* output_ptr = Output(index, shape);
3 changes: 2 additions & 1 deletion include/onnxruntime/core/framework/op_kernel_info.h
@@ -9,6 +9,7 @@
#include "core/framework/op_node_proto_helper.h"
#include "core/graph/graph_viewer.h"
#include "core/common/gsl.h"
#include "interface/framework/kernel.h"
Review comment (Contributor): no need to include kernel.h here?

namespace onnxruntime {

@@ -20,7 +21,7 @@ struct AllocPlanPerValue;
// A very light-weight class, which works as an aggregated
// view of all data needed for constructing a Kernel instance.
// NOTE: it does not own/hold any objects.
class OpKernelInfo : public OpNodeProtoHelper<ProtoHelperNodeContext> {
class OpKernelInfo : public OpNodeProtoHelper<ProtoHelperNodeContext>, public interface::IKernelInfo {
public:
explicit OpKernelInfo(const onnxruntime::Node& node,
const KernelDef& kernel_def,
1 change: 1 addition & 0 deletions include/onnxruntime/core/graph/constants.h
@@ -51,6 +51,7 @@ constexpr const char* kXnnpackExecutionProvider = "XnnpackExecutionProvider";
constexpr const char* kWebNNExecutionProvider = "WebNNExecutionProvider";
constexpr const char* kCannExecutionProvider = "CANNExecutionProvider";
constexpr const char* kAzureExecutionProvider = "AzureExecutionProvider";
constexpr const char* kInTreeExecutionProvider = "InTreeExecutionProvider";

constexpr const char* kExecutionProviderSharedLibraryPath = "shared_lib_path";
constexpr const char* kExecutionProviderSharedLibraryEntry = "provider_factory_entry_point";
25 changes: 25 additions & 0 deletions include/onnxruntime/core/session/environment.h
@@ -10,9 +10,29 @@
#include "core/platform/threadpool.h"
#include "core/common/logging/logging.h"
#include "core/framework/allocator.h"
#include "interface/provider/provider.h"

struct OrtThreadingOptions;
namespace onnxruntime {
struct ProviderLibrary2 {
ProviderLibrary2(const ORTCHAR_T* library_path) : library_path_{library_path} {}

[cpplint warning, environment.h:18] Single-parameter constructors should be marked explicit. [runtime/explicit]
~ProviderLibrary2() {
// assert(!handle_); // We should already be unloaded at this point (disabled until Python shuts down deterministically)

[cpplint warning, environment.h:20] Lines should be <= 120 characters long [whitespace/line_length]
}

void Load();

void Unload();

interface::ExecutionProvider* CreateExternalEPInstance(const std::unordered_map<std::string, std::string>& provider_options);

[cpplint warning, environment.h:27] Lines should be <= 120 characters long [whitespace/line_length]

private:
const ORTCHAR_T* library_path_;
void* handle_{};

ORT_DISALLOW_COPY_AND_ASSIGNMENT(ProviderLibrary2);
};

/** TODO: remove this class
Provides the runtime environment for onnxruntime.
Create one instance for the duration of execution.
@@ -88,6 +108,10 @@
*/
Status CreateAndRegisterAllocatorV2(const std::string& provider_type, const OrtMemoryInfo& mem_info, const std::unordered_map<std::string, std::string>& options, const OrtArenaCfg* arena_cfg = nullptr);

Status LoadExternalExecutionProvider(const std::string& provider_type, const std::string& library_path);

interface::ExecutionProvider* CreateExternalEPInstance(const std::string& provider_type, const std::unordered_map<std::string, std::string>& provider_options);

[cpplint warning, environment.h:113] Lines should be <= 120 characters long [whitespace/line_length]

private:
ORT_DISALLOW_COPY_ASSIGNMENT_AND_MOVE(Environment);
Status Initialize(std::unique_ptr<logging::LoggingManager> logging_manager,
@@ -99,5 +123,6 @@
std::unique_ptr<onnxruntime::concurrency::ThreadPool> inter_op_thread_pool_;
bool create_global_thread_pools_{false};
std::vector<AllocatorPtr> shared_allocators_;
std::unordered_map<std::string, std::unique_ptr<ProviderLibrary2>> external_ep_libs_;

[cpplint warning, environment.h:126] Add #include <string> for string [build/include_what_you_use]
[cpplint warning, environment.h:126] Add #include <unordered_map> for unordered_map<> [build/include_what_you_use]
};
} // namespace onnxruntime