Fix typos according to reviewdog report. (#21335)
### Description
Fix typos based on reviewdog report but with some
exceptions/corrections.
mindest authored Jul 22, 2024
1 parent 4e75605 commit 5b9369e
Showing 189 changed files with 380 additions and 360 deletions.
2 changes: 1 addition & 1 deletion .gitattributes
@@ -1,4 +1,4 @@
# This sets the default behaviour, overriding core.autocrlf
# This sets the default behavior, overriding core.autocrlf
* text=auto

# All source files should have unix line-endings in the repository,
2 changes: 1 addition & 1 deletion ThirdPartyNotices.txt
@@ -4820,7 +4820,7 @@ SOFTWARE.

----------------------------------------------------------------------------

This is the MIT/Expat Licence. For more information see:
This is the MIT/Expat License. For more information see:

1. http://www.opensource.org/licenses/mit-license.php

2 changes: 1 addition & 1 deletion cmake/onnxruntime.cmake
@@ -150,7 +150,7 @@ endif()

if(CMAKE_SYSTEM_NAME STREQUAL "Android" AND onnxruntime_MINIMAL_BUILD)
# target onnxruntime is a shared library, the dummy __cxa_demangle is only attach to it to avoid
# affecting downstream ort library users with the behaviour of dummy __cxa_demangle. So the dummy
# affecting downstream ort library users with the behavior of dummy __cxa_demangle. So the dummy
# __cxa_demangle must not expose to libonnxruntime_common.a. It works as when the linker is
# creating the DSO, our dummy __cxa_demangle always comes before libc++abi.a so the
# __cxa_demangle in libc++abi.a is discarded, thus, huge binary size reduction.
2 changes: 1 addition & 1 deletion cmake/patches/composable_kernel/Fix_Clang_Build.patch
@@ -44,7 +44,7 @@ index c23746e7f..bc326c8b5 100644
find_package(HIP REQUIRED)
# Override HIP version in config.h, if necessary.
@@ -269,12 +248,6 @@ if( DEFINED CK_OVERRIDE_HIP_VERSION_PATCH )
message(STATUS "CK_HIP_VERSION_PATCH overriden with ${CK_OVERRIDE_HIP_VERSION_PATCH}")
message(STATUS "CK_HIP_VERSION_PATCH overridden with ${CK_OVERRIDE_HIP_VERSION_PATCH}")
endif()
message(STATUS "Build with HIP ${HIP_VERSION}")
-link_libraries(hip::device)
@@ -39,7 +39,7 @@ Event {{name.0.value}}
Operator {{name.0.value}}
{{/inOperator}}
{{#inEii}}
Explict Interface Implementation {{name.0.value}}
Explicit Interface Implementation {{name.0.value}}
{{/inEii}}
{{#inVariable}}
Variable {{name.0.value}}
4 changes: 2 additions & 2 deletions dockerfiles/README.md
@@ -32,7 +32,7 @@
docker run -it onnxruntime-source
```

The docker file supports both x86_64 and ARM64(aarch64). You may use docker's "--platform" parameter to explictly specify which CPU architecture you want to build. For example:
The docker file supports both x86_64 and ARM64(aarch64). You may use docker's "--platform" parameter to explicitly specify which CPU architecture you want to build. For example:

```bash
docker build --platform linux/arm64/v8 -f Dockerfile.source
@@ -274,7 +274,7 @@ Note: You may add --use_tensorrt and --tensorrt_home options if you wish to use
Note: Resulting Docker image will have ONNX Runtime installed in /usr, and ONNX Runtime wheel copied to /onnxruntime directory.
Nothing else from ONNX Runtime source tree will be copied/installed to the image.

Note: When running the container you built in Docker, please either use 'nvidia-docker' command instead of 'docker', or use Docker command-line options to make sure NVIDIA runtime will be used and appropiate files mounted from host. Otherwise, CUDA libraries won't be found. You can also [set NVIDIA runtime as default in Docker](https://github.com/dusty-nv/jetson-containers#docker-default-runtime).
Note: When running the container you built in Docker, please either use 'nvidia-docker' command instead of 'docker', or use Docker command-line options to make sure NVIDIA runtime will be used and appropriate files mounted from host. Otherwise, CUDA libraries won't be found. You can also [set NVIDIA runtime as default in Docker](https://github.com/dusty-nv/jetson-containers#docker-default-runtime).

## MIGraphX
**Ubuntu 20.04, ROCm6.0, MIGraphX**
4 changes: 2 additions & 2 deletions docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb
@@ -64,7 +64,7 @@
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, please follow the [Azure ML configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) to set up your environment.\n",
"\n",
"### Install additional packages needed for this Notebook\n",
"You need to install the popular plotting library matplotlib, the image manipulation library opencv, and the onnx library in the conda environment where Azure Maching Learning SDK is installed.\n",
"You need to install the popular plotting library matplotlib, the image manipulation library opencv, and the onnx library in the conda environment where Azure Machine Learning SDK is installed.\n",
"\n",
"```\n",
"(myenv) $ pip install matplotlib onnx opencv-python\n",
@@ -79,7 +79,7 @@
"source": [
"## 1. Obtain a model from the ONNX Model Zoo\n",
"\n",
"For more information on the Facial Emotion Recognition (FER+) model, you can explore the notebook explaning how to deploy [FER+ with ONNX Runtime on an ACI Instance](onnx-inference-facial-expression-recognition-deploy.ipynb)."
"For more information on the Facial Emotion Recognition (FER+) model, you can explore the notebook explaining how to deploy [FER+ with ONNX Runtime on an ACI Instance](onnx-inference-facial-expression-recognition-deploy.ipynb)."
]
},
{
@@ -1129,7 +1129,7 @@ class ThreadPoolTempl : public onnxruntime::concurrency::ExtendedThreadPoolInterface
//
// Ensure that the ThreadPoolParallelSection has sufficient workers to
// execute a loop with degree of parallelism n. We track the number
// of workers already avaiable to the parallel section, prior to
// of workers already available to the parallel section, prior to
// submitting tasks to the work queues to make up the total.
//
// Each worker will call in to worker_fn(idx) with a per-worker thread
12 changes: 8 additions & 4 deletions include/onnxruntime/core/providers/cuda/cuda_context.h
@@ -53,21 +53,25 @@ struct CudaContext : public CustomOpContext {
cudnn_conv_use_max_workspace = FetchResource<bool>(kernel_ctx, CudaResource::cudnn_conv_use_max_workspace_t);

cudnn_conv1d_pad_to_nc1d = FetchResource<bool>(kernel_ctx, CudaResource::cudnn_conv1d_pad_to_nc1d_t);
enable_skip_layer_norm_strict_mode = FetchResource<bool>(kernel_ctx, CudaResource::enable_skip_layer_norm_strict_mode_t);
enable_skip_layer_norm_strict_mode = FetchResource<bool>(
kernel_ctx, CudaResource::enable_skip_layer_norm_strict_mode_t);
prefer_nhwc = FetchResource<bool>(kernel_ctx, CudaResource::prefer_nhwc_t);
use_tf32 = FetchResource<bool>(kernel_ctx, CudaResource::use_tf32_t);
}

template <typename T>
T FetchResource(const OrtKernelContext& kernel_ctx, CudaResource resource_type) {
if constexpr (sizeof(T) > sizeof(void*)) {
ORT_CXX_API_THROW("void* is not large enough to hold resource type: " + std::to_string(resource_type), OrtErrorCode::ORT_INVALID_ARGUMENT);
ORT_CXX_API_THROW("void* is not large enough to hold resource type: " + std::to_string(resource_type),
OrtErrorCode::ORT_INVALID_ARGUMENT);
}
const auto& ort_api = Ort::GetApi();
void* resource = {};
OrtStatus* status = ort_api.KernelContext_GetResource(&kernel_ctx, ORT_CUDA_RESOUCE_VERSION, resource_type, &resource);
OrtStatus* status = ort_api.KernelContext_GetResource(
&kernel_ctx, ORT_CUDA_RESOURCE_VERSION, resource_type, &resource);
if (status) {
ORT_CXX_API_THROW("Failed to fetch cuda ep resource, resouce type: " + std::to_string(resource_type), OrtErrorCode::ORT_RUNTIME_EXCEPTION);
ORT_CXX_API_THROW("Failed to fetch cuda ep resource, resource type: " + std::to_string(resource_type),
OrtErrorCode::ORT_RUNTIME_EXCEPTION);
}
T t = {};
memcpy(&t, &resource, sizeof(T));
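For context, a minimal sketch of how a custom op kernel would consume this context object. The Init call, the cuda_stream field, and the FetchResource plumbing come from the header above; the Ort::Custom namespace, the include path, and the kernel itself are assumptions for illustration.

```cpp
// Hypothetical custom op kernel (illustration only); CudaContext::Init and
// cuda_stream are from the header shown above, the rest is assumed.
#include "onnxruntime/core/providers/cuda/cuda_context.h"

struct MyCudaKernel {
  void Compute(OrtKernelContext* kernel_ctx) {
    Ort::Custom::CudaContext cuda_ctx;
    cuda_ctx.Init(*kernel_ctx);  // fetches stream/handles/flags via KernelContext_GetResource
    // Launch work on the EP's stream so it is ordered with the rest of the session, e.g.:
    // my_kernel<<<grid, block, 0, cuda_ctx.cuda_stream>>>(...);
  }
};
```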
2 changes: 1 addition & 1 deletion include/onnxruntime/core/providers/cuda/cuda_resource.h
@@ -3,7 +3,7 @@

#include "core/providers/resource.h"

#define ORT_CUDA_RESOUCE_VERSION 3
#define ORT_CUDA_RESOURCE_VERSION 3

enum CudaResource : int {
cuda_stream_t = cuda_resource_offset, // 10000
9 changes: 6 additions & 3 deletions include/onnxruntime/core/providers/rocm/rocm_context.h
@@ -23,21 +23,24 @@ struct RocmContext : public CustomOpContext {
void* resource = {};
OrtStatus* status = nullptr;

status = ort_api.KernelContext_GetResource(&kernel_ctx, ORT_ROCM_RESOUCE_VERSION, RocmResource::hip_stream_t, &resource);
status = ort_api.KernelContext_GetResource(
&kernel_ctx, ORT_ROCM_RESOURCE_VERSION, RocmResource::hip_stream_t, &resource);
if (status) {
ORT_CXX_API_THROW("failed to fetch hip stream", OrtErrorCode::ORT_RUNTIME_EXCEPTION);
}
hip_stream = reinterpret_cast<hipStream_t>(resource);

resource = {};
status = ort_api.KernelContext_GetResource(&kernel_ctx, ORT_ROCM_RESOUCE_VERSION, RocmResource::miopen_handle_t, &resource);
status = ort_api.KernelContext_GetResource(
&kernel_ctx, ORT_ROCM_RESOURCE_VERSION, RocmResource::miopen_handle_t, &resource);
if (status) {
ORT_CXX_API_THROW("failed to fetch miopen handle", OrtErrorCode::ORT_RUNTIME_EXCEPTION);
}
miopen_handle = reinterpret_cast<miopenHandle_t>(resource);

resource = {};
status = ort_api.KernelContext_GetResource(&kernel_ctx, ORT_ROCM_RESOUCE_VERSION, RocmResource::rocblas_handle_t, &resource);
status = ort_api.KernelContext_GetResource(
&kernel_ctx, ORT_ROCM_RESOURCE_VERSION, RocmResource::rocblas_handle_t, &resource);
if (status) {
ORT_CXX_API_THROW("failed to fetch rocblas handle", OrtErrorCode::ORT_RUNTIME_EXCEPTION);
}
2 changes: 1 addition & 1 deletion include/onnxruntime/core/providers/rocm/rocm_resource.h
@@ -3,7 +3,7 @@

#include "core/providers/resource.h"

#define ORT_ROCM_RESOUCE_VERSION 1
#define ORT_ROCM_RESOURCE_VERSION 1

enum RocmResource : int {
hip_stream_t = rocm_resource_offset,
19 changes: 10 additions & 9 deletions include/onnxruntime/core/session/onnxruntime_c_api.h
@@ -473,13 +473,13 @@ typedef struct OrtCUDAProviderOptions {

/** \brief Enable TunableOp for using.
* Set it to 1/0 to enable/disable TunableOp. Otherwise, it is disabled by default.
* This option can be overriden by environment variable ORT_CUDA_TUNABLE_OP_ENABLE.
* This option can be overridden by environment variable ORT_CUDA_TUNABLE_OP_ENABLE.
*/
int tunable_op_enable;

/** \brief Enable TunableOp for tuning.
* Set it to 1/0 to enable/disable TunableOp tuning. Otherwise, it is disabled by default.
* This option can be overriden by environment variable ORT_CUDA_TUNABLE_OP_TUNING_ENABLE.
* This option can be overridden by environment variable ORT_CUDA_TUNABLE_OP_TUNING_ENABLE.
*/
int tunable_op_tuning_enable;

@@ -562,13 +562,13 @@ typedef struct OrtROCMProviderOptions {

/** \brief Enable TunableOp for using.
* Set it to 1/0 to enable/disable TunableOp. Otherwise, it is disabled by default.
* This option can be overriden by environment variable ORT_ROCM_TUNABLE_OP_ENABLE.
* This option can be overridden by environment variable ORT_ROCM_TUNABLE_OP_ENABLE.
*/
int tunable_op_enable;

/** \brief Enable TunableOp for tuning.
* Set it to 1/0 to enable/disable TunableOp tuning. Otherwise, it is disabled by default.
* This option can be overriden by environment variable ORT_ROCM_TUNABLE_OP_TUNING_ENABLE.
* This option can be overridden by environment variable ORT_ROCM_TUNABLE_OP_TUNING_ENABLE.
*/
int tunable_op_tuning_enable;
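As a usage sketch (not part of this commit): these two fields can be set directly on the legacy provider-options struct when creating a session. The field names and the environment-variable overrides come from the comments above; the session wiring assumes the public C++ wrapper API and a hypothetical model path.

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  // Enable TunableOp on the CUDA EP; ORT_CUDA_TUNABLE_OP_ENABLE /
  // ORT_CUDA_TUNABLE_OP_TUNING_ENABLE can still override these at runtime.
  OrtCUDAProviderOptions cuda_options{};
  cuda_options.device_id = 0;
  cuda_options.tunable_op_enable = 1;         // use tuned kernels
  cuda_options.tunable_op_tuning_enable = 1;  // allow tuning to run

  Ort::Env env;
  Ort::SessionOptions session_options;
  session_options.AppendExecutionProvider_CUDA(cuda_options);
  Ort::Session session(env, ORT_TSTR("model.onnx"), session_options);  // hypothetical model
  return 0;
}
```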

@@ -2798,7 +2798,7 @@ struct OrtApi {
* "initial_growth_chunk_size_bytes": (Possible) Size of the second allocation in the arena.
* Only relevant if arena strategy is `kNextPowerOfTwo`. Use -1 to allow ORT to choose the default.
* "max_power_of_two_extend_bytes": The maximum enxtend size if arena strategy is `kNextPowerOfTwo`.
* It is not an allocation limit, it is only a limit for extention when requested byte is less than the limit.
* It is not an allocation limit, it is only a limit for extension when requested byte is less than the limit.
* When requested bytes is more than the limit, allocator will still return as requested.
* Use -1 to allow ORT to choose the default 1GB for max_power_of_two_extend_bytes.
* Ultimately, the allocation size is determined by the allocation memory request.
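A hedged sketch of supplying these keys programmatically, assuming the CreateArenaCfgV2 C API accepts the key names documented above; the helper function itself is illustrative.

```cpp
#include <onnxruntime_cxx_api.h>

// Build an OrtArenaCfg that keeps kNextPowerOfTwo but caps each extension at 1 GB.
OrtArenaCfg* MakeArenaCfg() {
  const OrtApi& api = Ort::GetApi();
  const char* keys[] = {"arena_extend_strategy", "max_power_of_two_extend_bytes"};
  const size_t values[] = {0 /* kNextPowerOfTwo */, 1073741824 /* 1 GB cap per extension */};
  OrtArenaCfg* cfg = nullptr;
  Ort::ThrowOnError(api.CreateArenaCfgV2(keys, values, 2, &cfg));
  return cfg;  // caller releases with api.ReleaseArenaCfg(cfg)
}
```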
@@ -4467,13 +4467,14 @@ struct OrtApi {
* E.g. a cuda stream or a cublas handle
*
* \param context - Kernel context
* \param resouce_version - Version of the resource
* \param resource_version - Version of the resource
* \param resource_id - Type of resource
* \param resource - A pointer to returned resource
*
* \since Version 1.16.
*/
ORT_API2_STATUS(KernelContext_GetResource, _In_ const OrtKernelContext* context, _In_ int resouce_version, _In_ int resource_id, _Outptr_ void** resource);
ORT_API2_STATUS(KernelContext_GetResource, _In_ const OrtKernelContext* context, _In_ int resource_version,
_In_ int resource_id, _Outptr_ void** resource);

/** \brief Set user logging function
*
@@ -4528,10 +4529,10 @@
ORT_API2_STATUS(ShapeInferContext_GetAttribute, _In_ const OrtShapeInferContext* context, _In_ const char* attr_name, _Outptr_ const OrtOpAttr** attr);

/**
* Set type and shape info of an ouput
* Set type and shape info of an output
*
* \param[in] context
* \param[in] index The index of the ouput
* \param[in] index The index of the output
* \param[out] info Type shape info of the output
*
* \since Version 1.17.
@@ -403,7 +403,7 @@ using Variadic = TensorArray;
Note:
OrtLiteCustomOp inherits from OrtCustomOp to bridge tween a custom func/struct and ort core.
The lifetime of an OrtLiteCustomOp instance is managed by customer code, not ort, so:
1. DO NOT cast OrtLiteCustomOp to OrtCustomOp and release since there is no virtual destructor in the hierachy.
1. DO NOT cast OrtLiteCustomOp to OrtCustomOp and release since there is no virtual destructor in the hierarchy.
2. OrtLiteCustomFunc and OrtLiteCustomStruct, as two sub-structs, can be released in form of OrtLiteCustomOp since all members are kept in the OrtLiteCustomOp,
hence memory could still be recycled properly.
Further, OrtCustomOp is a c struct bearing no v-table, so offspring structs are by design to be of zero virtual functions to maintain cast safety.
2 changes: 1 addition & 1 deletion java/build.gradle
@@ -54,7 +54,7 @@ java {
targetCompatibility = JavaVersion.VERSION_1_8
}

// This jar tasks serves as a CMAKE signalling
// This jar tasks serves as a CMAKE signaling
// mechanism. The jar will be overwritten by allJar task
jar {
}
2 changes: 1 addition & 1 deletion java/src/main/java/ai/onnxruntime/OnnxRuntime.java
@@ -438,7 +438,7 @@ private static String mapLibraryName(String library) {
/**
* Extracts the providers array from the C API, converts it into an EnumSet.
*
* <p>Throws IllegalArgumentException if a provider isn't recognised (note this exception should
* <p>Throws IllegalArgumentException if a provider isn't recognized (note this exception should
* only happen during development of ONNX Runtime, if it happens at any other point, file an issue
* on <a href="https://github.com/microsoft/onnxruntime">GitHub</a>).
*
@@ -3,5 +3,5 @@
* Licensed under the MIT License.
*/

/** Classes for controlling the behaviour of ONNX Runtime Execution Providers. */
/** Classes for controlling the behavior of ONNX Runtime Execution Providers. */
package ai.onnxruntime.providers;
2 changes: 1 addition & 1 deletion java/src/test/java/sample/ScoreMNIST.java
@@ -242,7 +242,7 @@ public static void writeDataSKL(float[][] data, int[] indices, float[] values) {
/**
* Find the maximum probability and return it's index.
*
* @param probabilities The probabilites.
* @param probabilities The probabilities.
* @return The index of the max.
*/
public static int pred(float[] probabilities) {
2 changes: 1 addition & 1 deletion js/web/lib/onnxjs/backends/webgl/glsl-coordinate-lib.ts
@@ -1234,7 +1234,7 @@ export class CoordsGlslLib extends GlslLib {
}

/**
* This is the main function to map from the given texture coordiantes (s,t)
* This is the main function to map from the given texture coordinates (s,t)
* to logical indices for the output
* There will only be one single variation of this
* Also see coordsToOffset and offsetToIndices for input-specific versions
2 changes: 1 addition & 1 deletion js/web/lib/onnxjs/backends/webgl/ops/pack.ts
@@ -85,7 +85,7 @@ function getOutOfBoundsCondition(rank: number, shape: readonly number[], dims: s
}

/**
* code snippet to sample input texture with output coordiantes
* code snippet to sample input texture with output coordinates
*/
function getOutput(shape: readonly number[], dims: string[]): string {
const rank = shape.length;
2 changes: 1 addition & 1 deletion onnxruntime/contrib_ops/cpu/attnlstm/deep_cpu_attn_lstm.h
@@ -19,7 +19,7 @@ using onnxruntime::rnn::detail::Direction;
using onnxruntime::rnn::detail::MakeDirection;

// The class represents DeepCPU implementation of a long short term memory (LSTM) plus a Bahdanau Attention wraper.
// The equivilent python usage could be checked int the corresponding op test directory, attention_lstm_data_gen.py.
// The equivalent python usage could be checked int the corresponding op test directory, attention_lstm_data_gen.py.
// Also please note that detail implementation re-used lot of code from current ONNXRuntime LSTM operator, refactor
// is needed in future if this is become part of ONNX.
class DeepCpuAttnLstmOp final : public OpKernel {
@@ -152,7 +152,7 @@ Status Sample(AllocatorPtr& allocator,
1,
generator,
*sampled_idx));
// TODO: update presense_mask()
// TODO: update presence_mask()
#ifdef DEBUG_GENERATION
dumper->Print("sampled_idx", *sampled_idx);
#endif
2 changes: 1 addition & 1 deletion onnxruntime/core/codegen/common/common.cc
@@ -159,7 +159,7 @@ std::unique_ptr<ComputeCapability> ToCapacity(const onnxruntime::GraphViewer& gr
ORT_THROW_IF_ERROR(node.ForEachWithIndex(node.ImplicitInputDefs(), process_input_fn));

// Handle outouts
// two cases are considerd as outputs
// two cases are considered as outputs
// 1. Output NodeArg is not used by any Node
// 2. Output NodeArg is used by at least one Node out of this subgraph.
// Note a NodeArg can be used by Nodes in and out of the subgraph at the same time.
2 changes: 1 addition & 1 deletion onnxruntime/core/codegen/mti/common.h
@@ -8,7 +8,7 @@

#define MTI_ASSERT(condition) \
if (!(condition)) { \
std::string error_msg = "Not satsified: " #condition \
std::string error_msg = "Not satisfied: " #condition \
": line " + \
std::to_string(__LINE__) + \
" in file " + std::string(__FILE__) + "\n"; \
4 changes: 2 additions & 2 deletions onnxruntime/core/codegen/passes/scheduler/schedule_utils.cc
@@ -74,7 +74,7 @@ bool ShouldTryVectorization(
// Check the schedule of tensor
// If it is not scheduled, try to vectorize it.
// Note TryVectorization has to use with compute_root.
// Therefore, there is a safty check of tensor's schedule
// Therefore, there is a safety check of tensor's schedule
bool TryVectorization(
const tvm::Tensor& tensor,
int64_t natural_vector_size,
@@ -124,7 +124,7 @@ bool TryVectorization(
// Check the schedule of tensor
// If it is not scheduled, try to add compute_inline on it.
// Note TryInlineSchedule cannot be used with compute_root.
// Therefore, there is a safty check of tensor's schedule.
// Therefore, there is a safety check of tensor's schedule.
bool TryInlineSchedule(
const tvm::Tensor& tensor,
ScheduleContext& ctx) {
4 changes: 2 additions & 2 deletions onnxruntime/core/codegen/passes/scheduler/schedule_utils.h
@@ -34,7 +34,7 @@ bool ShouldTryVectorization(
// Check the schedule of tensor
// If it is not scheduled, try to vectorize it.
// Note TryVectorization has to use with compute_root.
// Therefore, there is a safty check of tensor's schedule
// Therefore, there is a safety check of tensor's schedule
bool TryVectorization(
const tvm::Tensor& tensor,
int64_t natural_vector_size,
@@ -43,7 +43,7 @@ bool TryVectorization(
// Check the schedule of tensor
// If it is not scheduled, try to add compute_inline on it.
// Note TryInlineSchedule cannot be used with compute_root.
// Therefore, there is a safty check of tensor's schedule.
// Therefore, there is a safety check of tensor's schedule.
bool TryInlineSchedule(
const tvm::Tensor& tensor,
ScheduleContext& ctx);
@@ -39,7 +39,7 @@ void TVMScheduleBuilder::DumpAllSchedulers() const {

d->ForEach([&stream](const std::string& key, Scheduler* op) {
stream << "Key " << key
<< ", Creater " << op->Name() << std::endl;
<< ", Creator " << op->Name() << std::endl;
});

++count;
@@ -13,7 +13,7 @@ namespace tvm_codegen {

using CoordTransFunc = std::function<tvm::Array<tvm::Expr>(const tvm::Array<tvm::Expr>&)>;

// WeightLayout is data layout trasnformer for weight/initializer
// WeightLayout is data layout transformer for weight/initializer
class WeightLayout {
public:
// Static function to return unique string as a key