From 5a83a67f32da784b7dd4029bb23e7cc05363b748 Mon Sep 17 00:00:00 2001
From: Adrian Lizarraga
Date: Fri, 25 Aug 2023 09:57:51 -0700
Subject: [PATCH] Support QDQ transformations with com.microsoft.Quantize/Dequantize ops (#17127)

### Description
- Enables int32 support for com.microsoft.DequantizeLinear (contrib op)
- Makes the `zero_point` input optional for the Quantize/Dequantize contrib ops
- Enables QDQ transformations with the Quantize/Dequantize contrib ops
- Updates tests: EnsureUniqueDQForNodeUnitTests, QDQTransformerTests, TransposeOptimizerTests

### Testing
List of tested graph transformations:
- [x] QDQSelectorActionTransformer - qdq_transformer_test.cc
- [x] QDQS8ToU8Transformer - qdq_transformer_test.cc
- [x] DoubleQDQPairsRemover - qdq_transformer_test.cc
- [x] IdenticalChildrenConsolidation - qdq_transformer_test.cc
- [x] QDQPropagation - qdq_transformer_test.cc
- [x] QDQFinalCleanup - qdq_transformer_test.cc
- [x] ClipQuantFusion - qdq_transformer_test.cc
- [x] ReluQuantFusion - qdq_transformer_test.cc
- [x] EnsureUniqueDQForNodeUnit - ensure_unique_dq_for_node_unit_test.cc
- [x] TransposeOptimizer - transpose_optimizer_test.cc
- [x] CommonSubexpressionElimination - graph_transform_test.cc
- [x] ConstantFolding - graph_transform_test.cc

### Motivation and Context
We need to [support mixed 16-bit/8-bit precision QDQ models](https://github.com/microsoft/onnxruntime/pull/17015). This PR is the first step toward that goal: the QDQ contrib ops must first work with our optimizations/transformations.
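The test helper added by this PR, convert_qdq_ops_to_ms_domain.py, regenerates test models by moving Q/DQ nodes into the com.microsoft domain. As a rough, self-contained sketch of that idea (the real script operates on an onnx.ModelProto and also registers the com.microsoft opset import; the dict-based node records here are stand-ins purely for illustration):

```python
# Hedged sketch: rewrite every QuantizeLinear/DequantizeLinear node into the
# com.microsoft domain. Nodes are plain dicts here, not real ONNX protos.

QDQ_OPS = {"QuantizeLinear", "DequantizeLinear"}
MS_DOMAIN = "com.microsoft"

def convert_qdq_nodes_to_ms_domain(nodes):
    """Return a copy of `nodes` with Q/DQ ops moved to the com.microsoft domain."""
    converted = []
    for node in nodes:
        node = dict(node)  # shallow copy so the input model is left untouched
        if node["op_type"] in QDQ_OPS:
            node["domain"] = MS_DOMAIN
        converted.append(node)
    return converted
```

Non-Q/DQ nodes keep their original domain, which mirrors how the script only retargets the two quantization ops.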
---------

Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Scott McKay
---
 docs/ContribOperators.md                      |   20 +-
 docs/OperatorKernels.md                       |    2 +-
 .../contrib_ops/cpu/cpu_contrib_kernels.cc    |    2 +
 .../cpu/quantization/quantize_ops.cc          |    9 +
 .../graph/contrib_ops/quantization_defs.cc    |   38 +-
 .../common_subexpression_elimination.cc       |    3 +-
 .../core/optimizer/constant_folding.cc        |   23 +-
 .../optimizer/double_qdq_pairs_remover.cc     |    2 +-
 .../layout_transformation.cc                  |    2 +-
 .../qdq_transformer/clip_quantizelinear.cc    |    3 +-
 .../ensure_unique_dq_for_node_unit.cc         |    4 +-
 .../qdq_transformer/qdq_propagation.cc        |   14 +-
 .../optimizer/qdq_transformer/qdq_util.cc     |    6 +-
 .../qdq_transformer/relu_quantizelinear.cc    |    3 +-
 .../onnx_transpose_optimization.cc            |   62 +-
 .../onnx_transpose_optimization.h             |   12 +
 .../ort_transpose_optimization.cc             |   23 +-
 .../core/optimizer/transpose_optimizer.cc     |    3 +-
 onnxruntime/core/optimizer/utils.cc           |   13 +-
 .../test/contrib_ops/quantize_ops_test.cc     |   11 +
 .../ensure_unique_dq_for_node_unit_test.cc    |  115 +-
 .../test/optimizer/graph_transform_test.cc    |  102 ++
 .../optimizer/graph_transform_test_builder.h  |   24 +-
 onnxruntime/test/optimizer/qdq_test_utils.cc  |   35 +-
 onnxruntime/test/optimizer/qdq_test_utils.h   |  177 ++-
 .../test/optimizer/qdq_transformer_test.cc    | 1400 +++++++++++------
 .../optimizer/transpose_optimizer_test.cc     |  391 +++--
 .../transform/convert_qdq_ops_to_ms_domain.py |   74 +
 ..._folding_dequantizelinear.qdq_contrib.onnx |  Bin 0 -> 2786 bytes
 ...dq_node_unit.graph_output.qdq_contrib.onnx |  Bin 0 -> 1994 bytes
 ...ant_folding_qdq_node_unit.qdq_contrib.onnx |  Bin 0 -> 2058 bytes
 ...i_consumer_dq_nodes.fixed.qdq_contrib.onnx |  Bin 0 -> 149186 bytes
 32 files changed, 1656 insertions(+), 917 deletions(-)
 create mode 100644 onnxruntime/test/testdata/transform/convert_qdq_ops_to_ms_domain.py
 create mode 100644 onnxruntime/test/testdata/transform/fusion/constant_folding_dequantizelinear.qdq_contrib.onnx
 create mode 100644 onnxruntime/test/testdata/transform/fusion/constant_folding_qdq_node_unit.graph_output.qdq_contrib.onnx
 create mode 100644 onnxruntime/test/testdata/transform/fusion/constant_folding_qdq_node_unit.qdq_contrib.onnx
 create mode 100644 onnxruntime/test/testdata/transform/qdq_with_multi_consumer_dq_nodes.fixed.qdq_contrib.onnx

diff --git a/docs/ContribOperators.md b/docs/ContribOperators.md
index 63a7289dd91d8..5bd1a89c0dea1 100644
--- a/docs/ContribOperators.md
+++ b/docs/ContribOperators.md
@@ -1330,15 +1330,15 @@ This version of the operator has been available since version 1 of the 'com.micr
 The axis along which same quantization parameters are applied. It's optional.If it's not specified, it means per-tensor quantization and input 'x_scale' and 'x_zero_point' must be scalars.If it's specified, it means per 'axis' quantization and input 'x_scale' and 'x_zero_point' must be 1-D tensors.

-#### Inputs
+#### Inputs (2 - 3)

 x : T1
 N-D quantized Input tensor to be de-quantized.
 x_scale : T2
-Scale for input 'x'. It could be a scalar or a 1-D tensor, which means a per-tensor or per-axis quantization.If it's a 1-D tensor, its number of elements should be equal to the dimension value of 'axis' dimension of input 'x'.
-x_zero_point : T1
-Zero point for input 'x'. It could be a scalar or a 1-D tensor, which means a per-tensor or per-axis quantization.If it's a 1-D tensor, its number of elements should be equal to the dimension value of 'axis' dimension of input 'x'.
+Scale for input 'x'. It can be a scalar, which means a per-tensor/layer dequantization, or a 1-D tensor for per-axis dequantization.
+x_zero_point (optional) : T1
+Zero point for input 'x'. Shape must match x_scale. It's optional. Zero point is 0 when it's not specified.

 #### Outputs

@@ -1351,8 +1351,8 @@ This version of the operator has been available since version 1 of the 'com.micr

 #### Type Constraints

-T1 : tensor(int8), tensor(uint8)
-Constrain 'x' and 'x_zero_point' to 8-bit integer tensors.
+T1 : tensor(int8), tensor(uint8), tensor(int32)
+Constrain 'x' and 'x_zero_point' to 8-bit integer tensors or 32-bit signed integer tensors.
 T2 : tensor(float16), tensor(float)
 Constrain 'y', 'x_scale' to float tensors.
@@ -4209,15 +4209,15 @@ This version of the operator has been available since version 1 of the 'com.micr
 The axis along which same quantization parameters are applied. It's optional.If it's not specified, it means per-tensor quantization and input 'x_scale' and 'x_zero_point' must be scalars.If it's specified, it means per 'axis' quantization and input 'x_scale' and 'x_zero_point' must be 1-D tensors.

-#### Inputs
+#### Inputs (2 - 3)

 x : T1
 N-D full precision Input tensor to be quantized.
 y_scale : T1
-Scale for doing quantization to get 'y'. It could be a scalar or a 1-D tensor,which means a per-tensor or per-axis quantization. If it's a 1-D tensor, its number of elements should be equal to the dimension value of 'axis' dimension of input 'x'.
-y_zero_point : T2
-Zero point for doing quantization to get 'y'. It could be a scalar or a 1-D tensor, which means a per-tensoror per-axis quantization. If it's a 1-D tensor, its number of elements should be equal to the dimension value of 'axis' dimension of input 'x'.
+Scale for doing quantization to get 'y'. It can be a scalar, which means per-tensor/layer quantization, or a 1-D tensor for per-axis quantization.
+y_zero_point (optional) : T2
+Zero point for doing quantization to get 'y'. Shape must match y_scale. Default is uint8 with zero point of 0 if it's not specified.
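As a rough illustration of the per-tensor semantics documented above (this is not the ORT kernel, just a NumPy sketch; the rounding and saturation behavior follow the standard ONNX QuantizeLinear definition):

```python
import numpy as np

def quantize_linear(x, y_scale, y_zero_point=None):
    """Sketch of per-tensor QuantizeLinear with an optional zero point.

    When y_zero_point is omitted, the output defaults to uint8 with a zero
    point of 0, matching the contrib-op behavior described above.
    """
    if y_zero_point is None:
        zp, dtype = 0, np.uint8
    else:
        zp, dtype = int(y_zero_point), y_zero_point.dtype
    info = np.iinfo(dtype)
    # round-half-to-even, add zero point, then saturate to the output type
    y = np.round(x / y_scale) + zp
    return np.clip(y, info.min, info.max).astype(dtype)
```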
 #### Outputs

diff --git a/docs/OperatorKernels.md b/docs/OperatorKernels.md
index c76f760ef04bd..2e6f329363a50 100644
--- a/docs/OperatorKernels.md
+++ b/docs/OperatorKernels.md
@@ -439,7 +439,7 @@ Do not modify directly.*
 |CDist|*in* A:**T**<br> *in* B:**T**<br> *out* C:**T**|1+|**T** = tensor(double), tensor(float)|
 |ConvTransposeWithDynamicPads|*in* X:**T**<br> *in* W:**T**<br> *in* Pads:**tensor(int64)**<br> *in* B:**T**<br> *out* Y:**T**|1+|**T** = tensor(float)|
 |CropAndResize|*in* X:**T1**<br> *in* rois:**T1**<br> *in* batch_indices:**T2**<br> *in* crop_size:**T2**<br> *out* Y:**T1**|1+|**T1** = tensor(float)<br> **T2** = tensor(int32)|
-|DequantizeLinear|*in* x:**T1**<br> *in* x_scale:**T2**<br> *in* x_zero_point:**T1**<br> *out* y:**T2**|1+|**T1** = tensor(int8), tensor(uint8)<br> **T2** = tensor(float)|
+|DequantizeLinear|*in* x:**T1**<br> *in* x_scale:**T2**<br> *in* x_zero_point:**T1**<br> *out* y:**T2**|1+|**T1** = tensor(int32), tensor(int8), tensor(uint8)<br> **T2** = tensor(float)|
 |DynamicQuantizeLSTM|*in* X:**T**<br> *in* W:**T2**<br> *in* R:**T2**<br> *in* B:**T**<br> *in* sequence_lens:**T1**<br> *in* initial_h:**T**<br> *in* initial_c:**T**<br> *in* P:**T**<br> *in* W_scale:**T**<br> *in* W_zero_point:**T2**<br> *in* R_scale:**T**<br> *in* R_zero_point:**T2**<br> *out* Y:**T**<br> *out* Y_h:**T**<br> *out* Y_c:**T**|1+|**T** = tensor(float)<br> **T1** = tensor(int32)<br> **T2** = tensor(int8), tensor(uint8)|
 |DynamicQuantizeMatMul|*in* A:**T1**<br> *in* B:**T2**<br> *in* b_scale:**T1**<br> *in* b_zero_point:**T2**<br> *in* bias:**T1**<br> *out* Y:**T1**|1+|**T1** = tensor(float)<br> **T2** = tensor(int8), tensor(uint8)|
 |EmbedLayerNormalization|*in* input_ids:**T1**<br> *in* segment_ids:**T1**<br> *in* word_embedding:**T**<br> *in* position_embedding:**T**<br> *in* segment_embedding:**T**<br> *in* gamma:**T**<br> *in* beta:**T**<br> *in* mask:**T1**<br> *in* position_ids:**T1**<br> *out* output:**T**<br> *out* mask_index:**T1**<br> *out* embedding_sum:**T**|1+|**T** = tensor(float)|

diff --git a/onnxruntime/contrib_ops/cpu/cpu_contrib_kernels.cc b/onnxruntime/contrib_ops/cpu/cpu_contrib_kernels.cc
index d6d844e2451c7..660c8bd9e0624 100644
--- a/onnxruntime/contrib_ops/cpu/cpu_contrib_kernels.cc
+++ b/onnxruntime/contrib_ops/cpu/cpu_contrib_kernels.cc
@@ -56,6 +56,7 @@ class ONNX_OPERATOR_KERNEL_CLASS_NAME(kCpuExecutionProvider, kMSDomain, 1, QLine
 class ONNX_OPERATOR_KERNEL_CLASS_NAME(kCpuExecutionProvider, kMSDomain, 1, QLinearAveragePool);
 class ONNX_OPERATOR_TYPED_KERNEL_CLASS_NAME(kCpuExecutionProvider, kMSDomain, 1, uint8_t, DequantizeLinear);
 class ONNX_OPERATOR_TYPED_KERNEL_CLASS_NAME(kCpuExecutionProvider, kMSDomain, 1, int8_t, DequantizeLinear);
+class ONNX_OPERATOR_TYPED_KERNEL_CLASS_NAME(kCpuExecutionProvider, kMSDomain, 1, int32_t, DequantizeLinear);
 class ONNX_OPERATOR_TYPED_KERNEL_CLASS_NAME(kCpuExecutionProvider, kMSDomain, 1, uint8_t, QuantizeLinear);
 class ONNX_OPERATOR_TYPED_KERNEL_CLASS_NAME(kCpuExecutionProvider, kMSDomain, 1, int8_t, QuantizeLinear);
 class ONNX_OPERATOR_TYPED_KERNEL_CLASS_NAME(kCpuExecutionProvider, kMSDomain, 1, uint8_t, QLinearLeakyRelu);
@@ -190,6 +191,7 @@ Status RegisterQuantizationKernels(KernelRegistry& kernel_registry) {
       BuildKernelCreateInfo,
       BuildKernelCreateInfo,
       BuildKernelCreateInfo,
+      BuildKernelCreateInfo,
       BuildKernelCreateInfo,
       BuildKernelCreateInfo,
       BuildKernelCreateInfo,

diff --git a/onnxruntime/contrib_ops/cpu/quantization/quantize_ops.cc b/onnxruntime/contrib_ops/cpu/quantization/quantize_ops.cc
index 9d2931e9c98d0..28a304bfc7f0e 100644
--- a/onnxruntime/contrib_ops/cpu/quantization/quantize_ops.cc
+++ b/onnxruntime/contrib_ops/cpu/quantization/quantize_ops.cc
@@ -25,6 +25,15 @@ ONNX_CPU_OPERATOR_TYPED_MS_KERNEL(
         .TypeConstraint("T2", DataTypeImpl::GetTensorType()),
     DequantizeLinear);

+ONNX_CPU_OPERATOR_TYPED_MS_KERNEL(
+    DequantizeLinear,
+    1,
+    int32_t,
+    KernelDefBuilder()
+        .TypeConstraint("T1", DataTypeImpl::GetTensorType())
+        .TypeConstraint("T2", DataTypeImpl::GetTensorType()),
+    DequantizeLinear);
+
 ONNX_CPU_OPERATOR_TYPED_MS_KERNEL(
     QuantizeLinear,
     1,

diff --git a/onnxruntime/core/graph/contrib_ops/quantization_defs.cc b/onnxruntime/core/graph/contrib_ops/quantization_defs.cc
index 375c5878d1151..aa2ad9f1ff6b1 100644
--- a/onnxruntime/core/graph/contrib_ops/quantization_defs.cc
+++ b/onnxruntime/core/graph/contrib_ops/quantization_defs.cc
@@ -152,23 +152,24 @@ ONNX_MS_OPERATOR_SET_SCHEMA(
                   AttributeProto::INT, false)
         .Input(0, "x", "N-D full precision Input tensor to be quantized.", "T1")
         .Input(1, "y_scale",
-               "Scale for doing quantization to get 'y'. It could be a scalar or a 1-D tensor,"
-               "which means a per-tensor or per-axis quantization. If it's a 1-D tensor, "
-               "its number of elements should be equal to the dimension value of 'axis' dimension of input 'x'.",
+               "Scale for doing quantization to get 'y'. It can be a scalar, which means per-tensor/layer "
+               "quantization, or a 1-D tensor for per-axis quantization.",
                "T1")
         .Input(2, "y_zero_point",
-               "Zero point for doing quantization to get 'y'. It could be a scalar or a 1-D tensor, which means a "
-               "per-tensor"
-               "or per-axis quantization. If it's a 1-D tensor, its number of elements should be equal to the "
-               "dimension value of 'axis' dimension of input 'x'.",
-               "T2")
+               "Zero point for doing quantization to get 'y'. Shape must match y_scale. Default is "
+               "uint8 with zero point of 0 if it's not specified.",
+               "T2", OpSchema::Optional)
         .Output(0, "y", "N-D quantized output tensor. It has same shape as input 'x'.", "T2")
         .TypeConstraint("T1", {"tensor(float16)", "tensor(float)"}, "Constrain 'x', 'y_scale' to float tensors.")
         .TypeConstraint("T2", {"tensor(int8)", "tensor(uint8)"},
                         "Constrain 'y_zero_point' and 'y' to 8-bit integer tensors.")
         .SetDoc(QuantizeLinear_ver1_doc)
         .TypeAndShapeInferenceFunction([](ONNX_NAMESPACE::InferenceContext& ctx) {
-          propagateElemTypeFromInputToOutput(ctx, 2, 0);
+          if (ctx.getNumInputs() == 3 && ctx.getInputType(2) != nullptr) {
+            propagateElemTypeFromInputToOutput(ctx, 2, 0);
+          } else {
+            updateOutputElemType(ctx, 0, ONNX_NAMESPACE::TensorProto::UINT8);
+          }
           if (!hasInputShape(ctx, 0)) return;
@@ -192,21 +193,18 @@ ONNX_MS_OPERATOR_SET_SCHEMA(DequantizeLinear, 1,
                   AttributeProto::INT, false)
         .Input(0, "x", "N-D quantized Input tensor to be de-quantized.", "T1")
         .Input(1, "x_scale",
-               "Scale for input 'x'. It could be a scalar or a 1-D tensor, which means a "
-               "per-tensor or per-axis quantization."
-               "If it's a 1-D tensor, its number of elements should be equal to the dimension "
-               "value of 'axis' dimension of input 'x'.",
+               "Scale for input 'x'. It can be a scalar, which means a per-tensor/layer "
+               "dequantization, or a 1-D tensor for per-axis dequantization.",
                "T2")
         .Input(2, "x_zero_point",
-               "Zero point for input 'x'. It could be a scalar or a 1-D tensor, which means a "
-               "per-tensor or per-axis quantization."
-               "If it's a 1-D tensor, its number of elements should be equal to the dimension "
-               "value of 'axis' dimension of input 'x'.",
-               "T1")
+               "Zero point for input 'x'. Shape must match x_scale. It's optional. "
+               "Zero point is 0 when it's not specified.",
+               "T1", OpSchema::Optional)
         .Output(0, "y", "N-D full precision output tensor. It has same shape as input 'x'.", "T2")
-        .TypeConstraint("T1", {"tensor(int8)", "tensor(uint8)"},
-                        "Constrain 'x' and 'x_zero_point' to 8-bit integer tensors.")
+        .TypeConstraint("T1", {"tensor(int8)", "tensor(uint8)", "tensor(int32)"},
+                        "Constrain 'x' and 'x_zero_point' to 8-bit integer tensors or 32-bit "
+                        "signed integer tensors.")
         .TypeConstraint("T2", {"tensor(float16)", "tensor(float)"},
                         "Constrain 'y', 'x_scale' to float tensors.")
         .SetDoc(DequantizeLinear_ver1_doc)

diff --git a/onnxruntime/core/optimizer/common_subexpression_elimination.cc b/onnxruntime/core/optimizer/common_subexpression_elimination.cc
index e94d8179e758d..b2e7ef0b4f558 100644
--- a/onnxruntime/core/optimizer/common_subexpression_elimination.cc
+++ b/onnxruntime/core/optimizer/common_subexpression_elimination.cc
@@ -324,7 +324,8 @@ bool IsNodeSupported(const Node& node) {
   // would result in it having multiple consumers for its output, and it being used in multiple QDQ node groups.
   return !node.ContainsSubgraph() &&
          optimizer_utils::IsOperationDeterministic(node.Domain(), node.OpType()) &&
-         !(node.Domain() == kOnnxDomain && node.OpType() == "DequantizeLinear");
+         !(node.Domain() == kOnnxDomain && node.OpType() == "DequantizeLinear") &&
+         !(node.Domain() == kMSDomain && node.OpType() == "DequantizeLinear");
 }
 }  // namespace

diff --git a/onnxruntime/core/optimizer/constant_folding.cc b/onnxruntime/core/optimizer/constant_folding.cc
index 80e2bbedef974..eb130785add1c 100644
--- a/onnxruntime/core/optimizer/constant_folding.cc
+++ b/onnxruntime/core/optimizer/constant_folding.cc
@@ -123,17 +123,6 @@ Status ConstantFolding::ApplyImpl(Graph& graph, bool& modified, int graph_level,
     } else {
       InitializedTensorSet constant_inputs;

-      // we currently constant fold using the CPU EP only.
-      // if the node is assigned to a different EP we can run it if it's an ONNX op as we have CPU based
-      // implementations for all ONNX ops. If the node/op is from a different op domain or if the CPU implementation
-      // does not support the specific input type(s) required by the node (currently we only support a subset of
-      // types in some CPU kernels) then we can't proceed with constant folding for the node.
-      auto ep_type = node->GetExecutionProviderType();
-      bool cpu_ep = ep_type == kCpuExecutionProvider;
-      if (!cpu_ep && node->Domain() != kOnnxDomain) {
-        continue;
-      }
-
       // Check if constant folding can be applied on this node.
       const auto can_constant_fold_node = [&](const Node& n, bool skip_inputs_constant_check = false) {
         return graph_utils::IsSupportedProvider(n, GetCompatibleExecutionProviders()) &&
@@ -196,18 +185,26 @@ Status ConstantFolding::ApplyImpl(Graph& graph, bool& modified, int graph_level,
         fetch_mlvalue_idxs.push_back(info.GetMLValueIndex(node_out->Name()));
       }

+      auto& ep_type = node->GetExecutionProviderType();
+      const bool node_on_cpu_ep = ep_type == kCpuExecutionProvider;
+
       // override the EP assigned to the node so that it will use the CPU kernel for Compute.
-      if (!cpu_ep) {
+      if (!node_on_cpu_ep) {
         node->SetExecutionProviderType(kCpuExecutionProvider);
       }

       auto kernel = info.CreateKernel(node);

       // undo the EP change to the value that was assigned at graph partitioning time
-      if (!cpu_ep) {
+      if (!node_on_cpu_ep) {
         node->SetExecutionProviderType(ep_type);
       }

+      // We currently constant fold using the CPU EP only.
+      // If we can't find a CPU kernel for this node, then we can't proceed with constant folding.
+      //
+      // TODO(adrianlizarraga): Support constant folding with other execution providers. For example, we may be able
+      //                        to use a CUDA kernel to constant fold operators with data types not supported by the CPU EP kernel.
       if (kernel == nullptr) {
         LOGS(logger, WARNING) << "Could not find a CPU kernel and hence "
                               << "can't constant fold " << node->OpType() << " node '" << node->Name() << "'";

diff --git a/onnxruntime/core/optimizer/double_qdq_pairs_remover.cc b/onnxruntime/core/optimizer/double_qdq_pairs_remover.cc
index 4e4bc7957f186..b67f6d6ec0794 100644
--- a/onnxruntime/core/optimizer/double_qdq_pairs_remover.cc
+++ b/onnxruntime/core/optimizer/double_qdq_pairs_remover.cc
@@ -51,7 +51,7 @@ bool DoubleQDQPairsRemover::IsNodeRemovable(
   }

   // Type is either "tensor(uint8)" or "tensor(int8)"
-  const auto self_zp_type = *self->InputDefs()[InputIndex::ZERO_POINT_ID]->Type();
+  const auto& self_zp_type = *self->InputDefs()[InputIndex::ZERO_POINT_ID]->Type();
   // child should be a Q, and have only one child, have the same type as self, and cannot be a graph output
   child_index = self->OutputEdgesBegin()->GetNode().Index();
   const Node* child = graph.GetNode(child_index);

diff --git a/onnxruntime/core/optimizer/layout_transformation/layout_transformation.cc b/onnxruntime/core/optimizer/layout_transformation/layout_transformation.cc
index 1ff0872dd5327..2d12c407e6e31 100644
--- a/onnxruntime/core/optimizer/layout_transformation/layout_transformation.cc
+++ b/onnxruntime/core/optimizer/layout_transformation/layout_transformation.cc
@@ -150,7 +150,7 @@ Status TransformLayoutForEP(Graph& graph, bool& modified, const IExecutionProvid
   const auto max_node_idx = graph.MaxNodeIndex();
   OptimizeResult result = onnx_transpose_optimization::Optimize(*api_graph, execution_provider.Type(),
-                                                                PostLayoutTransformCostCheck);
+                                                                PostLayoutTransformCostCheck, OrtExtendedHandlers());
   if (result.error_msg) {
     return ORT_MAKE_STATUS(ONNXRUNTIME, FAIL, "Layout/Transpose optimization for ", execution_provider.Type(),

diff --git a/onnxruntime/core/optimizer/qdq_transformer/clip_quantizelinear.cc b/onnxruntime/core/optimizer/qdq_transformer/clip_quantizelinear.cc
index 814d297dbcf04..a0942c31b0161 100644
---
a/onnxruntime/core/optimizer/qdq_transformer/clip_quantizelinear.cc
+++ b/onnxruntime/core/optimizer/qdq_transformer/clip_quantizelinear.cc
@@ -3,6 +3,7 @@
 #include "core/optimizer/initializer.h"
 #include "core/optimizer/qdq_transformer/clip_quantizelinear.h"
+#include "core/optimizer/qdq_transformer/qdq_util.h"
 #include "core/optimizer/utils.h"
 #include "core/graph/graph_utils.h"
@@ -73,7 +74,7 @@ bool ClipQuantFusion::SatisfyCondition(const Graph& graph, const Node& node, con
   // if Clip is followed by QuantizeLinear, it can be fused into QuantizeLinear potentially
   const auto& next_node = *node.OutputNodesBegin();
-  if (!graph_utils::IsSupportedOptypeVersionAndDomain(next_node, "QuantizeLinear", {10, 13, 19})) {
+  if (!QDQ::MatchQNode(next_node)) {
     return false;
   }

diff --git a/onnxruntime/core/optimizer/qdq_transformer/ensure_unique_dq_for_node_unit.cc b/onnxruntime/core/optimizer/qdq_transformer/ensure_unique_dq_for_node_unit.cc
index e50efd8aa199c..cc0f7854791d4 100644
--- a/onnxruntime/core/optimizer/qdq_transformer/ensure_unique_dq_for_node_unit.cc
+++ b/onnxruntime/core/optimizer/qdq_transformer/ensure_unique_dq_for_node_unit.cc
@@ -52,7 +52,9 @@ Status DuplicateDQForOutputEdge(const graph_utils::GraphEdge& original_dq_output
                                  QDQ::DQOpName,
                                  MakeString("Added by ", kTransformerName),
                                  dq_inputs,
-                                 {&new_dq_output_nodearg});
+                                 {&new_dq_output_nodearg},
+                                 nullptr,  // attributes
+                                 original_dq_node.Domain());

   // set up edges
   // remove DQ -> Y

diff --git a/onnxruntime/core/optimizer/qdq_transformer/qdq_propagation.cc b/onnxruntime/core/optimizer/qdq_transformer/qdq_propagation.cc
index 1bcd14eb4ab10..f0e76312d6e00 100644
--- a/onnxruntime/core/optimizer/qdq_transformer/qdq_propagation.cc
+++ b/onnxruntime/core/optimizer/qdq_transformer/qdq_propagation.cc
@@ -31,7 +31,7 @@ bool CanNodePropagate(const Node& node) {
 // 2. scale_initializer_nodearg and zp_initializer_nodearg_ptr (if not null) are constant initializers
 Status InsertQDQPair(Graph& graph, const ExtendedGraphEdge& insertion_edge,
                      NodeArg& scale_initializer_nodearg, NodeArg* zp_initializer_nodearg_ptr,
-                     const logging::Logger& logger) {
+                     const std::string& qdq_domain, const logging::Logger& logger) {
   auto* src_node = insertion_edge.GetMutableNodeAtEnd(graph, ExtendedGraphEdge::End::Source);
   auto* dst_node = insertion_edge.GetMutableNodeAtEnd(graph, ExtendedGraphEdge::End::Destination);
@@ -75,7 +75,9 @@ Status InsertQDQPair(Graph& graph, const ExtendedGraphEdge& insertion_edge,
       make_q_or_dq_inputs(pre_q_nodearg, scale_initializer_nodearg, zp_initializer_nodearg_ptr),
       // outputs
-      {&q_to_dq_nodearg});
+      {&q_to_dq_nodearg},
+      nullptr,  // attributes
+      qdq_domain);

   ORT_RETURN_IF_NOT(graph.SetOpSchemaFromRegistryForNode(q_node), "Failed to set op schema for added Q node.");
@@ -86,7 +88,9 @@ Status InsertQDQPair(Graph& graph, const ExtendedGraphEdge& insertion_edge,
       make_q_or_dq_inputs(q_to_dq_nodearg, scale_initializer_nodearg, zp_initializer_nodearg_ptr),
       // outputs
-      {&post_dq_nodearg});
+      {&post_dq_nodearg},
+      nullptr,  // attributes
+      qdq_domain);

   ORT_RETURN_IF_NOT(graph.SetOpSchemaFromRegistryForNode(dq_node), "Failed to set op schema for added DQ node.");
@@ -237,7 +241,7 @@ Status PropagateDQForward(Graph& graph, gsl::span node_indices,
         break;
       }

-      ORT_RETURN_IF_ERROR(InsertQDQPair(graph, *curr_edge, dq_scale, dq_zero_point, logger));
+      ORT_RETURN_IF_ERROR(InsertQDQPair(graph, *curr_edge, dq_scale, dq_zero_point, dq_node.Domain(), logger));
       modified = true;
     }
   }
@@ -286,7 +290,7 @@ Status PropagateQBackward(Graph& graph, gsl::span node_indices,
         break;
       }

-      ORT_RETURN_IF_ERROR(InsertQDQPair(graph, *curr_edge, q_scale, q_zero_point, logger));
+      ORT_RETURN_IF_ERROR(InsertQDQPair(graph, *curr_edge, q_scale, q_zero_point, q_node.Domain(), logger));
       modified = true;
     }
   }

diff --git a/onnxruntime/core/optimizer/qdq_transformer/qdq_util.cc b/onnxruntime/core/optimizer/qdq_transformer/qdq_util.cc
index e72bd705ee6a5..221c06d7c8dcf 100644
--- a/onnxruntime/core/optimizer/qdq_transformer/qdq_util.cc
+++ b/onnxruntime/core/optimizer/qdq_transformer/qdq_util.cc
@@ -102,11 +102,13 @@ bool QOrDQNodeHasConstantScalarScaleAndZeroPoint(
 #if !defined(ORT_MINIMAL_BUILD) || defined(ORT_EXTENDED_MINIMAL_BUILD)

 bool MatchQNode(const Node& node) {
-  return graph_utils::IsSupportedOptypeVersionAndDomain(node, QOpName, {10, 13, 19});
+  return graph_utils::IsSupportedOptypeVersionAndDomain(node, QOpName, {10, 13, 19}) ||
+         graph_utils::IsSupportedOptypeVersionAndDomain(node, QOpName, {1}, kMSDomain);
 }

 bool MatchDQNode(const Node& node) {
-  return graph_utils::IsSupportedOptypeVersionAndDomain(node, DQOpName, {10, 13, 19});
+  return graph_utils::IsSupportedOptypeVersionAndDomain(node, DQOpName, {10, 13, 19}) ||
+         graph_utils::IsSupportedOptypeVersionAndDomain(node, DQOpName, {1}, kMSDomain);
 }

 #endif  // !defined(ORT_MINIMAL_BUILD) || defined(ORT_EXTENDED_MINIMAL_BUILD)

diff --git a/onnxruntime/core/optimizer/qdq_transformer/relu_quantizelinear.cc b/onnxruntime/core/optimizer/qdq_transformer/relu_quantizelinear.cc
index 3c41b3849d6d5..3a8f2db62302d 100644
--- a/onnxruntime/core/optimizer/qdq_transformer/relu_quantizelinear.cc
+++ b/onnxruntime/core/optimizer/qdq_transformer/relu_quantizelinear.cc
@@ -3,6 +3,7 @@
 #include "core/optimizer/initializer.h"
 #include "core/optimizer/qdq_transformer/relu_quantizelinear.h"
+#include "core/optimizer/qdq_transformer/qdq_util.h"
 #include "core/optimizer/utils.h"
 #include "core/graph/graph_utils.h"
@@ -18,7 +19,7 @@ bool ReluQuantFusion::SatisfyCondition(const Graph& graph, const Node& node, con
   // if Relu is followed by QuantizeLinear, it can be fused into QuantizeLinear potentially
   const auto& next_node = *node.OutputNodesBegin();
-  if (!graph_utils::IsSupportedOptypeVersionAndDomain(next_node, "QuantizeLinear", {10, 13, 19})) {
+  if (!QDQ::MatchQNode(next_node)) {
     return false;
   }

diff --git a/onnxruntime/core/optimizer/transpose_optimization/onnx_transpose_optimization.cc b/onnxruntime/core/optimizer/transpose_optimization/onnx_transpose_optimization.cc
index af4859fdbb041..a6fa5ce3581d0 100644
--- a/onnxruntime/core/optimizer/transpose_optimization/onnx_transpose_optimization.cc
+++ b/onnxruntime/core/optimizer/transpose_optimization/onnx_transpose_optimization.cc
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include

 #include "core/common/gsl.h"
 #include "core/common/make_string.h"
@@ -1172,29 +1173,37 @@ static bool HandleUnsqueeze(HandlerArgs& args) {

 constexpr HandlerInfo unsqueeze_handler = {&FirstInput, &HandleUnsqueeze};

-static bool HandleQuantizeDequantizeScale(const api::GraphRef& graph, const std::vector& perm,
-                                          api::NodeRef& node, int64_t opset) {
-  if (opset >= 13) {
-    size_t rank = perm.size();
-    // Update axis in Opset >= 13 if scale/zero_point are non-scalar
-    auto inputs = node.Inputs();
+bool TransposeQuantizeDequantizeAxis(const api::GraphRef& graph, const std::vector& perm, api::NodeRef& node) {
+  size_t rank = perm.size();
+
+  // Update axis if scale/zero_point are non-scalar
+  auto inputs = node.Inputs();

-    auto inp_shape = graph.GetValueInfo(inputs[1])->Shape();
-    bool scalar_params = inp_shape.has_value() && inp_shape->size() == 0;
+  auto inp_shape = graph.GetValueInfo(inputs[1])->Shape();
+  bool scalar_params = inp_shape.has_value() && inp_shape->size() == 0;

-    if (!scalar_params) {
-      int64_t axis = node.GetAttributeIntDefault("axis", 1);
-      if (!NormalizeAndValidateAxis(axis, rank)) {
-        return false;
-      }
-      node.SetAttributeInt("axis", perm[gsl::narrow_cast(axis)]);
+  if (!scalar_params) {
+    int64_t axis = node.GetAttributeIntDefault("axis", 1);
+    if (!NormalizeAndValidateAxis(axis, rank)) {
+      return false;
     }
+    node.SetAttributeInt("axis", perm[gsl::narrow_cast(axis)]);
   }

   return true;
 }

+constexpr bool HandleQuantizeDequantizeAxis(const api::GraphRef& graph, const std::vector& perm,
+                                            api::NodeRef& node, int64_t opset) {
+  if (opset < 13) {
+    // no `axis` value until opset 13
+    return true;
+  }
+
+  return TransposeQuantizeDequantizeAxis(graph, perm, node);
+}
+
 static bool HandleQuantizeDequantizeLinear(HandlerArgs& args) {
-  if (!HandleQuantizeDequantizeScale(args.ctx.graph, args.perm, args.node, args.ctx.opset)) {
+  if (!HandleQuantizeDequantizeAxis(args.ctx.graph, args.perm, args.node, args.ctx.opset)) {
     return false;
   }
@@ -1740,17 +1749,23 @@ static const std::unordered_map handler_ma
     {"Reshape", reshape_handler},
 };

+constexpr bool IsOnnxDomain(std::string_view domain) {
+  return (domain == onnxruntime::kOnnxDomain) || (domain == onnxruntime::kOnnxDomainAlias);
+}
+
+constexpr bool IsMSDomain(std::string_view domain) {
+  return domain == onnxruntime::kMSDomain;
+}
+
 static const HandlerInfo* GetHandler(api::NodeRef& node, const HandlerMap& extended_handlers) {
   std::string key;
   auto domain = node.Domain();
   auto op_type = node.OpType();

-  if (domain == onnxruntime::kOnnxDomain || domain == onnxruntime::kOnnxDomainAlias) {
+  if (IsOnnxDomain(domain)) {
     key = std::string(op_type);
-  } else if (domain == onnxruntime::kMSDomain) {
-    key = onnxruntime::MakeString(domain, ".", op_type);
   } else {
-    return nullptr;
+    key = onnxruntime::MakeString(domain, ".", op_type);
   }

   // extended map is higher priority
@@ -2045,7 +2060,14 @@ OptimizeResult OptimizeImpl(OptimizerCtx& ctx) {

       // we're moving the Transpose to before the DQ, so we need to use the inverse permutations to update the axis
       // attribute correctly when doing per-axis dequantization
-      if (!HandleQuantizeDequantizeScale(ctx.graph, InvertPerm(*perm), *dq_node, ctx.opset)) {
+      std::string_view dq_domain = dq_node->Domain();
+      std::vector perm_inv = InvertPerm(*perm);
+
+      if (IsOnnxDomain(dq_domain) && !HandleQuantizeDequantizeAxis(ctx.graph, perm_inv, *dq_node, ctx.opset)) {
+        continue;
+      }
+
+      if (IsMSDomain(dq_domain) && !TransposeQuantizeDequantizeAxis(ctx.graph, perm_inv, *dq_node)) {
         continue;
       }

diff --git a/onnxruntime/core/optimizer/transpose_optimization/onnx_transpose_optimization.h b/onnxruntime/core/optimizer/transpose_optimization/onnx_transpose_optimization.h
index 131ff6c6ef0c6..1a54e7834a4ae 100644
--- a/onnxruntime/core/optimizer/transpose_optimization/onnx_transpose_optimization.h
+++ b/onnxruntime/core/optimizer/transpose_optimization/onnx_transpose_optimization.h
@@ -3,6 +3,8 @@

 #pragma once

+#include
+
 // implementation details of the transpose optimizer API defined in optimizer_api.h.
 // This exposes some internals so they can be extended as needed.
 #include "optimizer_api.h"
@@ -106,4 +108,14 @@ std::vector ChannelFirstToLastPerm(size_t rank);
 /// Rank of the tensor
 /// perm attribute to transpose from channel last to channel first. Ex: [0, 3, 1, 2]
 std::vector ChannelLastToFirstPerm(size_t rank);
+
+///
+/// Updates the axis attribute of QuantizeLinear or DequantizeLinear operators according to the
+/// provided transposition permutation. Only applies to per-axis (de)quantization.
+///
+/// The graph containing the node
+/// The transpose permutation
+/// The QuantizeLinear or DequantizeLinear node
+/// True if the axis attribute remains valid
+bool TransposeQuantizeDequantizeAxis(const api::GraphRef& graph, const std::vector& perm, api::NodeRef& node);
 }  // namespace onnx_transpose_optimization

diff --git a/onnxruntime/core/optimizer/transpose_optimization/ort_transpose_optimization.cc b/onnxruntime/core/optimizer/transpose_optimization/ort_transpose_optimization.cc
index 8378d7b22e537..ead82a6b56741 100644
--- a/onnxruntime/core/optimizer/transpose_optimization/ort_transpose_optimization.cc
+++ b/onnxruntime/core/optimizer/transpose_optimization/ort_transpose_optimization.cc
@@ -57,6 +57,11 @@ static bool HandleQLinearPoolOp(HandlerArgs& args) {
 constexpr HandlerInfo q_linear_pool_op_handler = {&FirstInput, &HandleQLinearPoolOp};

 static bool HandleMaxPool(HandlerArgs& args) {
+#if defined(DISABLE_CONTRIB_OPS)
+  // Cannot convert MaxPool to com.microsoft.NhwcMaxPool if contrib ops are disabled in this build.
+  ORT_UNUSED_PARAMETER(args);
+  return false;
+#else
   if (args.node.GetExecutionProviderType() != "CPUExecutionProvider") {
     return false;
   }
@@ -78,22 +83,38 @@ static bool HandleMaxPool(HandlerArgs& args) {
     return false;
   }

-  auto new_node = SwapNodeOpTypeDomainAndSinceVersion(args.ctx.graph, args.node, "NhwcMaxPool", "com.microsoft", 1);
+  auto new_node = SwapNodeOpTypeDomainAndSinceVersion(args.ctx.graph, args.node, "NhwcMaxPool", kMSDomain, 1);
   new_node->ClearAttribute("storage_order");  // Only relevant for indices output. Prohibited for NhwcMaxPool.

   TransposeFirstInput(args.ctx, *new_node, args.perm_inv);
   TransposeOutputs(args.ctx, *new_node, args.perm);
+
+  return true;
+#endif  // defined(DISABLE_CONTRIB_OPS)
+}
+
+static bool HandleContribQuantizeDequantizeLinear(HandlerArgs& args) {
+  if (!TransposeQuantizeDequantizeAxis(args.ctx.graph, args.perm, args.node)) {
+    return false;
+  }
+
+  TransposeFirstInput(args.ctx, args.node, args.perm_inv);
+  TransposeOutputs(args.ctx, args.node, args.perm);
+
+  return true;
 }

 constexpr HandlerInfo max_pool_op_handler = {&FirstInput, &HandleMaxPool};
 constexpr HandlerInfo node_1_inp_handler = {&FirstInput, &HandleSimpleNode};
 constexpr HandlerInfo reduce_op_handler = {&FirstInput, &HandleReduceOps};
+constexpr HandlerInfo contrib_quantize_dequantize_linear_handler = {&FirstInput,
+                                                                    &HandleContribQuantizeDequantizeLinear};

 // ORT contrib ops and special cased ONNX ops where we have EP specific handling
 const HandlerMap& OrtExtendedHandlers() {
   static const HandlerMap extended_handler_map = []() {
     HandlerMap map = {
         {"MaxPool", max_pool_op_handler},
+        {"com.microsoft.QuantizeLinear", contrib_quantize_dequantize_linear_handler},
+        {"com.microsoft.DequantizeLinear", contrib_quantize_dequantize_linear_handler},
         {"com.microsoft.QLinearAdd", q_linear_binary_op_handler},
         {"com.microsoft.QLinearAveragePool", q_linear_pool_op_handler},
         {"com.microsoft.QLinearConcat", q_linear_concat_handler},

diff --git a/onnxruntime/core/optimizer/transpose_optimizer.cc b/onnxruntime/core/optimizer/transpose_optimizer.cc
index 5f17dc14657dd..33e3f5eeaf0fa 100644
--- a/onnxruntime/core/optimizer/transpose_optimizer.cc
+++ b/onnxruntime/core/optimizer/transpose_optimizer.cc
@@ -20,7 +20,8 @@ Status TransposeOptimizer::ApplyImpl(Graph& graph, bool& modified, int graph_lev
                                      const logging::Logger& logger) const {
   auto api_graph = MakeApiGraph(graph, cpu_allocator_, /*new_node_ep*/ nullptr);

-  OptimizeResult result = onnx_transpose_optimization::Optimize(*api_graph, "", /* default cost check*/ nullptr);
+  OptimizeResult result = onnx_transpose_optimization::Optimize(*api_graph, "", /* default cost check*/ nullptr,
+                                                                OrtExtendedHandlers());

   if (result.error_msg) {
     // currently onnx_layout_transformation::Optimize only fails if we hit an unsupported opset.

diff --git a/onnxruntime/core/optimizer/utils.cc b/onnxruntime/core/optimizer/utils.cc
index fd562f90b4310..7c3599a08ec7a 100644
--- a/onnxruntime/core/optimizer/utils.cc
+++ b/onnxruntime/core/optimizer/utils.cc
@@ -274,8 +274,15 @@ int32_t IndexOfNodeOutput(const Node& node, const NodeArg& node_arg) {
 constexpr std::array kOnnxDomainNonDeterministicOps{"RandomUniform", "RandomNormal", "RandomUniformLike",
                                                     "RandomNormalLike", "Multinomial"};

+// List of deterministic MS domain operators. Currently used for constant folding and common subexpression elimination.
+//
+// TODO(adrianlizarraga): Investigate converting to lists of *non-deterministic* MS domain operators to be consistent
+//                        with the above ONNX list. With the current approach, only MS domain Q/DQ operators
+//                        (plus ShrunkenGather for training) are considered deterministic.
 #ifdef ENABLE_TRAINING_OPS
-constexpr std::array kMSDomainDeterministicOps{"ShrunkenGather"};
+constexpr std::array kMSDomainDeterministicOps{"ShrunkenGather", "QuantizeLinear", "DequantizeLinear"};
+#else
+constexpr std::array kMSDomainDeterministicOps{"QuantizeLinear", "DequantizeLinear"};
 #endif

 bool IsOperationDeterministic(const std::string& domain, const std::string& op) {
@@ -283,12 +290,12 @@ bool IsOperationDeterministic(const std::string& domain, const std::string& op)
     auto iter = std::find(kOnnxDomainNonDeterministicOps.begin(), kOnnxDomainNonDeterministicOps.end(), op);
     return iter == kOnnxDomainNonDeterministicOps.end();
   }
-#ifdef ENABLE_TRAINING_OPS
+
   if (domain.compare(kMSDomain) == 0) {
     auto iter = std::find(kMSDomainDeterministicOps.begin(), kMSDomainDeterministicOps.end(), op);
     return iter != kMSDomainDeterministicOps.end();
   }
-#endif
+
   // Unknown domain.
Assume the op is not deterministic. return false; } diff --git a/onnxruntime/test/contrib_ops/quantize_ops_test.cc b/onnxruntime/test/contrib_ops/quantize_ops_test.cc index c761d9a6a0001..af29f972a64cf 100644 --- a/onnxruntime/test/contrib_ops/quantize_ops_test.cc +++ b/onnxruntime/test/contrib_ops/quantize_ops_test.cc @@ -40,6 +40,17 @@ TEST(DequantizeLinearOpTest, DequantizeLinear_per_tensor_float_int8) { test.Run(OpTester::ExpectResult::kExpectSuccess, "", {kTensorrtExecutionProvider}); } +// Scalar zero & scale with int32 +TEST(DequantizeLinearOpTest, DequantizeLinear_per_tensor_float_int32_cpu) { + OpTester test("DequantizeLinear", 1, onnxruntime::kMSDomain); + std::vector<int64_t> dims{4}; + test.AddInput<int32_t>("x", dims, {-300, -30, -1025, 1270}); + test.AddInput<float>("scale", {}, {2.0f}, true); + test.AddInput<int32_t>("zero_point", {}, {0}, true); + test.AddOutput<float>("y", dims, {-600.0f, -60.0f, -2050.0f, 2540.0f}); + test.Run(); +} + #ifdef USE_CUDA TEST(DequantizeLinearOpTest, DequantizeLinear_per_tensor_half_uint8) { OpTester test("DequantizeLinear", 1, onnxruntime::kMSDomain); diff --git a/onnxruntime/test/optimizer/ensure_unique_dq_for_node_unit_test.cc b/onnxruntime/test/optimizer/ensure_unique_dq_for_node_unit_test.cc index 86fae32c12d0c..d0ce4898a472c 100644 --- a/onnxruntime/test/optimizer/ensure_unique_dq_for_node_unit_test.cc +++ b/onnxruntime/test/optimizer/ensure_unique_dq_for_node_unit_test.cc @@ -20,15 +20,15 @@ struct GraphConfig { bool has_subgraph_consumer{false}; }; -auto GetGraphBuilder(GraphConfig config) { - return [=](ModelTestBuilder& builder) { +auto GetGraphBuilder(const GraphConfig& config, bool use_ms_domain_qdq_ops) { + return [config, use_ms_domain_qdq_ops](ModelTestBuilder& builder) { const auto input_shape = std::vector<int64_t>{1, 2, 4}; constexpr float scale = 0.5f; constexpr uint8_t zero_point = 0; auto* dq_input = builder.MakeInput<uint8_t>(input_shape, uint8_t{0}, uint8_t{255}); auto* dq_output = config.has_graph_output ? 
builder.MakeOutput() : builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(dq_input, scale, zero_point, dq_output); + builder.AddDequantizeLinearNode(dq_input, scale, zero_point, dq_output, use_ms_domain_qdq_ops); for (size_t i = 0; i < config.num_explicit_consumer_nodes; ++i) { // use Concat for the explicit consumer node as it supports a variadic number of inputs @@ -70,48 +70,57 @@ auto GetGraphBuilder(GraphConfig config) { }; } -void RunEnsureUniqueDQForNodeUnitTest(std::function<void(ModelTestBuilder&)> graph_builder_fn, - int expected_dq_count) { - constexpr int opset_version = 12; - - { - SCOPED_TRACE("test with standalone transformer"); - - auto post_transform_check_fn = [expected_dq_count](const Graph& graph) { - const auto op_counts = CountOpsInGraph(graph); - const auto actual_dq_count = OpCount(op_counts, "DequantizeLinear"); - ORT_RETURN_IF_NOT(actual_dq_count == expected_dq_count, - "Expected DQ count: ", expected_dq_count, ", actual: ", actual_dq_count); - return Status::OK(); - }; - - EXPECT_STATUS_OK(TestGraphTransformer( - graph_builder_fn, - opset_version, - DefaultLoggingManager().DefaultLogger(), - std::make_unique<EnsureUniqueDQForNodeUnit>(), - TransformerLevel::Level1, - 5, - {}, - post_transform_check_fn)); - } - - { - SCOPED_TRACE("test with basic transformers"); - - auto post_transform_check_fn = [expected_dq_count](const InferenceSessionWrapper& session) { - const auto& graph = session.GetGraph(); - const auto op_counts = CountOpsInGraph(graph); - ASSERT_EQ(OpCount(op_counts, "DequantizeLinear"), expected_dq_count); - }; - - TransformerTester( - graph_builder_fn, - post_transform_check_fn, - TransformerLevel::Default, - TransformerLevel::Level1, - opset_version); - } +void RunEnsureUniqueDQForNodeUnitTest(const GraphConfig& config, int expected_dq_count) { + auto run_tests = [config, expected_dq_count](bool use_ms_domain_qdq_ops) { + constexpr int opset_version = 12; + const char* dequantize_linear_key = use_ms_domain_qdq_ops ? 
"com.microsoft.DequantizeLinear" : "DequantizeLinear"; + std::function<void(ModelTestBuilder&)> graph_builder_fn = GetGraphBuilder(config, use_ms_domain_qdq_ops); + + { + SCOPED_TRACE("test with standalone transformer"); + + auto post_transform_check_fn = [expected_dq_count, dequantize_linear_key](const Graph& graph) { + const auto op_counts = CountOpsInGraph(graph); + const auto actual_dq_count = OpCount(op_counts, dequantize_linear_key); + ORT_RETURN_IF_NOT(actual_dq_count == expected_dq_count, + "Expected DQ count: ", expected_dq_count, ", actual: ", actual_dq_count); + return Status::OK(); + }; + + EXPECT_STATUS_OK(TestGraphTransformer( + graph_builder_fn, + opset_version, + DefaultLoggingManager().DefaultLogger(), + std::make_unique<EnsureUniqueDQForNodeUnit>(), + TransformerLevel::Level1, + 5, + {}, + post_transform_check_fn)); + } + + { + SCOPED_TRACE("test with basic transformers"); + + auto post_transform_check_fn = [expected_dq_count, + dequantize_linear_key](const InferenceSessionWrapper& session) { + const auto& graph = session.GetGraph(); + const auto op_counts = CountOpsInGraph(graph); + ASSERT_EQ(OpCount(op_counts, dequantize_linear_key), expected_dq_count); + }; + + TransformerTester( + graph_builder_fn, + post_transform_check_fn, + TransformerLevel::Default, + TransformerLevel::Level1, + opset_version); + } + }; + + run_tests(false); +#if !defined(DISABLE_CONTRIB_OPS) + run_tests(true); // Use contrib QDQ ops. 
+#endif } } // namespace @@ -122,7 +131,7 @@ TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodes) { config.num_inputs_per_explicit_consumer_node = 1; // expected count = one for each explicit consumer node (3), reusing the original one = 3 - RunEnsureUniqueDQForNodeUnitTest(GetGraphBuilder(config), 3); + RunEnsureUniqueDQForNodeUnitTest(config, 3); } TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodesWithGraphOutput) { @@ -132,7 +141,7 @@ TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodesWithGraphOutput) { config.has_graph_output = true; // expected count = preserved original (1) + one for each explicit consumer node (3) = 4 - RunEnsureUniqueDQForNodeUnitTest(GetGraphBuilder(config), 4); + RunEnsureUniqueDQForNodeUnitTest(config, 4); } TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodesWithSubgraphConsumer) { @@ -142,7 +151,7 @@ TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodesWithSubgraphConsumer) { config.has_subgraph_consumer = true; // expected count = preserved original (1) + one for each explicit consumer node (3) = 4 - RunEnsureUniqueDQForNodeUnitTest(GetGraphBuilder(config), 4); + RunEnsureUniqueDQForNodeUnitTest(config, 4); } TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodesWithSubgraphConsumerAndGraphOutput) { @@ -153,7 +162,7 @@ TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodesWithSubgraphConsumerAndGr config.has_subgraph_consumer = true; // expected count = preserved original (1) + one for each explicit consumer node (3) = 4 - RunEnsureUniqueDQForNodeUnitTest(GetGraphBuilder(config), 4); + RunEnsureUniqueDQForNodeUnitTest(config, 4); } TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodeInputs) { @@ -162,7 +171,7 @@ TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodeInputs) { config.num_inputs_per_explicit_consumer_node = 5; // expected count = one for each explicit consumer node input (2 * 5), reusing the original one = 10 - RunEnsureUniqueDQForNodeUnitTest(GetGraphBuilder(config), 10); + 
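The `com.microsoft.DequantizeLinear` int32 test added earlier exercises the standard per-tensor dequantization formula y = (x - zero_point) * scale, with int32 newly accepted as the quantized type. A minimal standalone sketch (`DequantizeInt32` is an illustrative name, not an ORT API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal sketch of per-tensor DequantizeLinear: y = (x - zero_point) * scale.
// int32 input is the case newly enabled for com.microsoft.DequantizeLinear
// (typically used to dequantize int32 bias values).
std::vector<float> DequantizeInt32(const std::vector<int32_t>& x, float scale, int32_t zero_point) {
  std::vector<float> y;
  y.reserve(x.size());
  for (int32_t v : x) {
    y.push_back(static_cast<float>(v - zero_point) * scale);
  }
  return y;
}
```

With scale 2.0 and zero point 0 this reproduces the expected outputs in the test: {-300, -30, -1025, 1270} dequantizes to {-600, -60, -2050, 2540}.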
RunEnsureUniqueDQForNodeUnitTest(config, 10); } TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodeInputsWithGraphOutput) { @@ -172,7 +181,7 @@ TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodeInputsWithGraphOutput) { config.has_graph_output = true; // expected count = preserved original (1) + one for each explicit consumer node input (2 * 5) = 11 - RunEnsureUniqueDQForNodeUnitTest(GetGraphBuilder(config), 11); + RunEnsureUniqueDQForNodeUnitTest(config, 11); } TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodeInputsWithSubgraphConsumer) { @@ -182,7 +191,7 @@ TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodeInputsWithSubgraphConsumer config.has_subgraph_consumer = true; // expected count = preserved original (1) + one for each explicit consumer node input (2 * 5) = 11 - RunEnsureUniqueDQForNodeUnitTest(GetGraphBuilder(config), 11); + RunEnsureUniqueDQForNodeUnitTest(config, 11); } TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodeInputsWithSubgraphConsumerAndGraphOutput) { @@ -193,7 +202,7 @@ TEST(EnsureUniqueDQForNodeUnitTests, DQSharedAmongNodeInputsWithSubgraphConsumer config.has_subgraph_consumer = true; // expected count = preserved original (1) + one for each explicit consumer node input (2 * 5) = 11 - RunEnsureUniqueDQForNodeUnitTest(GetGraphBuilder(config), 11); + RunEnsureUniqueDQForNodeUnitTest(config, 11); } TEST(EnsureUniqueDQForNodeUnitTests, QDQWithMultiConsumerDQNodes) { diff --git a/onnxruntime/test/optimizer/graph_transform_test.cc b/onnxruntime/test/optimizer/graph_transform_test.cc index e3b8900acf0d8..8e1511bcaafeb 100755 --- a/onnxruntime/test/optimizer/graph_transform_test.cc +++ b/onnxruntime/test/optimizer/graph_transform_test.cc @@ -173,6 +173,27 @@ TEST_F(GraphTransformationTests, DequantizeLinearNodeNotEliminated) { ASSERT_EQ(op_to_count["DequantizeLinear"], 25); } +#if !defined(DISABLE_CONTRIB_OPS) +// Test that com.microsoft.DequantizeLinear is not eliminated in CommonSubexpressionElimination 
+TEST_F(GraphTransformationTests, MsDomainDequantizeLinearNodeNotEliminated) { + constexpr const ORTCHAR_T* model_uri = MODEL_FOLDER "qdq_with_multi_consumer_dq_nodes.fixed.qdq_contrib.onnx"; + std::shared_ptr<Model> model; + ASSERT_STATUS_OK(Model::Load(model_uri, model, nullptr, *logger_)); + Graph& graph = model->MainGraph(); + std::map<std::string, int> op_to_count = CountOpsInGraph(graph); + ASSERT_EQ(op_to_count["com.microsoft.DequantizeLinear"], 25); + + onnxruntime::GraphTransformerManager graph_transformation_mgr{5}; + ASSERT_STATUS_OK(graph_transformation_mgr.Register(std::make_unique<CommonSubexpressionElimination>(), + TransformerLevel::Level1)); + ASSERT_STATUS_OK(graph_transformation_mgr.ApplyTransformers(graph, TransformerLevel::Level1, *logger_)); + + // CommonSubexpressionElimination should skip the DequantizeLinear nodes + op_to_count = CountOpsInGraph(graph); + ASSERT_EQ(op_to_count["com.microsoft.DequantizeLinear"], 25); +} +#endif // !defined(DISABLE_CONTRIB_OPS) + TEST_F(GraphTransformationTests, IdentityInputIsGraphOutputNotEliminated) { constexpr const ORTCHAR_T* model_uri = MODEL_FOLDER "scan9_sum.onnx"; std::shared_ptr<Model> model; @@ -813,6 +834,37 @@ TEST_F(GraphTransformationTests, ConstantFoldingWithDequantizeLinear) { VerifyConstantFoldingWithDequantizeLinear(expected_op_counts, graph, session_options, *logger_); } +#if !defined(DISABLE_CONTRIB_OPS) +// Test constant folding with a com.microsoft.DequantizeLinear node +TEST_F(GraphTransformationTests, ConstantFoldingWithMsDomainDequantizeLinear) { + constexpr const ORTCHAR_T* model_uri = MODEL_FOLDER "fusion/constant_folding_dequantizelinear.qdq_contrib.onnx"; + std::shared_ptr<Model> model; + ASSERT_STATUS_OK(Model::Load(model_uri, model, nullptr, *logger_)); + Graph& graph = model->MainGraph(); + std::map<std::string, int> op_to_count = CountOpsInGraph(graph); + ASSERT_EQ(op_to_count["com.microsoft.QuantizeLinear"], 1); + ASSERT_EQ(op_to_count["com.microsoft.DequantizeLinear"], 3); + ASSERT_EQ(op_to_count["Conv"], 1); + + std::unordered_map<std::string, int> expected_op_counts = 
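These assertions rely on the op-count key convention used throughout the updated tests: contrib ops are counted under a domain-qualified name such as "com.microsoft.DequantizeLinear", while ONNX-domain ops (empty domain string) keep the bare op type. The convention can be sketched as follows (`OpCountKey` is an illustrative helper, not the ORT function):

```cpp
#include <cassert>
#include <string>

// Sketch of the op-count key convention: contrib ops are keyed as
// "<domain>.<op_type>", ONNX-domain ops (empty domain) as the bare op type.
std::string OpCountKey(const std::string& domain, const std::string& op_type) {
  return domain.empty() ? op_type : domain + "." + op_type;
}
```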
{{"com.microsoft.QuantizeLinear", 1}, + {"com.microsoft.DequantizeLinear", 3}, + {"Conv", 1}}; + + SessionOptions session_options; + // Check DequantizeLinear aren't constant folded for default setting. + VerifyConstantFoldingWithDequantizeLinear(expected_op_counts, graph, session_options, *logger_); + + // set kOrtSessionOptionsDisableQuantQDQ to enable it explicitly + ASSERT_STATUS_OK(session_options.config_options.AddConfigEntry(kOrtSessionOptionsDisableQuantQDQ, "0")); + VerifyConstantFoldingWithDequantizeLinear(expected_op_counts, graph, session_options, *logger_); + + // set SessionOptionsEnableQuantQDQ to disable it + expected_op_counts["com.microsoft.DequantizeLinear"] = 1; + ASSERT_STATUS_OK(session_options.config_options.AddConfigEntry(kOrtSessionOptionsDisableQuantQDQ, "1")); + VerifyConstantFoldingWithDequantizeLinear(expected_op_counts, graph, session_options, *logger_); +} +#endif // !defined(DISABLE_CONTRIB_OPS) + // model with 2 QDQ node units that can be constant folded as they are simple DQ -> Node -> Q where DQ and Node have // single consumer and do not produce graph outputs. Node is deterministic. // there are also other DQ nodes that should be ignored. @@ -838,6 +890,33 @@ TEST_F(GraphTransformationTests, ConstantFoldingQDQNodeUnit) { VerifyConstantFoldingWithDequantizeLinear(expected_op_counts, graph, session_options, *logger_); } +#if !defined(DISABLE_CONTRIB_OPS) +// model with 2 (com.microsoft) QDQ node units that can be constant folded as they are simple DQ -> Node -> Q where +// DQ and Node have single consumer and do not produce graph outputs. Node is deterministic. +// there are also other DQ nodes that should be ignored. 
+TEST_F(GraphTransformationTests, ConstantFoldingMsDomainQDQNodeUnit) { + constexpr const ORTCHAR_T* model_uri = MODEL_FOLDER "fusion/constant_folding_qdq_node_unit.qdq_contrib.onnx"; + std::shared_ptr<Model> model; + ASSERT_STATUS_OK(Model::Load(model_uri, model, nullptr, *logger_)); + Graph& graph = model->MainGraph(); + std::map<std::string, int> op_to_count = CountOpsInGraph(graph); + ASSERT_EQ(op_to_count["com.microsoft.QuantizeLinear"], 3); + ASSERT_EQ(op_to_count["com.microsoft.DequantizeLinear"], 4); + ASSERT_EQ(op_to_count["Unsqueeze"], 1); + ASSERT_EQ(op_to_count["Transpose"], 1); + + SessionOptions session_options; + + // 2 QDQ node units should be constant folded and go away + std::unordered_map<std::string, int> expected_op_counts = {{"com.microsoft.QuantizeLinear", 1}, + {"com.microsoft.DequantizeLinear", 2}, + {"Transpose", 0}, + {"Unsqueeze", 0}}; + + VerifyConstantFoldingWithDequantizeLinear(expected_op_counts, graph, session_options, *logger_); +} +#endif // !defined(DISABLE_CONTRIB_OPS) + // Simple QDQ Node Unit but shouldn't be constant folded as the node in the middle produces a graph output TEST_F(GraphTransformationTests, ConstantFoldingQDQNodeUnitGraphOutput) { constexpr const ORTCHAR_T* model_uri = MODEL_FOLDER "fusion/constant_folding_qdq_node_unit.graph_output.onnx"; @@ -857,6 +936,29 @@ TEST_F(GraphTransformationTests, ConstantFoldingQDQNodeUnitGraphOutput) { VerifyConstantFoldingWithDequantizeLinear(expected_op_counts, graph, session_options, *logger_); } +#if !defined(DISABLE_CONTRIB_OPS) +// Simple (com.microsoft) QDQ Node Unit but shouldn't be constant folded as the node in the middle produces a +// graph output +TEST_F(GraphTransformationTests, ConstantFoldingMsDomainQDQNodeUnitGraphOutput) { + constexpr const ORTCHAR_T* model_uri = + MODEL_FOLDER "fusion/constant_folding_qdq_node_unit.graph_output.qdq_contrib.onnx"; + std::shared_ptr<Model> model; + ASSERT_STATUS_OK(Model::Load(model_uri, model, nullptr, *logger_)); + Graph& graph = model->MainGraph(); + std::map<std::string, int> op_to_count = 
CountOpsInGraph(graph); + ASSERT_EQ(op_to_count["com.microsoft.QuantizeLinear"], 2); + ASSERT_EQ(op_to_count["com.microsoft.DequantizeLinear"], 3); + ASSERT_EQ(op_to_count["Unsqueeze"], 1); + + std::unordered_map<std::string, int> expected_op_counts = {{"com.microsoft.QuantizeLinear", 2}, + {"com.microsoft.DequantizeLinear", 3}, + {"Unsqueeze", 1}}; + + SessionOptions session_options; + VerifyConstantFoldingWithDequantizeLinear(expected_op_counts, graph, session_options, *logger_); +} +#endif // !defined(DISABLE_CONTRIB_OPS) + TEST_F(GraphTransformationTests, ConstantFolding_RemoveDanglingInputNodesToConstantFoldedNode) { constexpr const ORTCHAR_T* model_uri = MODEL_FOLDER "fusion/constant_folding_remove_dangling_inputs.onnx"; std::shared_ptr<Model> model; diff --git a/onnxruntime/test/optimizer/graph_transform_test_builder.h b/onnxruntime/test/optimizer/graph_transform_test_builder.h index 361903c386dd5..743faee3ee2a5 100644 --- a/onnxruntime/test/optimizer/graph_transform_test_builder.h +++ b/onnxruntime/test/optimizer/graph_transform_test_builder.h @@ -276,23 +276,27 @@ class ModelTestBuilder { AddQuantizeLinearNode(NodeArg* input_arg, float input_scale, T input_zero_point, - NodeArg* output_arg) { + NodeArg* output_arg, + bool use_ms_domain = false) { std::vector<NodeArg*> input_args; input_args.push_back(input_arg); input_args.push_back(MakeScalarInitializer(input_scale)); input_args.push_back(MakeScalarInitializer(input_zero_point)); - return AddNode("QuantizeLinear", input_args, {output_arg}); + std::string domain = use_ms_domain ? kMSDomain : ""; + return AddNode("QuantizeLinear", input_args, {output_arg}, domain); } Node& AddQuantizeLinearNode(NodeArg* input_arg, float input_scale, - NodeArg* output_arg) { + NodeArg* output_arg, + bool use_ms_domain = false) { std::vector<NodeArg*> input_args; input_args.push_back(input_arg); input_args.push_back(MakeScalarInitializer(input_scale)); - return AddNode("QuantizeLinear", input_args, {output_arg}); + std::string domain = use_ms_domain ? 
kMSDomain : ""; + return AddNode("QuantizeLinear", input_args, {output_arg}, domain); } template @@ -300,23 +304,27 @@ class ModelTestBuilder { AddDequantizeLinearNode(NodeArg* input_arg, float input_scale, T input_zero_point, - NodeArg* output_arg) { + NodeArg* output_arg, + bool use_ms_domain = false) { std::vector input_args; input_args.push_back(input_arg); input_args.push_back(MakeScalarInitializer(input_scale)); input_args.push_back(MakeScalarInitializer(input_zero_point)); - return AddNode("DequantizeLinear", input_args, {output_arg}); + std::string domain = use_ms_domain ? kMSDomain : ""; + return AddNode("DequantizeLinear", input_args, {output_arg}, domain); } Node& AddDequantizeLinearNode(NodeArg* input_arg, float input_scale, - NodeArg* output_arg) { + NodeArg* output_arg, + bool use_ms_domain = false) { std::vector input_args; input_args.push_back(input_arg); input_args.push_back(MakeScalarInitializer(input_scale)); - return AddNode("DequantizeLinear", input_args, {output_arg}); + std::string domain = use_ms_domain ? 
kMSDomain : ""; + return AddNode("DequantizeLinear", input_args, {output_arg}, domain); } template diff --git a/onnxruntime/test/optimizer/qdq_test_utils.cc b/onnxruntime/test/optimizer/qdq_test_utils.cc index 88118c177980d..24cace43c6967 100644 --- a/onnxruntime/test/optimizer/qdq_test_utils.cc +++ b/onnxruntime/test/optimizer/qdq_test_utils.cc @@ -3,6 +3,7 @@ #include "qdq_test_utils.h" #include +#include #include "core/common/common.h" namespace onnxruntime { @@ -34,10 +35,10 @@ GetQDQTestCaseFn BuildQDQConcatTestCase(const std::vector>& int64_t axis, bool has_input_float, bool has_input_int8, - bool has_output_int8) { - return [input_shapes, axis, - has_input_float, has_input_int8, has_output_int8]( - ModelTestBuilder& builder) { + bool has_output_int8, + bool use_contrib_qdq) { + return [input_shapes, axis, has_input_float, has_input_int8, + has_output_int8, use_contrib_qdq](ModelTestBuilder& builder) { auto input_count = input_shapes.size(); std::vector input_args; std::vector q_input_args; @@ -46,9 +47,9 @@ GetQDQTestCaseFn BuildQDQConcatTestCase(const std::vector>& if (i == 0 && has_input_float) { q_input_args.push_back(input_args.back()); } else if (i == 0 && has_input_int8) { - q_input_args.push_back(AddQDQNodePair(builder, input_args.back(), 0.05f, 1)); + q_input_args.push_back(AddQDQNodePair(builder, input_args.back(), 0.05f, 1, use_contrib_qdq)); } else { - q_input_args.push_back(AddQDQNodePair(builder, input_args.back(), 0.05f, 128)); + q_input_args.push_back(AddQDQNodePair(builder, input_args.back(), 0.05f, 128, use_contrib_qdq)); } } auto* concat_output = builder.MakeIntermediate(); @@ -57,15 +58,15 @@ GetQDQTestCaseFn BuildQDQConcatTestCase(const std::vector>& auto* q_concat_output = builder.MakeIntermediate(); if (has_output_int8) { - builder.AddQuantizeLinearNode(concat_output, 0.05f, 1, q_concat_output); + builder.AddQuantizeLinearNode(concat_output, 0.05f, 1, q_concat_output, use_contrib_qdq); auto* output_arg = builder.MakeOutput(); - 
builder.AddDequantizeLinearNode(q_concat_output, 0.05f, 1, output_arg); + builder.AddDequantizeLinearNode(q_concat_output, 0.05f, 1, output_arg, use_contrib_qdq); } else { - builder.AddQuantizeLinearNode(concat_output, 0.05f, 128, q_concat_output); + builder.AddQuantizeLinearNode(concat_output, 0.05f, 128, q_concat_output, use_contrib_qdq); auto* output_arg = builder.MakeOutput(); - builder.AddDequantizeLinearNode(q_concat_output, 0.05f, 128, output_arg); + builder.AddDequantizeLinearNode(q_concat_output, 0.05f, 128, output_arg, use_contrib_qdq); } }; } @@ -143,12 +144,22 @@ GetQDQTestCaseFn BuildQDQMatMulTestCase(const std::vector<int64_t>& input1_shape }; } -std::vector<std::string> GetNodeOpTypesInTopologicalOrder(const Graph& graph) { +std::vector<std::string> GetNodeOpTypesInTopologicalOrder(const Graph& graph, bool include_domain) { std::vector<std::string> op_types{}; GraphViewer graph_viewer{graph}; const auto& ordering = graph_viewer.GetNodesInTopologicalOrder(); for (const auto node_idx : ordering) { - op_types.push_back(graph.GetNode(node_idx)->OpType()); + const auto* node = graph.GetNode(node_idx); + std::string full_op_type; + + if (include_domain) { + const std::string& domain = node->Domain(); + full_op_type = domain.empty() ? node->OpType() : domain + "." 
+ node->OpType(); + } else { + full_op_type = node->OpType(); + } + + op_types.push_back(std::move(full_op_type)); } return op_types; } diff --git a/onnxruntime/test/optimizer/qdq_test_utils.h b/onnxruntime/test/optimizer/qdq_test_utils.h index 7f6865a89e6e6..2008d96539dca 100644 --- a/onnxruntime/test/optimizer/qdq_test_utils.h +++ b/onnxruntime/test/optimizer/qdq_test_utils.h @@ -21,21 +21,22 @@ using GetQDQTestCaseFn = std::function<void(ModelTestBuilder& builder)>; template <typename T> typename std::enable_if<std::is_integral<T>::value, NodeArg*>::type -AddQDQNodePair(ModelTestBuilder& builder, NodeArg* q_input, float scale, T zp = T()) { +AddQDQNodePair(ModelTestBuilder& builder, NodeArg* q_input, float scale, T zp = T(), bool use_ms_domain = false) { auto* q_output = builder.MakeIntermediate(); auto* dq_output = builder.MakeIntermediate(); - builder.AddQuantizeLinearNode(q_input, scale, zp, q_output); - builder.AddDequantizeLinearNode(q_output, scale, zp, dq_output); + builder.AddQuantizeLinearNode(q_input, scale, zp, q_output, use_ms_domain); + builder.AddDequantizeLinearNode(q_output, scale, zp, dq_output, use_ms_domain); return dq_output; } template <typename T> typename std::enable_if<std::is_integral<T>::value, NodeArg*>::type -AddQDQNodePairWithOutputAsGraphOutput(ModelTestBuilder& builder, NodeArg* q_input, float scale, T zp = T()) { +AddQDQNodePairWithOutputAsGraphOutput(ModelTestBuilder& builder, NodeArg* q_input, float scale, T zp = T(), + bool use_ms_domain = false) { auto* q_output = builder.MakeIntermediate(); auto* dq_output = builder.MakeOutput(); - builder.AddQuantizeLinearNode(q_input, scale, zp, q_output); - builder.AddDequantizeLinearNode(q_output, scale, zp, dq_output); + builder.AddQuantizeLinearNode(q_input, scale, zp, q_output, use_ms_domain); + builder.AddDequantizeLinearNode(q_output, scale, zp, dq_output, use_ms_domain); return dq_output; } @@ -92,8 +93,10 @@ GetQDQTestCaseFn BuildQDQConvTransposeTestCase(const std::vector<int64_t>& input } template -GetQDQTestCaseFn BuildQDQConvTestCase(const std::vector<int64_t>& input_shape, const std::vector<int64_t>& 
weights_shape) { - return [input_shape, weights_shape](ModelTestBuilder& builder) { +GetQDQTestCaseFn BuildQDQConvTestCase(const std::vector& input_shape, + const std::vector& weights_shape, + bool use_contrib_qdq = false) { + return [input_shape, weights_shape, use_contrib_qdq](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); @@ -119,34 +122,38 @@ GetQDQTestCaseFn BuildQDQConvTestCase(const std::vector& input_shape, c auto* weight = builder.MakeInitializer(weights_shape, weight_min_value, weight_max_value); builder.AddDequantizeLinearNode(weight, .03f, (weight_min_value + weight_max_value) / 2 + 1, - dq_w_output); + dq_w_output, + use_contrib_qdq); auto* dq_bias_output = builder.MakeIntermediate(); auto* bias = builder.MakeInitializer({weights_shape[0]}, static_cast(0), static_cast(127)); builder.AddDequantizeLinearNode(bias, .0012f, 0, - dq_bias_output); + dq_bias_output, + use_contrib_qdq); auto* conv_output = builder.MakeIntermediate(); auto* dq_output = AddQDQNodePair(builder, input_arg, .04f, - (input_min_value + input_max_value) / 2 + 1); + (input_min_value + input_max_value) / 2 + 1, use_contrib_qdq); builder.AddNode("Conv", {dq_output, dq_w_output, dq_bias_output}, {conv_output}); auto* q_output = builder.MakeIntermediate(); builder.AddQuantizeLinearNode(conv_output, .039f, (OutputLimits::min() + OutputLimits::max()) / 2 + 1, - q_output); + q_output, + use_contrib_qdq); builder.AddDequantizeLinearNode(q_output, .039f, (OutputLimits::min() + OutputLimits::max()) / 2 + 1, - output_arg); + output_arg, + use_contrib_qdq); }; } template GetQDQTestCaseFn BuildQDQAveragePoolTestCase(const std::vector& input_shape, - int64_t count_include_pad = 0) { - return [input_shape, count_include_pad](ModelTestBuilder& builder) { + int64_t count_include_pad = 0, bool use_contrib_qdq = false) { + return [input_shape, count_include_pad, use_contrib_qdq](ModelTestBuilder& builder) { #ifdef 
USE_NNAPI // NNAPI require consistent scales/ZPs for DQ -> Pool -> Q float dq_scale = 0.0038f; @@ -167,7 +174,7 @@ GetQDQTestCaseFn BuildQDQAveragePoolTestCase(const std::vector& input_s auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); // add QDQ + AveragePool - auto* dq_output = AddQDQNodePair(builder, input_arg, dq_scale, dq_zp); + auto* dq_output = AddQDQNodePair(builder, input_arg, dq_scale, dq_zp, use_contrib_qdq); auto* averagepool_output = builder.MakeIntermediate(); Node& pool_node = builder.AddNode("AveragePool", {dq_output}, {averagepool_output}); std::vector pads((input_shape.size() - 2) * 2, 1); @@ -183,11 +190,13 @@ GetQDQTestCaseFn BuildQDQAveragePoolTestCase(const std::vector& input_s builder.AddQuantizeLinearNode(averagepool_output, pool_output_scale, pool_output_zp, - q_output); + q_output, + use_contrib_qdq); builder.AddDequantizeLinearNode(q_output, q_scale, q_zp, - output_arg); + output_arg, + use_contrib_qdq); }; } @@ -227,8 +236,9 @@ GetQDQTestCaseFn BuildQDQMaxPoolTestCase(const std::vector& input_shape } template -GetQDQTestCaseFn BuildQDQGlobalAveragePoolTestCase(const std::vector& input_shape) { - return [input_shape](ModelTestBuilder& builder) { +GetQDQTestCaseFn BuildQDQGlobalAveragePoolTestCase(const std::vector& input_shape, + bool use_contrib_qdq = false) { + return [input_shape, use_contrib_qdq](ModelTestBuilder& builder) { float dq_scale = 0.0035f; float pool_output_scale = 0.0038f; float q_scale = 0.0039f; @@ -239,7 +249,7 @@ GetQDQTestCaseFn BuildQDQGlobalAveragePoolTestCase(const std::vector& i auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); // add QDQ + GlobalAveragePool - auto* dq_output = AddQDQNodePair(builder, input_arg, dq_scale, dq_zp); + auto* dq_output = AddQDQNodePair(builder, input_arg, dq_scale, dq_zp, use_contrib_qdq); auto* globalaveragepool_output = builder.MakeIntermediate(); 
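All of these test-case builders model the same Q -> DQ pair arithmetic: q = saturate(round(x / scale) + zero_point), then x' = (q - zero_point) * scale. A standalone uint8 sketch (not ORT code; ONNX QuantizeLinear rounds half to even, which `std::nearbyint` matches under the default rounding mode):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// Sketch of the Q -> DQ round trip the builders create:
// q = saturate(round(x / scale) + zero_point), x' = (q - zero_point) * scale.
uint8_t QuantizeU8(float x, float scale, uint8_t zero_point) {
  const int32_t q = static_cast<int32_t>(std::nearbyint(x / scale)) + zero_point;
  return static_cast<uint8_t>(std::clamp<int32_t>(q, 0, 255));  // saturate to uint8 range
}

float DequantizeU8(uint8_t q, float scale, uint8_t zero_point) {
  return static_cast<float>(static_cast<int32_t>(q) - zero_point) * scale;
}
```

With scale 0.05 and zero point 128 (the values used by BuildQDQConcatTestCase), 0.5f quantizes to 138 and dequantizes back to within scale/2 of the input, which is why these QDQ pairs are near no-ops that the transformers can fuse or drop.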
builder.AddNode("GlobalAveragePool", {dq_output}, {globalaveragepool_output}); @@ -248,11 +258,13 @@ GetQDQTestCaseFn BuildQDQGlobalAveragePoolTestCase(const std::vector& i builder.AddQuantizeLinearNode(globalaveragepool_output, pool_output_scale, pool_output_zp, - q_output); + q_output, + use_contrib_qdq); builder.AddDequantizeLinearNode(q_output, q_scale, q_zp, - output_arg); + output_arg, + use_contrib_qdq); }; } @@ -262,10 +274,11 @@ GetQDQTestCaseFn BuildQDQResizeTestCase(const std::vector& input_shape, const std::string& mode = "nearest", const std::string& coordinate_transformation_mode = "half_pixel", const std::string& nearest_mode = "round_prefer_floor", - bool add_dq_output_float = false) { + bool add_dq_output_float = false, + bool use_contrib_qdq = false) { static_assert(std::is_same_v || std::is_same_v); return [input_shape, sizes_data, mode, coordinate_transformation_mode, - nearest_mode, add_dq_output_float](ModelTestBuilder& builder) { + nearest_mode, add_dq_output_float, use_contrib_qdq](ModelTestBuilder& builder) { auto* input1_arg = builder.MakeInput(input_shape, std::numeric_limits::min(), std::numeric_limits::max()); @@ -276,7 +289,7 @@ GetQDQTestCaseFn BuildQDQResizeTestCase(const std::vector& input_shape, // add DQ auto* dq_output = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input1_arg, .003f, 1, dq_output); + builder.AddDequantizeLinearNode(input1_arg, .003f, 1, dq_output, use_contrib_qdq); // add Resize auto* resize_output = builder.MakeIntermediate(); @@ -292,21 +305,21 @@ GetQDQTestCaseFn BuildQDQResizeTestCase(const std::vector& input_shape, if (add_dq_output_float) { // add Q output_arg = builder.MakeIntermediate(); - builder.AddQuantizeLinearNode(resize_output, .003f, 1, output_arg); + builder.AddQuantizeLinearNode(resize_output, .003f, 1, output_arg, use_contrib_qdq); auto* f_dq_output = builder.MakeOutput(); - builder.AddDequantizeLinearNode(output_arg, .003f, 1, f_dq_output); + 
builder.AddDequantizeLinearNode(output_arg, .003f, 1, f_dq_output, use_contrib_qdq); } else { output_arg = builder.MakeOutput(); // add Q - builder.AddQuantizeLinearNode(resize_output, .003f, 1, output_arg); + builder.AddQuantizeLinearNode(resize_output, .003f, 1, output_arg, use_contrib_qdq); } }; } template GetQDQTestCaseFn BuildBinaryOpTestCase(const std::vector& input_shape, - const std::string& op_type) { - return [input_shape, op_type](ModelTestBuilder& builder) { + const std::string& op_type, bool use_contrib_qdq = false) { + return [input_shape, op_type, use_contrib_qdq](ModelTestBuilder& builder) { auto* input1_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* input2_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); @@ -329,11 +342,13 @@ GetQDQTestCaseFn BuildBinaryOpTestCase(const std::vector& input_shape, builder.AddQuantizeLinearNode(input1_arg, q_scale, std::numeric_limits::max() / 2, - q1_output); + q1_output, + use_contrib_qdq); builder.AddDequantizeLinearNode(q1_output, op_input_scale, std::numeric_limits::max() / 2, - dq1_output); + dq1_output, + use_contrib_qdq); // add QDQ 2 auto* q2_output = builder.MakeIntermediate(); @@ -341,11 +356,13 @@ GetQDQTestCaseFn BuildBinaryOpTestCase(const std::vector& input_shape, builder.AddQuantizeLinearNode(input2_arg, q_scale, std::numeric_limits::max() / 2, - q2_output); + q2_output, + use_contrib_qdq); builder.AddDequantizeLinearNode(q2_output, op_input_scale, std::numeric_limits::max() / 2, - dq2_output); + dq2_output, + use_contrib_qdq); // add binary operator auto* binary_op_output = builder.MakeIntermediate(); @@ -356,26 +373,29 @@ GetQDQTestCaseFn BuildBinaryOpTestCase(const std::vector& input_shape, builder.AddQuantizeLinearNode(binary_op_output, op_output_scale, std::numeric_limits::max() / 2, - q3_output); + q3_output, + use_contrib_qdq); builder.AddDequantizeLinearNode(q3_output, dq_scale, std::numeric_limits::max() / 2, - output_arg); + output_arg, + 
use_contrib_qdq); }; } template GetQDQTestCaseFn BuildConsolidationTestCase( const std::vector& input_shape, - const int64_t& axis) { - return [input_shape, axis](ModelTestBuilder& builder) { + const int64_t& axis, + bool use_contrib_qdq = false) { + return [input_shape, axis, use_contrib_qdq](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, std::numeric_limits::min(), std::numeric_limits::max()); InputType dq_zp = std::numeric_limits::max() / 2; OutputType q_zp = std::numeric_limits::max() / 2; auto* upper_dq_output = builder.MakeIntermediate(); auto* upper_q_output = builder.MakeIntermediate(); - builder.AddQuantizeLinearNode(input_arg, .003f, q_zp, upper_q_output); - builder.AddDequantizeLinearNode(upper_q_output, .003f, dq_zp, upper_dq_output); + builder.AddQuantizeLinearNode(input_arg, .003f, q_zp, upper_q_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(upper_q_output, .003f, dq_zp, upper_dq_output, use_contrib_qdq); // add Split @@ -392,21 +412,22 @@ GetQDQTestCaseFn BuildConsolidationTestCase( auto* lower_q_output_1 = builder.MakeIntermediate(); auto* lower_q_output_2 = builder.MakeIntermediate(); auto* lower_q_output_3 = builder.MakeIntermediate(); - builder.AddQuantizeLinearNode(split_output_1, .003f, q_zp, lower_q_output_1); - builder.AddQuantizeLinearNode(split_output_2, .003f, q_zp, lower_q_output_2); - builder.AddQuantizeLinearNode(split_output_3, .003f, q_zp, lower_q_output_3); + builder.AddQuantizeLinearNode(split_output_1, .003f, q_zp, lower_q_output_1, use_contrib_qdq); + builder.AddQuantizeLinearNode(split_output_2, .003f, q_zp, lower_q_output_2, use_contrib_qdq); + builder.AddQuantizeLinearNode(split_output_3, .003f, q_zp, lower_q_output_3, use_contrib_qdq); auto* q_split_output_1 = builder.MakeOutput(); auto* q_split_output_2 = builder.MakeOutput(); auto* q_split_output_3 = builder.MakeOutput(); - builder.AddDequantizeLinearNode(lower_q_output_1, .003f, dq_zp, q_split_output_1); - 
builder.AddDequantizeLinearNode(lower_q_output_2, .003f, dq_zp, q_split_output_2); - builder.AddDequantizeLinearNode(lower_q_output_3, .003f, dq_zp, q_split_output_3); + builder.AddDequantizeLinearNode(lower_q_output_1, .003f, dq_zp, q_split_output_1, use_contrib_qdq); + builder.AddDequantizeLinearNode(lower_q_output_2, .003f, dq_zp, q_split_output_2, use_contrib_qdq); + builder.AddDequantizeLinearNode(lower_q_output_3, .003f, dq_zp, q_split_output_3, use_contrib_qdq); }; } template GetQDQTestCaseFn BuildDoubleQDQTestCases(Type1 zp_1, Type2 zp_2, Type3 zp_3, Type4 zp_4, - float scale_1, float scale_2, float scale_3, float scale_4) { + float scale_1, float scale_2, float scale_3, float scale_4, + bool use_contrib_qdq = false) { return [=](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput( {11, 22, 33, 44}, @@ -416,15 +437,15 @@ GetQDQTestCaseFn BuildDoubleQDQTestCases(Type1 zp_1, Type2 zp_2, Type3 zp_3, Typ NodeArg* dq1_output = builder.MakeIntermediate(); NodeArg* q2_output = builder.MakeIntermediate(); NodeArg* dq2_output = builder.MakeOutput(); - builder.AddQuantizeLinearNode(input_arg, scale_1, zp_1, q1_output); - builder.AddDequantizeLinearNode(q1_output, scale_2, zp_2, dq1_output); - builder.AddQuantizeLinearNode(dq1_output, scale_3, zp_3, q2_output); - builder.AddDequantizeLinearNode(q2_output, scale_4, zp_4, dq2_output); + builder.AddQuantizeLinearNode(input_arg, scale_1, zp_1, q1_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(q1_output, scale_2, zp_2, dq1_output, use_contrib_qdq); + builder.AddQuantizeLinearNode(dq1_output, scale_3, zp_3, q2_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(q2_output, scale_4, zp_4, dq2_output, use_contrib_qdq); }; } template -GetQDQTestCaseFn BuildDoubleQDQWithoutLastOutput(int output_index) { +GetQDQTestCaseFn BuildDoubleQDQWithoutLastOutput(int output_index, bool use_contrib_qdq = false) { return [=](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput({2, 3, 4}, 
std::numeric_limits::min(), std::numeric_limits::max()); T zp = (std::numeric_limits::max() - std::numeric_limits::min()) / 2; @@ -437,18 +458,19 @@ GetQDQTestCaseFn BuildDoubleQDQWithoutLastOutput(int output_index) { outputs[i] = builder.MakeIntermediate(); } } - builder.AddQuantizeLinearNode(input_arg, scale, zp, outputs[0]); - builder.AddDequantizeLinearNode(outputs[0], scale, zp, outputs[1]); - builder.AddQuantizeLinearNode(outputs[1], scale, zp, outputs[2]); - builder.AddDequantizeLinearNode(outputs[2], scale, zp, outputs[3]); + builder.AddQuantizeLinearNode(input_arg, scale, zp, outputs[0], use_contrib_qdq); + builder.AddDequantizeLinearNode(outputs[0], scale, zp, outputs[1], use_contrib_qdq); + builder.AddQuantizeLinearNode(outputs[1], scale, zp, outputs[2], use_contrib_qdq); + builder.AddDequantizeLinearNode(outputs[2], scale, zp, outputs[3], use_contrib_qdq); }; } template GetQDQTestCaseFn BuildQDQSplitTestCase( const std::vector& input_shape, - const int64_t& axis) { - return [input_shape, axis](ModelTestBuilder& builder) { + const int64_t& axis, + bool use_contrib_qdq = false) { + return [input_shape, axis, use_contrib_qdq](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, std::numeric_limits::min(), std::numeric_limits::max()); @@ -456,7 +478,7 @@ GetQDQTestCaseFn BuildQDQSplitTestCase( InputType dq_zp = std::numeric_limits::max() / 2; OutputType q_zp = std::numeric_limits::max() / 2; auto* dq_output = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input_arg, .003f, dq_zp, dq_output); + builder.AddDequantizeLinearNode(input_arg, .003f, dq_zp, dq_output, use_contrib_qdq); // add Split @@ -473,17 +495,21 @@ GetQDQTestCaseFn BuildQDQSplitTestCase( auto* q_split_output_1 = builder.MakeOutput(); auto* q_split_output_2 = builder.MakeOutput(); auto* q_split_output_3 = builder.MakeOutput(); - builder.AddQuantizeLinearNode(split_output_1, .003f, q_zp, q_split_output_1); // Model input (node_token_1) - 
builder.AddQuantizeLinearNode(split_output_2, .003f, q_zp, q_split_output_2); // Model input (node_token_2) - builder.AddQuantizeLinearNode(split_output_3, .003f, q_zp, q_split_output_3); + builder.AddQuantizeLinearNode(split_output_1, .003f, q_zp, q_split_output_1, + use_contrib_qdq); // Model input (node_token_1) + builder.AddQuantizeLinearNode(split_output_2, .003f, q_zp, q_split_output_2, + use_contrib_qdq); // Model input (node_token_2) + builder.AddQuantizeLinearNode(split_output_3, .003f, q_zp, q_split_output_3, + use_contrib_qdq); }; } template GetQDQTestCaseFn BuildQDQWhereTestCase( const std::vector& cond_shape, const std::vector& x_shape, - const std::vector& y_shape) { - return [cond_shape, x_shape, y_shape](ModelTestBuilder& builder) { + const std::vector& y_shape, + bool use_contrib_qdq = false) { + return [cond_shape, x_shape, y_shape, use_contrib_qdq](ModelTestBuilder& builder) { auto* input_cond_arg = builder.MakeInputBool(cond_shape); auto* input_x_arg = builder.MakeInput(x_shape, std::numeric_limits::min(), @@ -495,8 +521,8 @@ GetQDQTestCaseFn BuildQDQWhereTestCase( constexpr float scale = 0.003f; auto* dq_x_output = builder.MakeIntermediate(); auto* dq_y_output = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input_x_arg, scale, zp, dq_x_output); - builder.AddDequantizeLinearNode(input_y_arg, scale, zp, dq_y_output); + builder.AddDequantizeLinearNode(input_x_arg, scale, zp, dq_x_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(input_y_arg, scale, zp, dq_y_output, use_contrib_qdq); // add Where auto* where_output = builder.MakeIntermediate(); @@ -504,15 +530,17 @@ GetQDQTestCaseFn BuildQDQWhereTestCase( // add Q auto* q_where_output = builder.MakeOutput(); - builder.AddQuantizeLinearNode(where_output, scale, zp, q_where_output); // Model input (node_token_1) + builder.AddQuantizeLinearNode(where_output, scale, zp, q_where_output, + use_contrib_qdq); // Model input (node_token_1) }; } template GetQDQTestCaseFn 
BuildQDQTransposeTestCase( const std::vector& input_shape, - const std::vector& perms) { - return [input_shape, perms](ModelTestBuilder& builder) { + const std::vector& perms, + bool use_contrib_qdq = false) { + return [input_shape, perms, use_contrib_qdq](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, std::numeric_limits::min(), std::numeric_limits::max()); @@ -523,7 +551,7 @@ GetQDQTestCaseFn BuildQDQTransposeTestCase( // add DQ auto* dq_output = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input_arg, .003f, dq_zp, dq_output); + builder.AddDequantizeLinearNode(input_arg, .003f, dq_zp, dq_output, use_contrib_qdq); // add Transpose auto* transpose_output = builder.MakeIntermediate(); @@ -531,7 +559,7 @@ GetQDQTestCaseFn BuildQDQTransposeTestCase( transpose_node.AddAttribute("perm", perms); // add Q - builder.AddQuantizeLinearNode(transpose_output, .003f, q_zp, output_arg); + builder.AddQuantizeLinearNode(transpose_output, .003f, q_zp, output_arg, use_contrib_qdq); }; } @@ -567,7 +595,8 @@ GetQDQTestCaseFn BuildQDQConcatTestCase(const std::vector>& int64_t axis, bool has_input_float = false, bool has_input_int8 = false, - bool has_output_int8 = false); + bool has_output_int8 = false, + bool use_contrib_qdq = false); GetQDQTestCaseFn BuildQDQConcatTestCaseUnsupportedInputScaleZp(); @@ -639,7 +668,7 @@ GetQDQTestCaseFn BuildQDQGemmTestCase(const std::vector& input1_shape, }; } -std::vector GetNodeOpTypesInTopologicalOrder(const Graph& graph); +std::vector GetNodeOpTypesInTopologicalOrder(const Graph& graph, bool include_domain = false); } // namespace test } // namespace onnxruntime diff --git a/onnxruntime/test/optimizer/qdq_transformer_test.cc b/onnxruntime/test/optimizer/qdq_transformer_test.cc index be399ce8db60d..0dfeb599d0ae3 100644 --- a/onnxruntime/test/optimizer/qdq_transformer_test.cc +++ b/onnxruntime/test/optimizer/qdq_transformer_test.cc @@ -34,6 +34,18 @@ #include 
"core/providers/shared/node_unit/node_unit.h" #endif // #ifdef USE_NNAPI +struct QDQOpKeys { + const char* quantize_linear; + const char* dequantize_linear; +}; + +constexpr QDQOpKeys GetQDQOpKeys(bool use_contrib_qdq) { + if (use_contrib_qdq) { + return {"com.microsoft.QuantizeLinear", "com.microsoft.DequantizeLinear"}; + } + return {"QuantizeLinear", "DequantizeLinear"}; +} + namespace onnxruntime { namespace test { @@ -41,25 +53,29 @@ namespace test { template void QDQTransformerConvTests() { - auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape) { + auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, + bool use_contrib_qdq = false) { auto check_graph = [&](InferenceSessionWrapper& session) { + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); + auto op_to_count = CountOpsInGraph(session.GetGraph()); if constexpr (std::is_same::value && std::is_same::value && (std::is_same::value || QDQIsInt8Allowed() && std::is_same::value)) { EXPECT_EQ(op_to_count["QLinearConv"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); } else { EXPECT_EQ(op_to_count["Conv"], 1); EXPECT_EQ(op_to_count["QLinearConv"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 4); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 4); } }; - TransformerTester(BuildQDQConvTestCase(input_shape, weights_shape), + TransformerTester(BuildQDQConvTestCase(input_shape, weights_shape, + use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, @@ -67,7 +83,8 @@ void QDQTransformerConvTests() { 0.01 /*per_sample_tolerance*/, 0.01 /*relative_per_sample_tolerance*/, std::make_unique(QDQIsInt8Allowed())); - 
TransformerTester(BuildQDQConvTestCase(input_shape, weights_shape), + TransformerTester(BuildQDQConvTestCase(input_shape, weights_shape, + use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, @@ -75,7 +92,8 @@ void QDQTransformerConvTests() { 0.01 /*per_sample_tolerance*/, 0.01 /*relative_per_sample_tolerance*/, std::make_unique(QDQIsInt8Allowed())); - TransformerTester(BuildQDQConvTestCase(input_shape, weights_shape), + TransformerTester(BuildQDQConvTestCase(input_shape, weights_shape, + use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, @@ -91,6 +109,7 @@ void QDQTransformerConvTests() { test_case({1, 23, 13, 13}, {30, 23, 3, 3}); test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}); test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}); + test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, true); // Use com.microsoft QDQ ops } TEST(QDQTransformerTests, Conv_U8X8U8) { @@ -123,7 +142,8 @@ TEST(QDQTransformerTests, Conv_S8X8S8) { } TEST(QDQTransformerTests, ConvMaxPoolReshape_UInt8) { - auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, int opset_version) { + auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, + int opset_version, bool use_contrib_qdq = false) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); @@ -132,12 +152,12 @@ TEST(QDQTransformerTests, ConvMaxPoolReshape_UInt8) { // add QDQ + Conv auto* dq_w_output = builder.MakeIntermediate(); auto* conv_output = builder.MakeIntermediate(); - auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 129); - builder.AddDequantizeLinearNode(weight, .003f, 118, dq_w_output); + auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 129, use_contrib_qdq); + builder.AddDequantizeLinearNode(weight, .003f, 118, dq_w_output, use_contrib_qdq); 
builder.AddConvNode(dq_conv_output, dq_w_output, conv_output); // add QDQ + MaxPool - auto* dq_maxpool_output = AddQDQNodePair(builder, conv_output, .0039f, 135); + auto* dq_maxpool_output = AddQDQNodePair(builder, conv_output, .0039f, 135, use_contrib_qdq); auto* maxpool_output = builder.MakeIntermediate(); Node& pool_node = builder.AddNode("MaxPool", {dq_maxpool_output}, {maxpool_output}); std::vector pads((weights_shape.size() - 2) * 2, 1); @@ -146,22 +166,23 @@ TEST(QDQTransformerTests, ConvMaxPoolReshape_UInt8) { pool_node.AddAttribute("kernel_shape", kernel_shape); // add QDQ + Reshape - auto* dq_reshape_output = AddQDQNodePair(builder, maxpool_output, .0039f, 135); + auto* dq_reshape_output = AddQDQNodePair(builder, maxpool_output, .0039f, 135, use_contrib_qdq); auto* reshape_shape = builder.Make1DInitializer({-1}); auto* reshape_output = builder.MakeIntermediate(); builder.AddNode("Reshape", {dq_reshape_output, reshape_shape}, {reshape_output}); // add Q - builder.AddQuantizeLinearNode(reshape_output, .0039f, 135, output_arg); + builder.AddQuantizeLinearNode(reshape_output, .0039f, 135, output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); auto op_to_count = CountOpsInGraph(session.GetGraph()); EXPECT_EQ(op_to_count["QLinearConv"], 1); EXPECT_EQ(op_to_count["MaxPool"], 1); EXPECT_EQ(op_to_count["Reshape"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], opset_version < 12 ? 2 : 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], opset_version < 12 ? 1 : 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], opset_version < 12 ? 2 : 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], opset_version < 12 ? 
1 : 0); }; TransformerTester(build_test_case, @@ -175,20 +196,25 @@ TEST(QDQTransformerTests, ConvMaxPoolReshape_UInt8) { test_case({1, 12, 37}, {32, 12, 5}, 12); test_case({1, 12, 37}, {32, 12, 5}, 18); test_case({1, 12, 37}, {32, 12, 5}, 19); + test_case({1, 12, 37}, {32, 12, 5}, 11, true); // Use com.microsoft QDQ ops test_case({1, 23, 13, 13}, {30, 23, 3, 3}, 11); test_case({1, 23, 13, 13}, {30, 23, 3, 3}, 12); test_case({1, 23, 13, 13}, {30, 23, 3, 3}, 18); test_case({1, 23, 13, 13}, {30, 23, 3, 3}, 19); + test_case({1, 23, 13, 13}, {30, 23, 3, 3}, 12, true); // Use com.microsoft QDQ ops test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, 11); test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, 12); test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, 18); test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, 19); + test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, 18, true); // Use com.microsoft QDQ ops + test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, 19, true); // Use com.microsoft QDQ ops } TEST(QDQTransformerTests, ConvMaxPoolReshape_Int8) { - auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape) { + auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, + bool use_contrib_qdq = false) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); @@ -197,12 +223,12 @@ TEST(QDQTransformerTests, ConvMaxPoolReshape_Int8) { // add QDQ + Conv auto* dq_w_output = builder.MakeIntermediate(); auto* conv_output = builder.MakeIntermediate(); - auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 1); - builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output); + auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 1, use_contrib_qdq); + builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output, use_contrib_qdq); builder.AddConvNode(dq_conv_output, dq_w_output, conv_output); // 
add QDQ + MaxPool - auto* dq_maxpool_output = AddQDQNodePair(builder, conv_output, .0039f, 7); + auto* dq_maxpool_output = AddQDQNodePair(builder, conv_output, .0039f, 7, use_contrib_qdq); auto* maxpool_output = builder.MakeIntermediate(); Node& pool_node = builder.AddNode("MaxPool", {dq_maxpool_output}, {maxpool_output}); std::vector pads((weights_shape.size() - 2) * 2, 1); @@ -211,26 +237,27 @@ TEST(QDQTransformerTests, ConvMaxPoolReshape_Int8) { pool_node.AddAttribute("kernel_shape", kernel_shape); // add QDQ + Reshape - auto* dq_reshape_output = AddQDQNodePair(builder, maxpool_output, .0039f, 7); + auto* dq_reshape_output = AddQDQNodePair(builder, maxpool_output, .0039f, 7, use_contrib_qdq); auto* reshape_shape = builder.Make1DInitializer({-1}); auto* reshape_output = builder.MakeIntermediate(); builder.AddNode("Reshape", {dq_reshape_output, reshape_shape}, {reshape_output}); // add Q if constexpr (QDQIsInt8Allowed()) { - builder.AddQuantizeLinearNode(reshape_output, .0039f, 7, output_arg); + builder.AddQuantizeLinearNode(reshape_output, .0039f, 7, output_arg, use_contrib_qdq); } else { - builder.AddQuantizeLinearNode(reshape_output, .0039f, 135, output_arg); + builder.AddQuantizeLinearNode(reshape_output, .0039f, 135, output_arg, use_contrib_qdq); } }; auto check_graph = [&](InferenceSessionWrapper& session) { + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); const std::vector expected_op_types_in_order{ - "QuantizeLinear", + qdq_keys.quantize_linear, "QLinearConv", "MaxPool", "Reshape"}; - const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph()); + const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph(), true); EXPECT_EQ(op_types_in_order, expected_op_types_in_order); }; @@ -240,91 +267,102 @@ TEST(QDQTransformerTests, ConvMaxPoolReshape_Int8) { test_case({1, 12, 37}, {32, 12, 5}); test_case({1, 23, 13, 13}, {30, 23, 3, 3}); test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}); + test_case({1, 22, 
11, 13, 15}, {30, 22, 5, 3, 3}, true); // Use com.microsoft QDQ ops } #if (defined(_M_AMD64) && !defined(_M_ARM64EC)) || defined(_M_IX86) || defined(__x86_64__) || defined(__i386__) || !defined(DISABLE_CONTRIB_OPS) TEST(QDQTransformerTests, DQ_S8_to_U8) { - const std::vector& input_shape = {19, 37}; - const std::vector& weights_shape = {37, 23}; + auto test_case = [](bool use_contrib_qdq) { + const std::vector& input_shape = {19, 37}; + const std::vector& weights_shape = {37, 23}; - auto build_test_case = [&](ModelTestBuilder& builder) { - auto* input1_arg = builder.MakeInput(input_shape, -1.f, 1.f); + auto build_test_case = [&](ModelTestBuilder& builder) { + auto* input1_arg = builder.MakeInput(input_shape, -1.f, 1.f); - // Use full range weight values to expose avx2 u8s8 overflow problems - auto* weight = builder.MakeInitializer(weights_shape, -128, 127); - auto* output_arg = builder.MakeOutput(); + // Use full range weight values to expose avx2 u8s8 overflow problems + auto* weight = builder.MakeInitializer(weights_shape, -128, 127); + auto* output_arg = builder.MakeOutput(); - // add QDQ activation - typedef std::numeric_limits Input1Limits; - auto* dq1_output = AddQDQNodePair(builder, input1_arg, .039f, (int8_t)((Input1Limits::max() + Input1Limits::min()) / 2 + 1)); + // add QDQ activation + typedef std::numeric_limits Input1Limits; + auto* dq1_output = AddQDQNodePair(builder, input1_arg, .039f, + (int8_t)((Input1Limits::max() + Input1Limits::min()) / 2 + 1), + use_contrib_qdq); - // add DQ weight - auto* dq_w_output = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output); + // add DQ weight + auto* dq_w_output = builder.MakeIntermediate(); + builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output, use_contrib_qdq); - builder.AddNode("MatMul", {dq1_output, dq_w_output}, {output_arg}); - }; + builder.AddNode("MatMul", {dq1_output, dq_w_output}, {output_arg}); + }; - auto check_graph = [&](InferenceSessionWrapper& 
session) { - auto op_to_count = CountOpsInGraph(session.GetGraph()); - EXPECT_EQ(op_to_count["com.microsoft.MatMulIntegerToFloat"], 1); - EXPECT_EQ(op_to_count["MatMul"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); - }; + auto check_graph = [&](InferenceSessionWrapper& session) { + auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); + EXPECT_EQ(op_to_count["com.microsoft.MatMulIntegerToFloat"], 1); + EXPECT_EQ(op_to_count["MatMul"], 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); + }; + + auto add_session_options = [&](SessionOptions& so) { + ASSERT_STATUS_OK(so.config_options.AddConfigEntry( + kOrtSessionOptionsAvx2PrecisionMode, "1")); + }; - auto add_session_options = [&](SessionOptions& so) { - ASSERT_STATUS_OK(so.config_options.AddConfigEntry( - kOrtSessionOptionsAvx2PrecisionMode, "1")); + TransformerTester(build_test_case, + check_graph, + TransformerLevel::Level1, + TransformerLevel::Level2, + 12 /*opset_version*/, + 0.01 /*per_sample_tolerance*/, + 0.01 /*relative_per_sample_tolerance*/, + nullptr, add_session_options); + TransformerTester(build_test_case, + check_graph, + TransformerLevel::Level1, + TransformerLevel::Level2, + 18 /*opset_version*/, + 0.01 /*per_sample_tolerance*/, + 0.01 /*relative_per_sample_tolerance*/, + nullptr, add_session_options); + TransformerTester(build_test_case, + check_graph, + TransformerLevel::Level1, + TransformerLevel::Level2, + 19 /*opset_version*/, + 0.01 /*per_sample_tolerance*/, + 0.01 /*relative_per_sample_tolerance*/, + nullptr, add_session_options); }; - TransformerTester(build_test_case, - check_graph, - TransformerLevel::Level1, - TransformerLevel::Level2, - 12 /*opset_version*/, - 0.01 /*per_sample_tolerance*/, - 0.01 /*relative_per_sample_tolerance*/, - nullptr, add_session_options); - TransformerTester(build_test_case, - 
check_graph, - TransformerLevel::Level1, - TransformerLevel::Level2, - 18 /*opset_version*/, - 0.01 /*per_sample_tolerance*/, - 0.01 /*relative_per_sample_tolerance*/, - nullptr, add_session_options); - TransformerTester(build_test_case, - check_graph, - TransformerLevel::Level1, - TransformerLevel::Level2, - 19 /*opset_version*/, - 0.01 /*per_sample_tolerance*/, - 0.01 /*relative_per_sample_tolerance*/, - nullptr, add_session_options); + test_case(false); // Use ONNX QDQ ops + test_case(true); // Use com.microsoft QDQ ops } #endif // Only for X64 with contrib ops enabled template void QDQTransformerAveragePoolTests() { - auto test_case = [&](const std::vector& input_shape) { + auto test_case = [&](const std::vector& input_shape, bool use_contrib_qdq = false) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); if constexpr (std::is_same::value) { EXPECT_EQ(op_to_count["com.microsoft.QLinearAveragePool"], 1); EXPECT_EQ(op_to_count["AveragePool"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); } else { EXPECT_EQ(op_to_count["com.microsoft.QLinearAveragePool"], 0); EXPECT_EQ(op_to_count["AveragePool"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 2); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2); } }; - TransformerTester(BuildQDQAveragePoolTestCase(input_shape), + TransformerTester(BuildQDQAveragePoolTestCase(input_shape, 0 /*count_include_pad*/, + use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, @@ -332,7 +370,8 @@ void QDQTransformerAveragePoolTests() { 0.01 /*per_sample_tolerance*/, 0.01 /*relative_per_sample_tolerance*/, 
std::make_unique(QDQIsInt8Allowed())); - TransformerTester(BuildQDQAveragePoolTestCase(input_shape), + TransformerTester(BuildQDQAveragePoolTestCase(input_shape, 0 /*count_include_pad*/, + use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, @@ -346,6 +385,7 @@ void QDQTransformerAveragePoolTests() { test_case({1, 12, 37}); test_case({1, 23, 13, 13}); test_case({1, 22, 11, 13, 15}); + test_case({1, 12, 37}, true); // Use com.microsoft QDQ ops } TEST(QDQTransformerTests, AveragePool_S8S8) { @@ -366,23 +406,24 @@ TEST(QDQTransformerTests, AveragePool_U8S8) { template void QDQTransformerGlobalAveragePoolTests() { - auto test_case = [&](const std::vector& input_shape) { + auto test_case = [&](const std::vector& input_shape, bool use_contrib_qdq = false) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); if constexpr (std::is_same::value) { EXPECT_EQ(op_to_count["com.microsoft.QLinearGlobalAveragePool"], 1); EXPECT_EQ(op_to_count["GlobalAveragePool"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); } else { EXPECT_EQ(op_to_count["com.microsoft.QLinearGlobalAveragePool"], 0); EXPECT_EQ(op_to_count["GlobalAveragePool"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 2); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2); } }; - TransformerTester(BuildQDQGlobalAveragePoolTestCase(input_shape), + TransformerTester(BuildQDQGlobalAveragePoolTestCase(input_shape, use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, @@ -390,7 +431,7 @@ void QDQTransformerGlobalAveragePoolTests() { 0.01 /*per_sample_tolerance*/, 0.01 
/*relative_per_sample_tolerance*/, std::make_unique(QDQIsInt8Allowed())); - TransformerTester(BuildQDQGlobalAveragePoolTestCase(input_shape), + TransformerTester(BuildQDQGlobalAveragePoolTestCase(input_shape, use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, @@ -404,6 +445,7 @@ void QDQTransformerGlobalAveragePoolTests() { test_case({1, 12, 37}); test_case({1, 23, 13, 13}); test_case({1, 22, 11, 13, 15}); + test_case({1, 12, 37}, true); // Use com.microsoft QDQ ops } TEST(QDQTransformerTests, GlobalAveragePool_S8S8) { @@ -424,24 +466,25 @@ TEST(QDQTransformerTests, GlobalAveragePool_U8S8) { template void QDQTransformerBinaryOpTests(const std::string& op_type) { - auto test_case = [&](const std::vector& input_shape) { + auto test_case = [&](const std::vector& input_shape, bool use_contrib_qdq = false) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); if (std::is_same::value && std::is_same::value) { EXPECT_EQ(op_to_count["com.microsoft.QLinear" + op_type], 1); EXPECT_EQ(op_to_count[op_type], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); } else { EXPECT_EQ(op_to_count["com.microsoft.QLinear" + op_type], 0); EXPECT_EQ(op_to_count[op_type], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 3); - EXPECT_EQ(op_to_count["DequantizeLinear"], 3); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 3); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 3); } }; - TransformerTester(BuildBinaryOpTestCase(input_shape, op_type), + TransformerTester(BuildBinaryOpTestCase(input_shape, op_type, use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, @@ -449,7 +492,7 @@ void QDQTransformerBinaryOpTests(const std::string& op_type) { 0.01 
/*per_sample_tolerance*/, 0.01 /*relative_per_sample_tolerance*/, std::make_unique(QDQIsInt8Allowed())); - TransformerTester(BuildBinaryOpTestCase(input_shape, op_type), + TransformerTester(BuildBinaryOpTestCase(input_shape, op_type, use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, @@ -457,7 +500,7 @@ void QDQTransformerBinaryOpTests(const std::string& op_type) { 0.01 /*per_sample_tolerance*/, 0.01 /*relative_per_sample_tolerance*/, std::make_unique(QDQIsInt8Allowed())); - TransformerTester(BuildBinaryOpTestCase(input_shape, op_type), + TransformerTester(BuildBinaryOpTestCase(input_shape, op_type, use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, @@ -470,6 +513,7 @@ void QDQTransformerBinaryOpTests(const std::string& op_type) { test_case({1, 12, 37}); test_case({1, 23, 13, 13}); test_case({1, 22, 11, 13, 15}); + test_case({1, 12, 37}, true); // Use com.microsoft QDQ ops } TEST(QDQTransformerTests, Add) { @@ -502,7 +546,8 @@ TEST(QDQTransformerTests, Mul_Have_Different_Types) { template void QDQTransformerMatMulTests(bool has_output_q) { - auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape) { + auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape, + bool use_contrib_qdq = false) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input1_arg = builder.MakeInput(input1_shape, -1.f, 1.f); auto* input2_arg = builder.MakeInput(input2_shape, -1.f, 1.f); @@ -518,11 +563,11 @@ void QDQTransformerMatMulTests(bool has_output_q) { builder.AddQuantizeLinearNode(input1_arg, .039f, (Input1Limits::max() + Input1Limits::min()) / 2 + 1, - q1_output); + q1_output, use_contrib_qdq); builder.AddDequantizeLinearNode(q1_output, .039f, (Input2Limits::max() + Input1Limits::min()) / 2 + 1, - dq1_output); + dq1_output, use_contrib_qdq); // add QDQ 2 auto* q2_output = builder.MakeIntermediate(); @@ -530,11 +575,11 @@ void 
QDQTransformerMatMulTests(bool has_output_q) { builder.AddQuantizeLinearNode(input2_arg, .04f, (Input2Limits::max() + Input2Limits::min()) / 2 + 1, - q2_output); + q2_output, use_contrib_qdq); builder.AddDequantizeLinearNode(q2_output, .04f, (Input2Limits::max() + Input2Limits::min()) / 2 + 1, - dq2_output); + dq2_output, use_contrib_qdq); if (has_output_q) { // add binary operator @@ -546,11 +591,11 @@ void QDQTransformerMatMulTests(bool has_output_q) { builder.AddQuantizeLinearNode(matmul_op_output, .039f, (OutputTypeLimits::max() + OutputTypeLimits::min()) / 2 + 1, - q3_output); + q3_output, use_contrib_qdq); builder.AddDequantizeLinearNode(q3_output, .039f, (OutputTypeLimits::max() + OutputTypeLimits::min()) / 2 + 1, - output_arg); + output_arg, use_contrib_qdq); } else { builder.AddNode("MatMul", {dq1_output, dq2_output}, {output_arg}); } @@ -558,32 +603,33 @@ void QDQTransformerMatMulTests(bool has_output_q) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); if (has_output_q) { if constexpr (std::is_same::value && (std::is_same::value || QDQIsInt8Allowed() && std::is_same::value)) { EXPECT_EQ(op_to_count["QLinearMatMul"], 1); EXPECT_EQ(op_to_count["MatMul"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); } else { EXPECT_EQ(op_to_count["QLinearMatMul"], 0); EXPECT_EQ(op_to_count["MatMul"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 3); - EXPECT_EQ(op_to_count["DequantizeLinear"], 3); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 3); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 3); } } else { if constexpr (std::is_same::value || (QDQIsInt8Allowed() && std::is_same::value)) { EXPECT_EQ(op_to_count["com.microsoft.MatMulIntegerToFloat"], 1); 
EXPECT_EQ(op_to_count["MatMul"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); } else { EXPECT_EQ(op_to_count["com.microsoft.MatMulIntegerToFloat"], 0); EXPECT_EQ(op_to_count["MatMul"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 2); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2); } } }; @@ -617,6 +663,7 @@ void QDQTransformerMatMulTests(bool has_output_q) { test_case({1, 2, 2}, {1, 2, 4}); test_case({1, 23, 13, 13}, {13, 13}); test_case({1, 22, 11, 13, 15}, {1, 22, 11, 15, 15}); + test_case({1, 2, 2}, {1, 2, 4}, true); // Use com.microsoft QDQ ops } TEST(QDQTransformerTests, MatMul_U8U8U8) { @@ -661,7 +708,8 @@ TEST(QDQTransformerTests, MatMul_S8S8U8) { template void QDQTransformerGemmTests(bool has_output_q, bool has_bias, bool beta_not_one = false) { - auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape) { + auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape, + bool use_contrib_qdq = false) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input1_arg = builder.MakeInput(input1_shape, -1.f, 1.f); auto* input2_arg = builder.MakeInput(input2_shape, -1.f, 1.f); @@ -679,11 +727,11 @@ void QDQTransformerGemmTests(bool has_output_q, bool has_bias, bool beta_not_one builder.AddQuantizeLinearNode(input1_arg, .039f, (Input1Limits::max() + Input1Limits::min()) / 2 + 1, - q1_output); + q1_output, use_contrib_qdq); builder.AddDequantizeLinearNode(q1_output, .039f, (Input2Limits::max() + Input1Limits::min()) / 2 + 1, - dq1_output); + dq1_output, use_contrib_qdq); input_args.push_back(dq1_output); @@ -693,11 +741,11 @@ void QDQTransformerGemmTests(bool has_output_q, bool has_bias, bool beta_not_one 
builder.AddQuantizeLinearNode(input2_arg, .04f, (Input2Limits::max() + Input2Limits::min()) / 2 + 1, - q2_output); + q2_output, use_contrib_qdq); builder.AddDequantizeLinearNode(q2_output, .04f, (Input2Limits::max() + Input2Limits::min()) / 2 + 1, - dq2_output); + dq2_output, use_contrib_qdq); input_args.push_back(dq2_output); if (has_bias) { @@ -705,7 +753,7 @@ void QDQTransformerGemmTests(bool has_output_q, bool has_bias, bool beta_not_one auto* bias = builder.MakeInitializer({input2_shape[1]}, static_cast(0), static_cast(127)); builder.AddDequantizeLinearNode(bias, 0.00156f, 0, - dq_bias_output); + dq_bias_output, use_contrib_qdq); input_args.push_back(dq_bias_output); } @@ -720,11 +768,11 @@ void QDQTransformerGemmTests(bool has_output_q, bool has_bias, bool beta_not_one builder.AddQuantizeLinearNode(gemm_op_output, .039f, (OutputTypeLimits::max() + OutputTypeLimits::min()) / 2 + 1, - q3_output); + q3_output, use_contrib_qdq); builder.AddDequantizeLinearNode(q3_output, .039f, (OutputTypeLimits::max() + OutputTypeLimits::min()) / 2 + 1, - output_arg); + output_arg, use_contrib_qdq); } else { gemm_node = &builder.AddNode("Gemm", input_args, {output_arg}); } @@ -736,12 +784,13 @@ void QDQTransformerGemmTests(bool has_output_q, bool has_bias, bool beta_not_one auto check_binary_op_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); if ((!has_output_q || std::is_same_v)&&(!has_bias || (std::is_same_v && !beta_not_one)) && (std::is_same_v || std::is_same_v)) { EXPECT_EQ(op_to_count["com.microsoft.QGemm"], 1); EXPECT_EQ(op_to_count["Gemm"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], has_output_q ? 1 : 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], has_output_q ? 
1 : 0); } else { int q_count = 2; // Q for A and B int dq_count = 2; // DQ for A and B @@ -754,8 +803,8 @@ void QDQTransformerGemmTests(bool has_output_q, bool has_bias, bool beta_not_one } EXPECT_EQ(op_to_count["com.microsoft.QGemm"], 0); EXPECT_EQ(op_to_count["Gemm"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], q_count); - EXPECT_EQ(op_to_count["DequantizeLinear"], dq_count); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], q_count); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], dq_count); } }; @@ -787,6 +836,7 @@ void QDQTransformerGemmTests(bool has_output_q, bool has_bias, bool beta_not_one test_case({2, 2}, {2, 4}); test_case({13, 15}, {15, 15}); + test_case({2, 2}, {2, 4}, true); // Use com.microsoft QDQ ops } template @@ -842,7 +892,8 @@ TEST(QDQTransformerTests, Gemm_S8S8U8) { } TEST(QDQTransformerTests, Gather) { - auto test_case = [&](const std::vector& input1_shape, const std::vector& weights_shape) { + auto test_case = [&](const std::vector& input1_shape, const std::vector& weights_shape, + bool use_contrib_qdq = false) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input1_arg = builder.MakeInput(input1_shape, 0, weights_shape[0] - 1); auto* output_arg = builder.MakeOutput(); @@ -851,24 +902,26 @@ TEST(QDQTransformerTests, Gather) { auto* weight = builder.MakeInitializer(weights_shape, -128, 127); auto* dq_w_output = builder.MakeIntermediate(); auto* gather_output = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(weight, .003f, 1, dq_w_output); + builder.AddDequantizeLinearNode(weight, .003f, 1, dq_w_output, use_contrib_qdq); builder.AddNode("Gather", {dq_w_output, input1_arg}, {gather_output}); // add Q - builder.AddQuantizeLinearNode(gather_output, .003f, 1, output_arg); + builder.AddQuantizeLinearNode(gather_output, .003f, 1, output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = 
GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["Gather"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 0); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 0); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); }; TransformerTester(build_test_case, check_graph, TransformerLevel::Level1, TransformerLevel::Level2); }; test_case({12, 37}, {24, 12}); + test_case({12, 37}, {24, 12}, true); // Use com.microsoft QDQ ops } TEST(QDQTransformerTests, DoubleQDQ) { @@ -884,23 +937,33 @@ TEST(QDQTransformerTests, DoubleQDQ) { constexpr float good_float_point_2 = 8.0f; constexpr float bad_float_point = 1.11f; - std::function expect_succeed = [&](InferenceSessionWrapper& session) { - auto op_to_count = CountOpsInGraph(session.GetGraph()); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + auto expect_succeed = [](bool use_contrib_qdq) -> std::function { + return [use_contrib_qdq](InferenceSessionWrapper& session) { + auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); + }; }; - std::function expect_fail = [&](InferenceSessionWrapper& session) { - auto op_to_count = CountOpsInGraph(session.GetGraph()); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 2); + + auto expect_fail = [](bool use_contrib_qdq) -> std::function { + return [use_contrib_qdq](InferenceSessionWrapper& session) { + auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2); + }; }; auto test_case_all_u8 = [&](bool succeed, uint8_t zp_1, uint8_t zp_2, uint8_t zp_3, uint8_t zp_4, - float scale_1, float scale_2, float 
scale_3, float scale_4) { + float scale_1, float scale_2, float scale_3, float scale_4, + bool use_contrib_qdq = false) { TransformerTester( - BuildDoubleQDQTestCases(zp_1, zp_2, zp_3, zp_4, scale_1, scale_2, scale_3, scale_4), - succeed ? expect_succeed : expect_fail, + BuildDoubleQDQTestCases(zp_1, zp_2, zp_3, zp_4, + scale_1, scale_2, scale_3, scale_4, + use_contrib_qdq), + succeed ? expect_succeed(use_contrib_qdq) : expect_fail(use_contrib_qdq), TransformerLevel::Default, TransformerLevel::Level1, 12, @@ -910,26 +973,33 @@ TEST(QDQTransformerTests, DoubleQDQ) { auto test_case_all_s8 = [&](bool succeed, int8_t zp_1, int8_t zp_2, int8_t zp_3, int8_t zp_4, - float scale_1, float scale_2, float scale_3, float scale_4) { + float scale_1, float scale_2, float scale_3, float scale_4, + bool use_contrib_qdq = false) { TransformerTester( - BuildDoubleQDQTestCases(zp_1, zp_2, zp_3, zp_4, scale_1, scale_2, scale_3, scale_4), - succeed ? expect_succeed : expect_fail, + BuildDoubleQDQTestCases(zp_1, zp_2, zp_3, zp_4, + scale_1, scale_2, scale_3, scale_4, + use_contrib_qdq), + succeed ? expect_succeed(use_contrib_qdq) : expect_fail(use_contrib_qdq), TransformerLevel::Default, TransformerLevel::Level1, 12, (scale_1 + scale_3) / 2, 0.01); TransformerTester( - BuildDoubleQDQTestCases(zp_1, zp_2, zp_3, zp_4, scale_1, scale_2, scale_3, scale_4), - succeed ? expect_succeed : expect_fail, + BuildDoubleQDQTestCases(zp_1, zp_2, zp_3, zp_4, + scale_1, scale_2, scale_3, scale_4, + use_contrib_qdq), + succeed ? expect_succeed(use_contrib_qdq) : expect_fail(use_contrib_qdq), TransformerLevel::Default, TransformerLevel::Level1, 18, (scale_1 + scale_3) / 2, 0.01); TransformerTester( - BuildDoubleQDQTestCases(zp_1, zp_2, zp_3, zp_4, scale_1, scale_2, scale_3, scale_4), - succeed ? expect_succeed : expect_fail, + BuildDoubleQDQTestCases(zp_1, zp_2, zp_3, zp_4, + scale_1, scale_2, scale_3, scale_4, + use_contrib_qdq), + succeed ? 
expect_succeed(use_contrib_qdq) : expect_fail(use_contrib_qdq), TransformerLevel::Default, TransformerLevel::Level1, 19, @@ -938,144 +1008,203 @@ TEST(QDQTransformerTests, DoubleQDQ) { }; auto test_case_2u8_2s8_failed = [&](uint8_t zp_1, uint8_t zp_2, int8_t zp_3, int8_t zp_4, - float scale_1, float scale_2, float scale_3, float scale_4) { + float scale_1, float scale_2, float scale_3, float scale_4, + bool use_contrib_qdq = false) { TransformerTester( - BuildDoubleQDQTestCases(zp_1, zp_2, zp_3, zp_4, scale_1, scale_2, scale_3, scale_4), - expect_fail, + BuildDoubleQDQTestCases(zp_1, zp_2, zp_3, zp_4, + scale_1, scale_2, scale_3, scale_4, + use_contrib_qdq), + expect_fail(use_contrib_qdq), TransformerLevel::Default, TransformerLevel::Level1); }; // all unsigned type - test_case_all_u8(true, good_u8_1, good_u8_1, good_u8_2, good_u8_2, good_float_point_1, good_float_point_1, good_float_point_2, good_float_point_2); + test_case_all_u8(true, good_u8_1, good_u8_1, good_u8_2, good_u8_2, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2); + test_case_all_u8(true, good_u8_1, good_u8_1, good_u8_2, good_u8_2, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2, true); // Use com.microsoft QDQ ops + // all signed type - test_case_all_s8(true, good_s8_1, good_s8_1, good_s8_2, good_s8_2, good_float_point_1, good_float_point_1, good_float_point_2, good_float_point_2); + test_case_all_s8(true, good_s8_1, good_s8_1, good_s8_2, good_s8_2, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2); + test_case_all_s8(true, good_s8_1, good_s8_1, good_s8_2, good_s8_2, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2, true); // Use com.microsoft QDQ ops + // 2 signed, 2 unsigned - test_case_2u8_2s8_failed(good_u8_1, good_u8_1, good_s8_2, good_s8_2, good_float_point_1, good_float_point_1, good_float_point_2, good_float_point_2); + test_case_2u8_2s8_failed(good_u8_1, 
good_u8_1, good_s8_2, good_s8_2, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2); + test_case_2u8_2s8_failed(good_u8_1, good_u8_1, good_s8_2, good_s8_2, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2, true); // Use com.microsoft QDQ ops + // different zero point within a pair - test_case_all_u8(false, good_u8_1, bad_u8, good_u8_2, good_u8_2, good_float_point_1, good_float_point_1, good_float_point_2, good_float_point_2); - test_case_all_u8(false, good_u8_1, good_u8_1, good_u8_2, bad_u8, good_float_point_1, good_float_point_1, good_float_point_2, good_float_point_2); - test_case_all_s8(false, good_s8_1, bad_s8, good_s8_2, good_s8_2, good_float_point_1, good_float_point_1, good_float_point_2, good_float_point_2); - test_case_all_s8(false, good_s8_1, good_s8_1, good_s8_2, bad_s8, good_float_point_1, good_float_point_1, good_float_point_2, good_float_point_2); + test_case_all_u8(false, good_u8_1, bad_u8, good_u8_2, good_u8_2, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2); + test_case_all_u8(false, good_u8_1, bad_u8, good_u8_2, good_u8_2, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2, true); // Use com.microsoft QDQ ops + test_case_all_u8(false, good_u8_1, good_u8_1, good_u8_2, bad_u8, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2); + test_case_all_u8(false, good_u8_1, good_u8_1, good_u8_2, bad_u8, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2, true); // Use com.microsoft QDQ ops + test_case_all_s8(false, good_s8_1, bad_s8, good_s8_2, good_s8_2, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2); + test_case_all_s8(false, good_s8_1, bad_s8, good_s8_2, good_s8_2, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2, true); // Use com.microsoft QDQ ops + test_case_all_s8(false, good_s8_1, good_s8_1, 
good_s8_2, bad_s8, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2); + test_case_all_s8(false, good_s8_1, good_s8_1, good_s8_2, bad_s8, good_float_point_1, good_float_point_1, + good_float_point_2, good_float_point_2, true); // Use com.microsoft QDQ ops + // different scale within a pair - test_case_all_u8(false, good_u8_1, good_u8_1, good_u8_2, good_u8_2, good_float_point_1, bad_float_point, good_float_point_2, good_float_point_2); - test_case_all_u8(false, good_u8_1, good_u8_1, good_u8_2, good_u8_2, good_float_point_1, good_float_point_1, bad_float_point, good_float_point_2); + test_case_all_u8(false, good_u8_1, good_u8_1, good_u8_2, good_u8_2, good_float_point_1, bad_float_point, + good_float_point_2, good_float_point_2); + test_case_all_u8(false, good_u8_1, good_u8_1, good_u8_2, good_u8_2, good_float_point_1, bad_float_point, + good_float_point_2, good_float_point_2, true); // Use com.microsoft QDQ ops + test_case_all_u8(false, good_u8_1, good_u8_1, good_u8_2, good_u8_2, good_float_point_1, good_float_point_1, + bad_float_point, good_float_point_2); + test_case_all_u8(false, good_u8_1, good_u8_1, good_u8_2, good_u8_2, good_float_point_1, good_float_point_1, + bad_float_point, good_float_point_2, true); // Use com.microsoft QDQ ops } TEST(QDQTransformerTests, DoubleQDQ_Without_Last_Node_Being_Output) { - auto test_case = [&](int output_index, int expected_Q_count, int expected_DQ_count) { + auto test_case = [&](int output_index, int expected_Q_count, int expected_DQ_count, + bool use_contrib_qdq = false) { auto graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); - EXPECT_EQ(op_to_count["QuantizeLinear"], expected_Q_count); - EXPECT_EQ(op_to_count["DequantizeLinear"], expected_DQ_count); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], expected_Q_count); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 
expected_DQ_count); }; TransformerTester( - BuildDoubleQDQWithoutLastOutput(output_index), + BuildDoubleQDQWithoutLastOutput(output_index, use_contrib_qdq), graph, TransformerLevel::Default, TransformerLevel::Level1); }; + constexpr bool use_contrib_qdq = true; // For readability. + test_case(0, 2, 2); - test_case(1, 2, 3); // EnsureUniqueDQForNodeUnit will duplicate first DQ, so expect one more (3) + test_case(0, 2, 2, use_contrib_qdq); + test_case(1, 2, 3); // EnsureUniqueDQForNodeUnit will duplicate first DQ, so expect one more (3) + test_case(1, 2, 3, use_contrib_qdq); // EnsureUniqueDQForNodeUnit will duplicate first DQ, so expect one more (3) test_case(2, 2, 2); + test_case(2, 2, 2, use_contrib_qdq); test_case(3, 1, 1); + test_case(3, 1, 1, use_contrib_qdq); } + // Because split isn't one of the supported ops, this will stay the same TEST(QDQTransformerTests, Split) { - auto test_case = [&](const std::vector& input_shape, const int64_t& axis) { + auto test_case = [&](const std::vector& input_shape, const int64_t& axis, + bool use_contrib_qdq = false) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["Split"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 0); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 0); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); }; - TransformerTester(BuildQDQSplitTestCase(input_shape, axis), + TransformerTester(BuildQDQSplitTestCase(input_shape, axis, use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, {12, 18, 19}); }; test_case({6, 18, 54}, 0); + test_case({6, 18, 54}, 0, true); // Use com.microsoft QDQ ops } // Because split isn't one of the supported ops, this will stay the same TEST(QDQTransformerTests, Split_without_IdenticalChildrenConsolidation) { - auto test_case = [&](const std::vector&
input_shape, const int64_t& axis) { + auto test_case = [&](const std::vector& input_shape, const int64_t& axis, + bool use_contrib_qdq = false) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["Split"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 3); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 3); }; - TransformerTester(BuildConsolidationTestCase(input_shape, axis), + TransformerTester(BuildConsolidationTestCase(input_shape, axis, use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, {12, 18, 19}, {}, {}, nullptr, {}, {"IdenticalChildrenConsolidation"}); }; test_case({6, 18, 54}, 0); + test_case({6, 18, 54}, 0, true); // Use com.microsoft QDQ ops } TEST(QDQTransformerTests, Split_with_IdenticalChildrenConsolidation) { - auto test_case = [&](const std::vector& input_shape, const int64_t& axis) { + auto test_case = [&](const std::vector& input_shape, const int64_t& axis, + bool use_contrib_qdq = false) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["Split"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 3); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 3); }; - TransformerTester(BuildConsolidationTestCase(input_shape, axis), + TransformerTester(BuildConsolidationTestCase(input_shape, axis, use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2, {12, 18, 19}); }; test_case({6, 18, 54}, 0); + test_case({6, 18, 54}, 0, true); // Use com.microsoft QDQ ops } TEST(QDQTransformerTests, Where) { - 
auto test_case = [&](const std::vector& cond_shape, const std::vector& x_shape, const std::vector& y_shape) { + auto test_case = [&](const std::vector& cond_shape, const std::vector& x_shape, + const std::vector& y_shape, bool use_contrib_qdq = false) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["com.microsoft.QLinearWhere"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 0); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 0); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); }; - TransformerTester(BuildQDQWhereTestCase(cond_shape, x_shape, y_shape), + TransformerTester(BuildQDQWhereTestCase(cond_shape, x_shape, y_shape, use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2); }; test_case({1}, {1}, {1}); + test_case({1}, {1}, {1}, true /*use_contrib_qdq*/); } TEST(QDQTransformerTests, Transpose) { - auto test_case = [&](const std::vector& input_shape, const std::vector& perms) { + auto test_case = [&](const std::vector& input_shape, const std::vector& perms, + bool use_contrib_qdq = false) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["Transpose"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 0); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 0); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); }; - TransformerTester(BuildQDQTransposeTestCase(input_shape, perms), + TransformerTester(BuildQDQTransposeTestCase(input_shape, perms, use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2); }; test_case({2, 13, 12, 37}, {0, 3, 1, 2}); + test_case({2, 13, 12, 37}, {0, 3, 1, 2}, true 
/*use_contrib_qdq*/); } TEST(QDQTransformerTests, Transpose_No_Fusion) { - auto test_case = [&](const std::vector& input1_shape, const std::vector& perms) { + auto test_case = [&](const std::vector& input1_shape, const std::vector& perms, + bool use_contrib_qdq = false) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input1_arg = builder.MakeInput(input1_shape, -128, 127); auto* output_arg = builder.MakeOutput(); // add DQ auto* dq_output = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input1_arg, .003f, 1, dq_output); + builder.AddDequantizeLinearNode(input1_arg, .003f, 1, dq_output, use_contrib_qdq); // add Transpose auto* transpose_output = builder.MakeOutput(); // transpose output is graph output @@ -1083,32 +1212,42 @@ TEST(QDQTransformerTests, Transpose_No_Fusion) { transpose_node.AddAttribute("perm", perms); // add Q - builder.AddQuantizeLinearNode(transpose_output, .003f, 1, output_arg); + builder.AddQuantizeLinearNode(transpose_output, .003f, 1, output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); }; TransformerTester(build_test_case, check_graph, TransformerLevel::Level1, TransformerLevel::Level2); }; test_case({2, 13, 12, 37}, {0, 3, 1, 2}); + test_case({2, 13, 12, 37}, {0, 3, 1, 2}, true /*use_contrib_qdq*/); } TEST(QDQTransformerTests, Resize) { auto test_case = [&](const std::vector& input1_shape, - const std::vector& sizes_shape) { + const std::vector& sizes_shape, + bool use_contrib_qdq = false) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = 
GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["Resize"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 0); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 0); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); }; - TransformerTester(BuildQDQResizeTestCase(input1_shape, sizes_shape), + TransformerTester(BuildQDQResizeTestCase(input1_shape, + sizes_shape, + "nearest", // mode + "half_pixel", // coordinate_transformation_mode + "round_prefer_floor", // nearest_mode + false, // add_dq_output_float + use_contrib_qdq), check_graph, TransformerLevel::Level1, TransformerLevel::Level2); @@ -1116,13 +1255,15 @@ TEST(QDQTransformerTests, Resize) { RandomValueGenerator rand_gen{optional{2345}}; test_case({2, 13, 12, 37}, rand_gen.Uniform(std::vector{4}, 1, 16)); + test_case({2, 13, 12, 37}, rand_gen.Uniform(std::vector{4}, 1, 16), true /*use_contrib_qdq*/); } TEST(QDQTransformerTests, Resize_No_Fusion) { auto test_case = [&](const std::vector& input_shape, const std::vector& sizes_shape, const std::vector& concat_input2_shape, - const int64_t axis) { + const int64_t axis, + bool use_contrib_qdq = false) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, std::numeric_limits::min(), @@ -1134,7 +1275,7 @@ TEST(QDQTransformerTests, Resize_No_Fusion) { // add DQ auto* dq_output = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input_arg, .003f, 1, dq_output); + builder.AddDequantizeLinearNode(input_arg, .003f, 1, dq_output, use_contrib_qdq); // add Resize auto* resize_output = builder.MakeIntermediate(); @@ -1151,15 +1292,16 @@ TEST(QDQTransformerTests, Resize_No_Fusion) { concat_node.AddAttribute("axis", axis); // add Q - builder.AddQuantizeLinearNode(resize_output, .003f, 1, output_arg); + builder.AddQuantizeLinearNode(resize_output, .003f, 1, output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto 
op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["Resize"], 1); EXPECT_EQ(op_to_count["Concat"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); }; TransformerTester(build_test_case, check_graph, @@ -1168,11 +1310,13 @@ TEST(QDQTransformerTests, Resize_No_Fusion) { }; test_case({1, 8, 64, 64}, {4}, {1, 4, 128, 128}, 1); + test_case({1, 8, 64, 64}, {4}, {1, 4, 128, 128}, 1, true /*use_contrib_qdq*/); } TEST(QDQTransformerTests, ResizeReshapeSqueezeUnsqueeze) { auto test_case = [&](const std::vector& input_shape, - const std::vector& sizes_shape) { + const std::vector& sizes_shape, + bool use_contrib_qdq = false) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, std::numeric_limits::min(), @@ -1182,38 +1326,39 @@ TEST(QDQTransformerTests, ResizeReshapeSqueezeUnsqueeze) { auto* sizes = builder.MakeInitializer(sizes_shape, {1, 2, 52, 82}); // add QDQ + Resize - auto* qdq_input = AddQDQNodePair(builder, input_arg, .003f, 1); + auto* qdq_input = AddQDQNodePair(builder, input_arg, .003f, 1, use_contrib_qdq); auto* resize_output = builder.MakeIntermediate(); builder.AddNode("Resize", {qdq_input, roi, scales, sizes}, {resize_output}); // add QDQ + Reshape - auto* qdq_resize_output = AddQDQNodePair(builder, resize_output, .003f, 1); + auto* qdq_resize_output = AddQDQNodePair(builder, resize_output, .003f, 1, use_contrib_qdq); auto* reshape_shape = builder.Make1DInitializer({1, 2, 52, 82}); auto* reshape_output = builder.MakeIntermediate(); builder.AddNode("Reshape", {qdq_resize_output, reshape_shape}, {reshape_output}); // add QDQ + Squeeze - auto* qdq_squeeze_output = AddQDQNodePair(builder, reshape_output, .003f, 1); + auto* qdq_squeeze_output = AddQDQNodePair(builder, 
reshape_output, .003f, 1, use_contrib_qdq); auto* squeeze_axes = builder.Make1DInitializer({0}); auto* squeeze_output = builder.MakeIntermediate(); builder.AddNode("Squeeze", {qdq_squeeze_output, squeeze_axes}, {squeeze_output}); // add QDQ + Unsqueeze - auto* qdq_unsqueeze_output = AddQDQNodePair(builder, squeeze_output, .003f, 1); + auto* qdq_unsqueeze_output = AddQDQNodePair(builder, squeeze_output, .003f, 1, use_contrib_qdq); auto* unsqueeze_axes = builder.Make1DInitializer({0}); auto* unsqueeze_output = builder.MakeIntermediate(); builder.AddNode("Unsqueeze", {qdq_unsqueeze_output, unsqueeze_axes}, {unsqueeze_output}); // add QDQ - AddQDQNodePairWithOutputAsGraphOutput(builder, unsqueeze_output, .003f, 1); + AddQDQNodePairWithOutputAsGraphOutput(builder, unsqueeze_output, .003f, 1, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["Resize"], 1); EXPECT_EQ(op_to_count["Reshape"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); }; TransformerTester(build_test_case, check_graph, @@ -1228,13 +1373,15 @@ TEST(QDQTransformerTests, ResizeReshapeSqueezeUnsqueeze) { }; test_case({1, 2, 26, 42}, {4}); + test_case({1, 2, 26, 42}, {4}, true /*use_contrib_qdq*/); } TEST(QDQTransformerTests, ArgMax) { auto test_case = [&](const std::vector& input_shape, int axis, int keepdims, - int select_last_index) { + int select_last_index, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, std::numeric_limits::min(), @@ -1243,7 +1390,7 @@ TEST(QDQTransformerTests, ArgMax) { // add DQ auto* dq_output = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input_arg, .003f, 
1, dq_output); + builder.AddDequantizeLinearNode(input_arg, .003f, 1, dq_output, use_contrib_qdq); // add ArgMax Node& argmax_node = builder.AddNode("ArgMax", {dq_output}, {output_arg}); @@ -1254,8 +1401,9 @@ TEST(QDQTransformerTests, ArgMax) { auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["ArgMax"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); }; TransformerTester(build_test_case, check_graph, @@ -1268,13 +1416,15 @@ TEST(QDQTransformerTests, ArgMax) { /* opset_version */ 19); }; - test_case({2, 13, 12, 37}, 1, 0, 0); - test_case({2, 13, 12, 37}, 0, 1, 0); - test_case({2, 13, 12, 37}, 0, 0, 1); + test_case({2, 13, 12, 37}, 1, 0, 0, false /*use_contrib_qdq*/); + test_case({2, 13, 12, 37}, 1, 0, 0, true /*use_contrib_qdq*/); + test_case({2, 13, 12, 37}, 0, 1, 0, false /*use_contrib_qdq*/); + test_case({2, 13, 12, 37}, 0, 0, 1, false /*use_contrib_qdq*/); } TEST(QDQTransformerTests, QLinearMatMul) { - auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape) { + auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input1_arg = builder.MakeInput(input1_shape, -1.f, 1.f); auto* input2_arg = builder.MakeInput(input2_shape, -1.f, 1.f); @@ -1282,31 +1432,34 @@ TEST(QDQTransformerTests, QLinearMatMul) { // add QDQ + MatMul auto* matmul_output = builder.MakeIntermediate(); - auto* dq_matmul_output1 = AddQDQNodePair(builder, input1_arg, .004f, 129); - auto* dq_matmul_output2 = AddQDQNodePair(builder, input2_arg, .004f, 129); + auto* dq_matmul_output1 = AddQDQNodePair(builder, input1_arg, .004f, 129, use_contrib_qdq); + auto* dq_matmul_output2 = AddQDQNodePair(builder, input2_arg, .004f, 129, use_contrib_qdq); 
builder.AddNode("MatMul", {dq_matmul_output1, dq_matmul_output2}, {matmul_output}); // add Q - builder.AddQuantizeLinearNode(matmul_output, .0039f, 135, output_arg); + builder.AddQuantizeLinearNode(matmul_output, .0039f, 135, output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["QLinearMatMul"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); }; TransformerTester(build_test_case, check_graph, TransformerLevel::Level1, TransformerLevel::Level2); }; - test_case({12, 37}, {37, 12}); - test_case({23, 13, 13}, {13, 13}); - test_case({22, 11, 13, 15}, {15, 13}); + test_case({12, 37}, {37, 12}, false /*use_contrib_qdq*/); + test_case({12, 37}, {37, 12}, true /*use_contrib_qdq*/); + test_case({23, 13, 13}, {13, 13}, false /*use_contrib_qdq*/); + test_case({22, 11, 13, 15}, {15, 13}, false /*use_contrib_qdq*/); } TEST(QDQTransformerTests, MatMul_No_Fusion) { - auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape) { + auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input1_arg = builder.MakeInput(input1_shape, -1.f, 1.f); auto* input2_arg = builder.MakeInput(input2_shape, -1.f, 1.f); @@ -1314,31 +1467,34 @@ TEST(QDQTransformerTests, MatMul_No_Fusion) { // add QDQ + MatMul auto* matmul_output = builder.MakeIntermediate(); - auto* dq_matmul_output1 = AddQDQNodePair(builder, input1_arg, .004f, 129); + auto* dq_matmul_output1 = AddQDQNodePair(builder, input1_arg, .004f, 129, use_contrib_qdq); builder.AddNode("MatMul", {dq_matmul_output1, input2_arg}, {matmul_output}); // add Q - 
builder.AddQuantizeLinearNode(matmul_output, .0039f, 135, output_arg); + builder.AddQuantizeLinearNode(matmul_output, .0039f, 135, output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["MatMul"], 1); EXPECT_EQ(op_to_count["QLinearMatMul"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); }; TransformerTester(build_test_case, check_graph, TransformerLevel::Level1, TransformerLevel::Level2); }; - test_case({12, 37}, {37, 12}); - test_case({23, 13, 13}, {13, 13}); - test_case({22, 11, 13, 15}, {15, 13}); + test_case({12, 37}, {37, 12}, false /*use_contrib_qdq*/); + test_case({12, 37}, {37, 12}, true /*use_contrib_qdq*/); + test_case({23, 13, 13}, {13, 13}, false /*use_contrib_qdq*/); + test_case({22, 11, 13, 15}, {15, 13}, false /*use_contrib_qdq*/); } TEST(QDQTransformerTests, MatMul_1st_Input_Int8) { - auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape) { + auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input1_arg = builder.MakeInput(input1_shape, -128, 127); auto* input2_arg = builder.MakeInput(input2_shape, -1.f, 1.f); @@ -1346,35 +1502,38 @@ TEST(QDQTransformerTests, MatMul_1st_Input_Int8) { // add DQ with type int8 auto* dq_output_1 = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input1_arg, .004f, 1, dq_output_1); + builder.AddDequantizeLinearNode(input1_arg, .004f, 1, dq_output_1, use_contrib_qdq); // add QDQ + MatMul auto* matmul_output = builder.MakeIntermediate(); - auto* dq_matmul_output2 = AddQDQNodePair(builder, input2_arg, .004f, 129); + auto* 
dq_matmul_output2 = AddQDQNodePair(builder, input2_arg, .004f, 129, use_contrib_qdq); builder.AddNode("MatMul", {dq_output_1, dq_matmul_output2}, {matmul_output}); // add Q - builder.AddQuantizeLinearNode(matmul_output, .0039f, 135, output_arg); + builder.AddQuantizeLinearNode(matmul_output, .0039f, 135, output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["MatMul"], 1); EXPECT_EQ(op_to_count["QLinearMatMul"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 2); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2); }; TransformerTester(build_test_case, check_graph, TransformerLevel::Level1, TransformerLevel::Level2); }; - test_case({12, 37}, {37, 12}); - test_case({23, 13, 13}, {13, 13}); - test_case({22, 11, 13, 15}, {15, 13}); + test_case({12, 37}, {37, 12}, false /*use_contrib_qdq*/); + test_case({12, 37}, {37, 12}, true /*use_contrib_qdq*/); + test_case({23, 13, 13}, {13, 13}, false /*use_contrib_qdq*/); + test_case({22, 11, 13, 15}, {15, 13}, false /*use_contrib_qdq*/); } TEST(QDQTransformerTests, MatMulIntegerToFloat) { - auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape) { + auto test_case = [&](const std::vector& input1_shape, const std::vector& input2_shape, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input1_arg = builder.MakeInput(input1_shape, std::numeric_limits::min(), @@ -1386,19 +1545,20 @@ TEST(QDQTransformerTests, MatMulIntegerToFloat) { // add DQ auto* dq_output_1 = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input1_arg, .0035f, 135, dq_output_1); + builder.AddDequantizeLinearNode(input1_arg, .0035f, 135, dq_output_1, use_contrib_qdq); auto* dq_output_2 = 
builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input2_arg, .0035f, 135, dq_output_2); + builder.AddDequantizeLinearNode(input2_arg, .0035f, 135, dq_output_2, use_contrib_qdq); builder.AddNode("MatMul", {dq_output_1, dq_output_2}, {output_arg}); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["com.microsoft.MatMulIntegerToFloat"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 0); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 0); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); }; TransformerTester(build_test_case, @@ -1417,13 +1577,15 @@ TEST(QDQTransformerTests, MatMulIntegerToFloat) { 1e-5 /*relative_per_sample_tolerance*/); }; - test_case({12, 37}, {37, 12}); - test_case({23, 13, 13}, {13, 13}); - test_case({22, 11, 13, 15}, {15, 13}); + test_case({12, 37}, {37, 12}, false /*use_contrib_qdq*/); + test_case({12, 37}, {37, 12}, true /*use_contrib_qdq*/); + test_case({23, 13, 13}, {13, 13}, false /*use_contrib_qdq*/); + test_case({22, 11, 13, 15}, {15, 13}, false /*use_contrib_qdq*/); } TEST(QDQTransformerTests, ConvRelu) { - auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, bool is_zp_zero) { + auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, + bool is_zp_zero, bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); @@ -1432,8 +1594,8 @@ TEST(QDQTransformerTests, ConvRelu) { // add QDQ + Conv auto* dq_w_output = builder.MakeIntermediate(); auto* conv_output = builder.MakeIntermediate(); - auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 129); - builder.AddDequantizeLinearNode(weight, .003f, 118, dq_w_output); + auto* 
dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 129, use_contrib_qdq); + builder.AddDequantizeLinearNode(weight, .003f, 118, dq_w_output, use_contrib_qdq); builder.AddConvNode(dq_conv_output, dq_w_output, conv_output); // add Relu @@ -1441,40 +1603,44 @@ TEST(QDQTransformerTests, ConvRelu) { builder.AddNode("Relu", {conv_output}, {relu_output}); // add Q - builder.AddQuantizeLinearNode(relu_output, .0039f, is_zp_zero ? 0 : 1, output_arg); + builder.AddQuantizeLinearNode(relu_output, .0039f, is_zp_zero ? 0 : 1, output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); if (is_zp_zero) { EXPECT_EQ(op_to_count["QLinearConv"], 1); EXPECT_EQ(op_to_count["Conv"], 0); EXPECT_EQ(op_to_count["Relu"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); } else { EXPECT_EQ(op_to_count["QLinearConv"], 0); EXPECT_EQ(op_to_count["Conv"], 0); EXPECT_EQ(op_to_count["Relu"], 0); EXPECT_EQ(op_to_count["com.microsoft.FusedConv"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 2); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2); } }; TransformerTester(build_test_case, check_graph, TransformerLevel::Level1, TransformerLevel::Level2); }; - test_case({1, 12, 37}, {32, 12, 5}, true); - test_case({1, 12, 37}, {32, 12, 5}, false); - test_case({1, 23, 13, 13}, {30, 23, 3, 3}, true); - test_case({1, 23, 13, 13}, {30, 23, 3, 3}, false); - test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, true); - test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, false); + test_case({1, 12, 37}, {32, 12, 5}, true, false /*use_contrib_qdq*/); + test_case({1, 12, 37}, {32, 12, 5}, true, 
true /*use_contrib_qdq*/); + test_case({1, 12, 37}, {32, 12, 5}, false, false /*use_contrib_qdq*/); + test_case({1, 12, 37}, {32, 12, 5}, false, true /*use_contrib_qdq*/); + test_case({1, 23, 13, 13}, {30, 23, 3, 3}, true, false /*use_contrib_qdq*/); + test_case({1, 23, 13, 13}, {30, 23, 3, 3}, false, false /*use_contrib_qdq*/); + test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, true, false /*use_contrib_qdq*/); + test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, false, false /*use_contrib_qdq*/); } TEST(QDQTransformerTests, ConvAveragePoolReshape_UInt8) { - auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape) { + auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); @@ -1483,12 +1649,12 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_UInt8) { // add QDQ + Conv auto* dq_w_output = builder.MakeIntermediate(); auto* conv_output = builder.MakeIntermediate(); - auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 129); - builder.AddDequantizeLinearNode(weight, .003f, 118, dq_w_output); + auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 129, use_contrib_qdq); + builder.AddDequantizeLinearNode(weight, .003f, 118, dq_w_output, use_contrib_qdq); builder.AddConvNode(dq_conv_output, dq_w_output, conv_output); // add QDQ + AveragePool - auto* dq_averagepool_output = AddQDQNodePair(builder, conv_output, .0035f, 135); + auto* dq_averagepool_output = AddQDQNodePair(builder, conv_output, .0035f, 135, use_contrib_qdq); auto* averagepool_output = builder.MakeIntermediate(); Node& pool_node = builder.AddNode("AveragePool", {dq_averagepool_output}, {averagepool_output}); std::vector pads((weights_shape.size() - 2) * 2, 1); @@ -1497,24 +1663,25 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_UInt8) { 
pool_node.AddAttribute("kernel_shape", kernel_shape); // add QDQ + Reshape - auto* dq_reshape_output = AddQDQNodePair(builder, averagepool_output, .0035f, 135); + auto* dq_reshape_output = AddQDQNodePair(builder, averagepool_output, .0035f, 135, use_contrib_qdq); auto* reshape_shape = builder.Make1DInitializer({-1}); auto* reshape_output = builder.MakeIntermediate(); builder.AddNode("Reshape", {dq_reshape_output, reshape_shape}, {reshape_output}); // add Q auto* q_output = builder.MakeIntermediate(); - builder.AddQuantizeLinearNode(reshape_output, .0035f, 135, q_output); - builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg); + builder.AddQuantizeLinearNode(reshape_output, .0035f, 135, q_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["QLinearConv"], 1); EXPECT_EQ(op_to_count["com.microsoft.QLinearAveragePool"], 1); EXPECT_EQ(op_to_count["Reshape"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); }; TransformerTester(build_test_case, @@ -1534,13 +1701,15 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_UInt8) { // TODO: fix opset 19 }; - test_case({1, 12, 37}, {32, 12, 5}); - test_case({1, 23, 13, 13}, {30, 23, 3, 3}); - test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}); + test_case({1, 12, 37}, {32, 12, 5}, false /*use_contrib_qdq*/); + test_case({1, 23, 13, 13}, {30, 23, 3, 3}, false /*use_contrib_qdq*/); + test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, false /*use_contrib_qdq*/); + test_case({1, 12, 37}, {32, 12, 5}, true /*use_contrib_qdq*/); } TEST(QDQTransformerTests, ConvAveragePoolReshape_Int8) { - auto test_case = 
[&](const std::vector& input_shape, const std::vector& weights_shape) { + auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); @@ -1549,12 +1718,12 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_Int8) { // add QDQ + Conv auto* dq_w_output = builder.MakeIntermediate(); auto* conv_output = builder.MakeIntermediate(); - auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 1); - builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output); + auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 1, use_contrib_qdq); + builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output, use_contrib_qdq); builder.AddConvNode(dq_conv_output, dq_w_output, conv_output); // add QDQ + AveragePool - auto* dq_averagepool_output = AddQDQNodePair(builder, conv_output, .0035f, 7); + auto* dq_averagepool_output = AddQDQNodePair(builder, conv_output, .0035f, 7, use_contrib_qdq); auto* averagepool_output = builder.MakeIntermediate(); Node& pool_node = builder.AddNode("AveragePool", {dq_averagepool_output}, {averagepool_output}); std::vector pads((weights_shape.size() - 2) * 2, 1); @@ -1563,7 +1732,7 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_Int8) { pool_node.AddAttribute("kernel_shape", kernel_shape); // add QDQ + Reshape - auto* dq_reshape_output = AddQDQNodePair(builder, averagepool_output, .0035f, 7); + auto* dq_reshape_output = AddQDQNodePair(builder, averagepool_output, .0035f, 7, use_contrib_qdq); auto* reshape_shape = builder.Make1DInitializer({-1}); auto* reshape_output = builder.MakeIntermediate(); builder.AddNode("Reshape", {dq_reshape_output, reshape_shape}, {reshape_output}); @@ -1571,21 +1740,22 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_Int8) { // add Q auto* q_output = builder.MakeIntermediate(); if constexpr 
(QDQIsInt8Allowed()) { - builder.AddQuantizeLinearNode(reshape_output, .0035f, 7, q_output); - builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg); + builder.AddQuantizeLinearNode(reshape_output, .0035f, 7, q_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg, use_contrib_qdq); } else { - builder.AddQuantizeLinearNode(reshape_output, .0035f, 135, q_output); - builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg); + builder.AddQuantizeLinearNode(reshape_output, .0035f, 135, q_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg, use_contrib_qdq); } }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["QLinearConv"], 1); EXPECT_EQ(op_to_count["com.microsoft.QLinearAveragePool"], 1); EXPECT_EQ(op_to_count["Reshape"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); }; TransformerTester(build_test_case, @@ -1605,13 +1775,15 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_Int8) { // TODO: fix opset 19 }; - test_case({1, 12, 37}, {32, 12, 5}); - test_case({1, 23, 13, 13}, {30, 23, 3, 3}); - test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}); + test_case({1, 12, 37}, {32, 12, 5}, false /*use_contrib_qdq*/); + test_case({1, 23, 13, 13}, {30, 23, 3, 3}, false /*use_contrib_qdq*/); + test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, false /*use_contrib_qdq*/); + test_case({1, 12, 37}, {32, 12, 5}, true /*use_contrib_qdq*/); } TEST(QDQTransformerTests, ConvAveragePoolReshape_Int8_Fail) { - auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape) { + auto test_case = [&](const std::vector& input_shape, const std::vector& 
weights_shape, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -128, 127); auto* output_arg = builder.MakeOutput(); @@ -1621,12 +1793,12 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_Int8_Fail) { auto* dq_output = builder.MakeIntermediate(); auto* dq_w_output = builder.MakeIntermediate(); auto* conv_output = builder.MakeIntermediate(); - builder.AddDequantizeLinearNode(input_arg, .004f, 1, dq_output); - builder.AddDequantizeLinearNode(weight, .003f, 118, dq_w_output); + builder.AddDequantizeLinearNode(input_arg, .004f, 1, dq_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(weight, .003f, 118, dq_w_output, use_contrib_qdq); builder.AddConvNode(dq_output, dq_w_output, conv_output); // add QDQ + AveragePool - auto* dq_averagepool_output = AddQDQNodePair(builder, conv_output, .0035f, 7); + auto* dq_averagepool_output = AddQDQNodePair(builder, conv_output, .0035f, 7, use_contrib_qdq); auto* averagepool_output = builder.MakeIntermediate(); Node& pool_node = builder.AddNode("AveragePool", {dq_averagepool_output}, {averagepool_output}); std::vector pads((weights_shape.size() - 2) * 2, 1); @@ -1635,7 +1807,7 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_Int8_Fail) { pool_node.AddAttribute("kernel_shape", kernel_shape); // add QDQ + Reshape - auto* dq_reshape_output = AddQDQNodePair(builder, averagepool_output, .0035f, 7); + auto* dq_reshape_output = AddQDQNodePair(builder, averagepool_output, .0035f, 7, use_contrib_qdq); auto* reshape_shape = builder.Make1DInitializer({-1}); auto* reshape_output = builder.MakeIntermediate(); builder.AddNode("Reshape", {dq_reshape_output, reshape_shape}, {reshape_output}); @@ -1643,22 +1815,23 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_Int8_Fail) { // add Q + DQ auto* q_output = builder.MakeIntermediate(); if constexpr (QDQIsInt8Allowed()) { - builder.AddQuantizeLinearNode(reshape_output, .0035f, 7, q_output); - 
builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg); + builder.AddQuantizeLinearNode(reshape_output, .0035f, 7, q_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg, use_contrib_qdq); } else { - builder.AddQuantizeLinearNode(reshape_output, .0035f, 135, q_output); - builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg); + builder.AddQuantizeLinearNode(reshape_output, .0035f, 135, q_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg, use_contrib_qdq); } }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["Conv"], 1); EXPECT_EQ(op_to_count["QLinearConv"], 0); EXPECT_EQ(op_to_count["com.microsoft.QLinearAveragePool"], 1); EXPECT_EQ(op_to_count["Reshape"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 3); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 3); }; // TODO: fix opset 19 @@ -1671,19 +1844,20 @@ TEST(QDQTransformerTests, ConvAveragePoolReshape_Int8_Fail) { 0.01f /*relative_per_sample_tolerance*/); }; - test_case({1, 12, 37}, {32, 12, 5}); - test_case({1, 23, 13, 13}, {30, 23, 3, 3}); - test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}); + test_case({1, 12, 37}, {32, 12, 5}, false /*use_contrib_qdq*/); + test_case({1, 23, 13, 13}, {30, 23, 3, 3}, false /*use_contrib_qdq*/); + test_case({1, 22, 11, 13, 15}, {30, 22, 5, 3, 3}, false /*use_contrib_qdq*/); + test_case({1, 12, 37}, {32, 12, 5}, true /*use_contrib_qdq*/); } template void QDQTransformerLeakyReluTests() { - auto test_case = [&](const std::vector& input_shape) { + auto test_case = [&](const std::vector& input_shape, bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, 
-1.f, 1.f); auto* output_arg = builder.MakeOutput(); // add QDQ + LeakyRelu - auto* dq_output = AddQDQNodePair(builder, input_arg, .0035f, 7); + auto* dq_output = AddQDQNodePair(builder, input_arg, .0035f, 7, use_contrib_qdq); auto* leakyrelu_output = builder.MakeIntermediate(); Node& leakyrelu_node = builder.AddNode("LeakyRelu", {dq_output}, {leakyrelu_output}); leakyrelu_node.AddAttribute("alpha", 0.2f); @@ -1693,25 +1867,26 @@ void QDQTransformerLeakyReluTests() { builder.AddQuantizeLinearNode(leakyrelu_output, .0038f, std::numeric_limits::max() / 2, - q_output); + q_output, use_contrib_qdq); builder.AddDequantizeLinearNode(q_output, .0039f, std::numeric_limits::max() / 2, - output_arg); + output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); if constexpr (std::is_same::value) { EXPECT_EQ(op_to_count["com.microsoft.QLinearLeakyRelu"], 1); EXPECT_EQ(op_to_count["LeakyRelu"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); } else { EXPECT_EQ(op_to_count["com.microsoft.QLinearLeakyRelu"], 0); EXPECT_EQ(op_to_count["LeakyRelu"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 2); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2); } }; @@ -1741,9 +1916,10 @@ void QDQTransformerLeakyReluTests() { std::make_unique(QDQIsInt8Allowed())); }; - test_case({1, 12, 37}); - test_case({1, 23, 13, 13}); - test_case({1, 22, 11, 13, 15}); + test_case({1, 12, 37}, false /*use_contrib_qdq*/); + test_case({1, 12, 37}, true /*use_contrib_qdq*/); + test_case({1, 23, 13, 13}, false /*use_contrib_qdq*/); + test_case({1, 22, 11, 13, 15}, false /*use_contrib_qdq*/); } 
TEST(QDQTransformerTests, LeakyRelu_S8S8) { @@ -1764,12 +1940,12 @@ TEST(QDQTransformerTests, LeakyRelu_U8S8) { template void QDQTransformerSigmoidTests() { - auto test_case = [&](const std::vector& input_shape) { + auto test_case = [&](const std::vector& input_shape, bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); // add QDQ + Sigmoid - auto* dq_output = AddQDQNodePair(builder, input_arg, .0035f, 7); + auto* dq_output = AddQDQNodePair(builder, input_arg, .0035f, 7, use_contrib_qdq); auto* sigmoid_output = builder.MakeIntermediate(); builder.AddNode("Sigmoid", {dq_output}, {sigmoid_output}); @@ -1778,25 +1954,26 @@ void QDQTransformerSigmoidTests() { builder.AddQuantizeLinearNode(sigmoid_output, .0038f, std::numeric_limits::max() / 2, - q_output); + q_output, use_contrib_qdq); builder.AddDequantizeLinearNode(q_output, .0039f, std::numeric_limits::max() / 2, - output_arg); + output_arg, use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); if constexpr (std::is_same::value) { EXPECT_EQ(op_to_count["com.microsoft.QLinearSigmoid"], 1); EXPECT_EQ(op_to_count["Sigmoid"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); } else { EXPECT_EQ(op_to_count["com.microsoft.QLinearSigmoid"], 0); EXPECT_EQ(op_to_count["Sigmoid"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 2); - EXPECT_EQ(op_to_count["DequantizeLinear"], 2); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2); } }; @@ -1826,9 +2003,10 @@ void QDQTransformerSigmoidTests() { std::make_unique(QDQIsInt8Allowed())); }; - 
test_case({1, 12, 37}); - test_case({1, 23, 13, 13}); - test_case({1, 22, 11, 13, 15}); + test_case({1, 12, 37}, false /*use_contrib_qdq*/); + test_case({1, 12, 37}, true /*use_contrib_qdq*/); + test_case({1, 23, 13, 13}, false /*use_contrib_qdq*/); + test_case({1, 22, 11, 13, 15}, false /*use_contrib_qdq*/); } TEST(QDQTransformerTests, Sigmoid_S8S8) { @@ -1848,7 +2026,8 @@ TEST(QDQTransformerTests, Sigmoid_U8S8) { } TEST(QDQTransformerTests, ConvTranspose_QBackward) { - auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, const std::vector& perms) { + auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, + const std::vector& perms, bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); @@ -1857,8 +2036,8 @@ TEST(QDQTransformerTests, ConvTranspose_QBackward) { // add QDQ + Conv auto* dq_w_output = builder.MakeIntermediate(); auto* conv_output = builder.MakeIntermediate(); - auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 1); - builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output); + auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 1, use_contrib_qdq); + builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output, use_contrib_qdq); builder.AddConvNode(dq_conv_output, dq_w_output, conv_output); // add Transpose @@ -1869,20 +2048,21 @@ TEST(QDQTransformerTests, ConvTranspose_QBackward) { // add Q auto* q_output = builder.MakeIntermediate(); if constexpr (QDQIsInt8Allowed()) { - builder.AddQuantizeLinearNode(transpose_output, .0035f, 7, q_output); - builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg); + builder.AddQuantizeLinearNode(transpose_output, .0035f, 7, q_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg, use_contrib_qdq); } else { - 
builder.AddQuantizeLinearNode(transpose_output, .0035f, 135, q_output); - builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg); + builder.AddQuantizeLinearNode(transpose_output, .0035f, 135, q_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg, use_contrib_qdq); } }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); EXPECT_EQ(op_to_count["QLinearConv"], 1); EXPECT_EQ(op_to_count["Transpose"], 1); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 1); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1); }; TransformerTester(build_test_case, @@ -1891,11 +2071,13 @@ TEST(QDQTransformerTests, ConvTranspose_QBackward) { TransformerLevel::Level2); }; - test_case({1, 23, 13, 13}, {30, 23, 3, 3}, {0, 3, 1, 2}); + test_case({1, 23, 13, 13}, {30, 23, 3, 3}, {0, 3, 1, 2}, false /*use_contrib_qdq*/); + test_case({1, 23, 13, 13}, {30, 23, 3, 3}, {0, 3, 1, 2}, true /*use_contrib_qdq*/); } TEST(QDQTransformerTests, QBackward_MutilpleSteps) { - auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape) { + auto test_case = [&](const std::vector& input_shape, const std::vector& weights_shape, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -1.f, 1.f); auto* output_arg = builder.MakeOutput(); @@ -1904,8 +2086,8 @@ TEST(QDQTransformerTests, QBackward_MutilpleSteps) { // add QDQ + Conv auto* dq_w_output = builder.MakeIntermediate(); auto* conv_output = builder.MakeIntermediate(); - auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 1); - builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output); + auto* dq_conv_output = AddQDQNodePair(builder, input_arg, .004f, 1, 
                                       use_contrib_qdq);
+      builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output, use_contrib_qdq);
       builder.AddConvNode(dq_conv_output, dq_w_output, conv_output);

       // add MaxPool
@@ -1939,22 +2121,23 @@ TEST(QDQTransformerTests, QBackward_MutilpleSteps) {
       // add Q + DQ
       auto* q_output = builder.MakeIntermediate();
       if constexpr (QDQIsInt8Allowed()) {
-        builder.AddQuantizeLinearNode(squeeze_output, .0035f, 7, q_output);
-        builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg);
+        builder.AddQuantizeLinearNode(squeeze_output, .0035f, 7, q_output, use_contrib_qdq);
+        builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg, use_contrib_qdq);
       } else {
-        builder.AddQuantizeLinearNode(squeeze_output, .0035f, 135, q_output);
-        builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg);
+        builder.AddQuantizeLinearNode(squeeze_output, .0035f, 135, q_output, use_contrib_qdq);
+        builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg, use_contrib_qdq);
       }
     };

     auto check_graph = [&](InferenceSessionWrapper& session) {
       auto op_to_count = CountOpsInGraph(session.GetGraph());
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       EXPECT_EQ(op_to_count["QLinearConv"], 1);
       EXPECT_EQ(op_to_count["MaxPool"], 1);
       EXPECT_EQ(op_to_count["Reshape"], 1);
       EXPECT_EQ(op_to_count["Transpose"], 1);
-      EXPECT_EQ(op_to_count["QuantizeLinear"], 1);
-      EXPECT_EQ(op_to_count["DequantizeLinear"], 1);
+      EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1);
+      EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1);
     };

     TransformerTester(build_test_case,
@@ -1970,18 +2153,20 @@ TEST(QDQTransformerTests, QBackward_MutilpleSteps) {
     // TODO: fix opset 19
   };

-  test_case({1, 23, 13, 13}, {30, 23, 3, 3});
+  test_case({1, 23, 13, 13}, {30, 23, 3, 3}, false /*use_contrib_qdq*/);
+  test_case({1, 23, 13, 13}, {30, 23, 3, 3}, true /*use_contrib_qdq*/);
 }

 TEST(QDQTransformerTests, ConvTranspose_DQForward) {
-  auto test_case = [&](const std::vector<int64_t>& input_shape, const std::vector<int64_t>& weights_shape, const std::vector<int64_t>& perms) {
+  auto test_case = [&](const std::vector<int64_t>& input_shape, const std::vector<int64_t>& weights_shape,
+                       const std::vector<int64_t>& perms, bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<float>(input_shape, -1.f, 1.f);
       auto* output_arg = builder.MakeOutput();
       auto* weight = builder.MakeInitializer<int8_t>(weights_shape, -64, 64);

       // add QDQ
-      auto* dq_output = AddQDQNodePair<uint8_t>(builder, input_arg, .004f, 1);
+      auto* dq_output = AddQDQNodePair<uint8_t>(builder, input_arg, .004f, 1, use_contrib_qdq);

       // add Transpose
       auto* transpose_output = builder.MakeIntermediate();
@@ -1991,26 +2176,27 @@ TEST(QDQTransformerTests, ConvTranspose_DQForward) {
       // add Conv
       auto* dq_w_output = builder.MakeIntermediate();
       auto* conv_output = builder.MakeIntermediate();
-      builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output);
+      builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output, use_contrib_qdq);
       builder.AddConvNode(transpose_output, dq_w_output, conv_output);

       // add Q
       auto* q_output = builder.MakeIntermediate();
       if constexpr (QDQIsInt8Allowed()) {
-        builder.AddQuantizeLinearNode(conv_output, .0035f, 7, q_output);
-        builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg);
+        builder.AddQuantizeLinearNode(conv_output, .0035f, 7, q_output, use_contrib_qdq);
+        builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg, use_contrib_qdq);
       } else {
-        builder.AddQuantizeLinearNode(conv_output, .0035f, 135, q_output);
-        builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg);
+        builder.AddQuantizeLinearNode(conv_output, .0035f, 135, q_output, use_contrib_qdq);
+        builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg, use_contrib_qdq);
       }
     };

     auto check_graph = [&](InferenceSessionWrapper& session) {
       auto op_to_count = CountOpsInGraph(session.GetGraph());
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       EXPECT_EQ(op_to_count["QLinearConv"], 1);
       EXPECT_EQ(op_to_count["Transpose"], 1);
-      EXPECT_EQ(op_to_count["QuantizeLinear"], 1);
-      EXPECT_EQ(op_to_count["DequantizeLinear"], 1);
+      EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1);
+      EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1);
     };

     TransformerTester(build_test_case,
@@ -2033,18 +2219,20 @@ TEST(QDQTransformerTests, ConvTranspose_DQForward) {
                       {"TransposeOptimizer"});  // disable TransposeOptimizer for simplicity
   };

-  test_case({1, 13, 13, 23}, {30, 23, 3, 3}, {0, 3, 1, 2});
+  test_case({1, 13, 13, 23}, {30, 23, 3, 3}, {0, 3, 1, 2}, false /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, {30, 23, 3, 3}, {0, 3, 1, 2}, true /*use_contrib_qdq*/);
 }

 TEST(QDQTransformerTests, DQForward_MutilpleSteps) {
-  auto test_case = [&](const std::vector<int64_t>& input_shape, const std::vector<int64_t>& weights_shape, const std::vector<int64_t>& perms) {
+  auto test_case = [&](const std::vector<int64_t>& input_shape, const std::vector<int64_t>& weights_shape,
+                       const std::vector<int64_t>& perms, bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<float>(input_shape, -1.f, 1.f);
       auto* output_arg = builder.MakeOutput();
       auto* weight = builder.MakeInitializer<int8_t>(weights_shape, -64, 64);

       // add Transpose
-      auto* qdq_output = AddQDQNodePair<uint8_t>(builder, input_arg, .004f, 1);
+      auto* qdq_output = AddQDQNodePair<uint8_t>(builder, input_arg, .004f, 1, use_contrib_qdq);
       auto* transpose_output = builder.MakeIntermediate();
       Node& transpose_node = builder.AddNode("Transpose", {qdq_output}, {transpose_output});
       transpose_node.AddAttribute("perm", perms);
@@ -2070,7 +2258,7 @@ TEST(QDQTransformerTests, DQForward_MutilpleSteps) {
       // add Conv
       auto* dq_w_output = builder.MakeIntermediate();
       auto* conv_output = builder.MakeIntermediate();
-      builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output);
+      builder.AddDequantizeLinearNode(weight, .003f, -10, dq_w_output, use_contrib_qdq);
       builder.AddConvNode(squeeze_output, dq_w_output, conv_output);

       // Reshape
@@ -2081,22 +2269,23 @@ TEST(QDQTransformerTests, DQForward_MutilpleSteps) {
       // add Q + DQ
       auto* q_output = builder.MakeIntermediate();
       if constexpr (QDQIsInt8Allowed()) {
-        builder.AddQuantizeLinearNode(reshape_output, .0035f, 7, q_output);
-        builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg);
+        builder.AddQuantizeLinearNode(reshape_output, .0035f, 7, q_output, use_contrib_qdq);
+        builder.AddDequantizeLinearNode(q_output, .0035f, 7, output_arg, use_contrib_qdq);
       } else {
-        builder.AddQuantizeLinearNode(reshape_output, .0035f, 135, q_output);
-        builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg);
+        builder.AddQuantizeLinearNode(reshape_output, .0035f, 135, q_output, use_contrib_qdq);
+        builder.AddDequantizeLinearNode(q_output, .0035f, 135, output_arg, use_contrib_qdq);
       }
     };

     auto check_graph = [&](InferenceSessionWrapper& session) {
       auto op_to_count = CountOpsInGraph(session.GetGraph());
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       EXPECT_EQ(op_to_count["QLinearConv"], 1);
       EXPECT_EQ(op_to_count["MaxPool"], 1);
       EXPECT_EQ(op_to_count["Reshape"], 1);
       EXPECT_EQ(op_to_count["Transpose"], 1);
-      EXPECT_EQ(op_to_count["QuantizeLinear"], 1);
-      EXPECT_EQ(op_to_count["DequantizeLinear"], 1);
+      EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1);
+      EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1);
     };

     TransformerTester(build_test_case,
@@ -2116,13 +2305,15 @@ TEST(QDQTransformerTests, DQForward_MutilpleSteps) {
     // TODO: fix opset 19
   };

-  test_case({1, 13, 13, 23}, {30, 23, 3, 3}, {0, 3, 1, 2});
+  test_case({1, 13, 13, 23}, {30, 23, 3, 3}, {0, 3, 1, 2}, false /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, {30, 23, 3, 3}, {0, 3, 1, 2}, true /*use_contrib_qdq*/);
 }

 TEST(QDQTransformerTests, Clip) {
   constexpr float epsilon = std::numeric_limits<float>::epsilon();

-  auto test_case = [&](float scale, auto zero_point, int clip_count, int opset_version) {
+  auto test_case = [&](float scale, auto zero_point, int clip_count, int opset_version,
+                       bool use_contrib_qdq = false) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<int8_t>({1, 32, 112, 112},
                                                   std::numeric_limits<int8_t>::min(),
@@ -2131,7 +2322,7 @@ TEST(QDQTransformerTests, Clip) {
       // add DQ
       auto* dq_output = builder.MakeIntermediate();
-      builder.AddDequantizeLinearNode(input_arg, .0035f, 7, dq_output);
+      builder.AddDequantizeLinearNode(input_arg, .0035f, 7, dq_output, use_contrib_qdq);

       // add Clip
       auto* clip_output = builder.MakeIntermediate();
@@ -2151,15 +2342,16 @@ TEST(QDQTransformerTests, Clip) {
       // add Q + DQ
       auto* q_output = builder.MakeIntermediate();
-      builder.AddQuantizeLinearNode(clip_output, scale, zero_point, q_output);
-      builder.AddDequantizeLinearNode(q_output, scale, zero_point, output_arg);
+      builder.AddQuantizeLinearNode(clip_output, scale, zero_point, q_output, use_contrib_qdq);
+      builder.AddDequantizeLinearNode(q_output, scale, zero_point, output_arg, use_contrib_qdq);
     };

     auto check_clip_graph = [&](InferenceSessionWrapper& session) {
       auto op_to_count = CountOpsInGraph(session.GetGraph());
-      EXPECT_EQ(op_to_count["QuantizeLinear"], 1);
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
+      EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1);
       EXPECT_EQ(op_to_count["Clip"], clip_count);
-      EXPECT_EQ(op_to_count["DequantizeLinear"], 2);
+      EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2);
     };

     TransformerTester(build_test_case, check_clip_graph,
@@ -2172,49 +2364,72 @@ TEST(QDQTransformerTests, Clip) {
   std::vector<int> opsets{12, 18, 19};
   for (auto opset : opsets) {
-    test_case(.0235294122248888f, static_cast<int8_t>(-128), 0, opset);  // [0, 6]
-    test_case(.02f, static_cast<int8_t>(-128), 0, opset);                // [0, 5.1]
-    test_case(.03f, static_cast<int8_t>(-128), 1, opset);                // [0, 7.65]
-    test_case(.02f, static_cast<int8_t>(127), 1, opset);                 // [-5.1 , 0]
-    test_case(.02f, static_cast<int8_t>(0), 1, opset);                   // [-2.56, 2.54]
-    test_case(.04f, static_cast<int8_t>(-97), 1, opset);                 // [-1.24, 8.96]
-    test_case(.02352941176f, static_cast<uint8_t>(0), 0, opset);         // [0, 6]
-    test_case(.02f, static_cast<uint8_t>(0), 0, opset);                  // [0, 5.1]
-    test_case(.03f, static_cast<uint8_t>(0), 1, opset);                  // [0, 7.65]
-    test_case(.02f, static_cast<uint8_t>(255), 1, opset);                // [-5.1, 0]
-    test_case(.02f, static_cast<uint8_t>(128), 1, opset);                // [-2.56, 2.54]
-    test_case(.04f, static_cast<uint8_t>(31), 1, opset);                 // [-1.24, 8.96]
+    test_case(.0235294122248888f, static_cast<int8_t>(-128), 0, opset);        // [0, 6]
+    test_case(.0235294122248888f, static_cast<int8_t>(-128), 0, opset, true);  // [0, 6] contrib qdq
+    test_case(.02f, static_cast<int8_t>(-128), 0, opset);                      // [0, 5.1]
+    test_case(.02f, static_cast<int8_t>(-128), 0, opset, true);                // [0, 5.1] contrib qdq
+    test_case(.03f, static_cast<int8_t>(-128), 1, opset);                      // [0, 7.65]
+    test_case(.03f, static_cast<int8_t>(-128), 1, opset, true);                // [0, 7.65] contrib qdq
+    test_case(.02f, static_cast<int8_t>(127), 1, opset);                       // [-5.1 , 0]
+    test_case(.02f, static_cast<int8_t>(127), 1, opset, true);                 // [-5.1 , 0] contrib qdq
+    test_case(.02f, static_cast<int8_t>(0), 1, opset);                         // [-2.56, 2.54]
+    test_case(.02f, static_cast<int8_t>(0), 1, opset, true);                   // [-2.56, 2.54] contrib qdq
+    test_case(.04f, static_cast<int8_t>(-97), 1, opset);                       // [-1.24, 8.96]
+    test_case(.04f, static_cast<int8_t>(-97), 1, opset, true);                 // [-1.24, 8.96] contrib qdq
+    test_case(.02352941176f, static_cast<uint8_t>(0), 0, opset);               // [0, 6]
+    test_case(.02352941176f, static_cast<uint8_t>(0), 0, opset, true);         // [0, 6] contrib qdq
+    test_case(.02f, static_cast<uint8_t>(0), 0, opset);                        // [0, 5.1]
+    test_case(.02f, static_cast<uint8_t>(0), 0, opset, true);                  // [0, 5.1] contrib qdq
+    test_case(.03f, static_cast<uint8_t>(0), 1, opset);                        // [0, 7.65]
+    test_case(.03f, static_cast<uint8_t>(0), 1, opset, true);                  // [0, 7.65] contrib qdq
+    test_case(.02f, static_cast<uint8_t>(255), 1, opset);                      // [-5.1, 0]
+    test_case(.02f, static_cast<uint8_t>(255), 1, opset, true);                // [-5.1, 0] contrib qdq
+    test_case(.02f, static_cast<uint8_t>(128), 1, opset);                      // [-2.56, 2.54]
+    test_case(.02f, static_cast<uint8_t>(128), 1, opset, true);                // [-2.56, 2.54] contrib qdq
+    test_case(.04f, static_cast<uint8_t>(31), 1, opset);                       // [-1.24, 8.96]
+    test_case(.04f, static_cast<uint8_t>(31), 1, opset, true);                 // [-1.24, 8.96] contrib qdq
   }

   // opset_version = 10
-  test_case(.02f, static_cast<int8_t>(-128), 0, 10);  // [0, 5.1]
-  test_case(.03f, static_cast<int8_t>(-128), 1, 10);  // [0, 7.65]
-  test_case(.02f, static_cast<uint8_t>(0), 0, 10);    // [0, 5.1]
-  test_case(.03f, static_cast<uint8_t>(0), 1, 10);    // [0, 7.65]
+  test_case(.02f, static_cast<int8_t>(-128), 0, 10);        // [0, 5.1]
+  test_case(.02f, static_cast<int8_t>(-128), 0, 10, true);  // [0, 5.1] contrib qdq
+  test_case(.03f, static_cast<int8_t>(-128), 1, 10);        // [0, 7.65]
+  test_case(.03f, static_cast<int8_t>(-128), 1, 10, true);  // [0, 7.65] contrib qdq
+  test_case(.02f, static_cast<uint8_t>(0), 0, 10);          // [0, 5.1]
+  test_case(.02f, static_cast<uint8_t>(0), 0, 10, true);    // [0, 5.1] contrib qdq
+  test_case(.03f, static_cast<uint8_t>(0), 1, 10);          // [0, 7.65]
+  test_case(.03f, static_cast<uint8_t>(0), 1, 10, true);    // [0, 7.65] contrib qdq

   // difference between lower/upper and min/max are within epsilon
   for (auto opset : opsets) {
-    test_case(epsilon, static_cast<int8_t>(-127), 0, opset);              // [-epsilon, x] (x <= 6 + epsilon)
-    test_case((6 + epsilon) / 255, static_cast<int8_t>(-128), 0, opset);  // [0, 6 + epsilon]
-    test_case(epsilon, static_cast<uint8_t>(1), 0, opset);                // [-epsilon, x] (x <= 6 + epsilon)
-    test_case((6 + epsilon) / 255, static_cast<uint8_t>(0), 0, opset);    // [0, 6 + epsilon]
+    test_case(epsilon, static_cast<int8_t>(-127), 0, opset);                    // [-epsilon, x] (x <= 6 + epsilon)
+    test_case(epsilon, static_cast<int8_t>(-127), 0, opset, true);              // [-epsilon, x] (x <= 6 + epsilon)
+    test_case((6 + epsilon) / 255, static_cast<int8_t>(-128), 0, opset);        // [0, 6 + epsilon]
+    test_case((6 + epsilon) / 255, static_cast<int8_t>(-128), 0, opset, true);  // [0, 6 + epsilon]
+    test_case(epsilon, static_cast<uint8_t>(1), 0, opset);                      // [-epsilon, x] (x <= 6 + epsilon)
+    test_case(epsilon, static_cast<uint8_t>(1), 0, opset, true);                // [-epsilon, x] (x <= 6 + epsilon)
+    test_case((6 + epsilon) / 255, static_cast<uint8_t>(0), 0, opset);          // [0, 6 + epsilon]
+    test_case((6 + epsilon) / 255, static_cast<uint8_t>(0), 0, opset, true);    // [0, 6 + epsilon]
  }
 }

 TEST(QDQTransformerTests, Concat) {
   auto test_case = [&](const std::vector<std::vector<int64_t>>& input_shapes,
                        int64_t axis,
-                       bool
                        has_input_float = false,
-                       bool has_input_int8 = false,
-                       bool has_output_int8 = false) {
-    auto check_graph = [&input_shapes, &has_input_float, &has_input_int8, &has_output_int8](InferenceSessionWrapper& session) {
+                       bool has_input_float,
+                       bool has_input_int8,
+                       bool has_output_int8,
+                       bool use_contrib_qdq) {
+    auto check_graph = [&input_shapes, has_input_float, has_input_int8, has_output_int8,
+                        use_contrib_qdq](InferenceSessionWrapper& session) {
       auto op_to_count = CountOpsInGraph(session.GetGraph());
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       if (has_input_float || has_input_int8 || has_output_int8) {
         EXPECT_EQ(op_to_count["com.microsoft.QLinearConcat"], 0);
       } else {
-        EXPECT_EQ(op_to_count["QuantizeLinear"], static_cast<int>(input_shapes.size()));
+        EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], static_cast<int>(input_shapes.size()));
         EXPECT_EQ(op_to_count["com.microsoft.QLinearConcat"], 1);
-        EXPECT_EQ(op_to_count["DequantizeLinear"], 1);
+        EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1);
       }
     };

@@ -2222,7 +2437,8 @@ TEST(QDQTransformerTests, Concat) {
                       axis,
                       has_input_float,
                       has_input_int8,
-                      has_output_int8),
+                      has_output_int8,
+                      use_contrib_qdq),
                       check_graph,
                       TransformerLevel::Level1,
                       TransformerLevel::Level2,
@@ -2234,7 +2450,8 @@ TEST(QDQTransformerTests, Concat) {
                       axis,
                       has_input_float,
                       has_input_int8,
-                      has_output_int8),
+                      has_output_int8,
+                      use_contrib_qdq),
                       check_graph,
                       TransformerLevel::Level1,
                       TransformerLevel::Level2,
@@ -2246,7 +2463,8 @@ TEST(QDQTransformerTests, Concat) {
                       axis,
                       has_input_float,
                       has_input_int8,
-                      has_output_int8),
+                      has_output_int8,
+                      use_contrib_qdq),
                       check_graph,
                       TransformerLevel::Level1,
                       TransformerLevel::Level2,
@@ -2256,22 +2474,63 @@ TEST(QDQTransformerTests, Concat) {
                       std::make_unique<QDQSelectorActionTransformer>(QDQIsInt8Allowed()));
   };

-  test_case({{1, 6, 36}, {1, 3, 36}}, 1);
-  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2);
-  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2, true);
-  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2, false, true);
-  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2, false, false, true);
+  test_case({{1, 6, 36}, {1, 3, 36}}, 1,
+            false,   // has_input_float
+            false,   // has_input_int8
+            false,   // has_output_int8
+            false);  // use_contrib_qdq
+  test_case({{1, 6, 36}, {1, 3, 36}}, 1,
+            false,   // has_input_float
+            false,   // has_input_int8
+            false,   // has_output_int8
+            true);   // use_contrib_qdq
+  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2,
+            false,   // has_input_float
+            false,   // has_input_int8
+            false,   // has_output_int8
+            false);  // use_contrib_qdq
+  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2,
+            true,    // has_input_float
+            false,   // has_input_int8
+            false,   // has_output_int8
+            false);  // use_contrib_qdq
+  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2,
+            true,    // has_input_float
+            false,   // has_input_int8
+            false,   // has_output_int8
+            true);   // use_contrib_qdq
+  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2,
+            false,   // has_input_float
+            true,    // has_input_int8
+            false,   // has_output_int8
+            false);  // use_contrib_qdq
+  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2,
+            false,   // has_input_float
+            true,    // has_input_int8
+            false,   // has_output_int8
+            true);   // use_contrib_qdq
+  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2,
+            false,   // has_input_float
+            false,   // has_input_int8
+            true,    // has_output_int8
+            false);  // use_contrib_qdq
+  test_case({{1, 6, 36}, {1, 6, 8}, {1, 6, 2}}, 2,
+            false,   // has_input_float
+            false,   // has_input_int8
+            true,    // has_output_int8
+            true);   // use_contrib_qdq
 }

 template <typename InputType, typename OutputType>
 void QDQTransformerSoftmaxTests() {
-  auto test_case = [&](const std::vector<int64_t>& input_shape, int64_t axis) {
+  auto test_case = [&](const std::vector<int64_t>& input_shape, int64_t axis, bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<float>(input_shape, -5.f, 5.f);
       auto* output_arg = builder.MakeOutput();
       // add QDQ + Softmax
       auto* dq_output = AddQDQNodePair<InputType>(builder, input_arg, .105f,
-                                                  (std::numeric_limits<InputType>::max() / 255 * 255) / 2);
+                                                  (std::numeric_limits<InputType>::max() / 255 * 255) / 2,
+                                                  use_contrib_qdq);
       auto* softmax_output = builder.MakeIntermediate();
       auto& softmax_node = builder.AddNode("Softmax", {dq_output}, {softmax_output});
       softmax_node.AddAttribute("axis", axis);
@@ -2280,25 +2539,26 @@ void QDQTransformerSoftmaxTests() {
       builder.AddQuantizeLinearNode(softmax_output,
                                     1.0f / (std::numeric_limits<OutputType>::max() + 1),
                                     0,
-                                    q_output);
+                                    q_output, use_contrib_qdq);
       builder.AddDequantizeLinearNode(q_output,
                                       1.0f / (std::numeric_limits<OutputType>::max() + 1),
                                       0,
-                                      output_arg);
+                                      output_arg, use_contrib_qdq);
     };

     auto check_graph = [&](InferenceSessionWrapper& session) {
       auto op_to_count = CountOpsInGraph(session.GetGraph());
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       if constexpr (std::is_same<OutputType, uint8_t>::value) {
         EXPECT_EQ(op_to_count["com.microsoft.QLinearSoftmax"], 1);
         EXPECT_EQ(op_to_count["Softmax"], 0);
-        EXPECT_EQ(op_to_count["QuantizeLinear"], 1);
-        EXPECT_EQ(op_to_count["DequantizeLinear"], 1);
+        EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1);
+        EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 1);
       } else {
         EXPECT_EQ(op_to_count["com.microsoft.QLinearSoftmax"], 0);
         EXPECT_EQ(op_to_count["Softmax"], 1);
-        EXPECT_EQ(op_to_count["QuantizeLinear"], 2);
-        EXPECT_EQ(op_to_count["DequantizeLinear"], 2);
+        EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 2);
+        EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2);
       }
     };

@@ -2328,8 +2588,9 @@ void QDQTransformerSoftmaxTests() {
                       std::make_unique<QDQSelectorActionTransformer>(QDQIsInt8Allowed()));
   };

-  test_case({1, 12, 37}, -1);
-  test_case({1, 23, 13, 13}, -2);
+  test_case({1, 12, 37}, -1, false /*use_contrib_qdq*/);
+  test_case({1, 12, 37}, -1, true /*use_contrib_qdq*/);
+  test_case({1, 23, 13, 13}, -2, false /*use_contrib_qdq*/);
 }

 TEST(QDQTransformerTests, Softmax_S8S8) {
@@ -2347,7 +2608,8 @@ TEST(QDQTransformerTests, QDQPropagation_QBackward) {
                        size_t maxpool_dim,
                        const std::vector<int64_t>& perms,
                        bool add_op_boundary,
-                       bool include_zp) {
+                       bool include_zp,
+                       bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<float>(input_shape, -1.f, 1.f);
       auto* output_arg = builder.MakeOutput();
@@ -2380,28 +2642,29 @@ TEST(QDQTransformerTests, QDQPropagation_QBackward) {
       constexpr float qdq_scale = 0.004f;
       if (include_zp) {
         constexpr uint8_t qdq_zero_point = 129;
-        builder.AddQuantizeLinearNode(reshape_output, qdq_scale, qdq_zero_point, output_arg);
+        builder.AddQuantizeLinearNode(reshape_output, qdq_scale, qdq_zero_point, output_arg, use_contrib_qdq);
       } else {
-        builder.AddQuantizeLinearNode(reshape_output, qdq_scale, output_arg);
+        builder.AddQuantizeLinearNode(reshape_output, qdq_scale, output_arg, use_contrib_qdq);
       }
     };

     auto check_graph = [&](InferenceSessionWrapper& session) {
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       std::vector<std::string> expected_op_types_in_order{};
       if (add_op_boundary) {
         expected_op_types_in_order.push_back("Sign");
       }
       expected_op_types_in_order.insert(
           expected_op_types_in_order.end(),
-          {"QuantizeLinear", "DequantizeLinear",
+          {qdq_keys.quantize_linear, qdq_keys.dequantize_linear,
            "Transpose",
-           "QuantizeLinear", "DequantizeLinear",
+           qdq_keys.quantize_linear, qdq_keys.dequantize_linear,
            "MaxPool",
-           "QuantizeLinear", "DequantizeLinear",
+           qdq_keys.quantize_linear, qdq_keys.dequantize_linear,
            "Reshape",
-           "QuantizeLinear"});
+           qdq_keys.quantize_linear});

-      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph());
+      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph(), true);
       EXPECT_EQ(op_types_in_order, expected_op_types_in_order);
     };

@@ -2411,10 +2674,13 @@ TEST(QDQTransformerTests, QDQPropagation_QBackward) {
                       TransformerLevel::Level1);
   };

-  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, false);
-  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, true);
-  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, true, false);
-  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, true, true);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, false, false /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, true, false /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, true, false, false /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, true, true, false /*use_contrib_qdq*/);
+#if !defined(DISABLE_CONTRIB_OPS)
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, false, true /*use_contrib_qdq*/);
+#endif
 }

 TEST(QDQTransformerTests, QDQPropagation_DQForward) {
@@ -2422,7 +2688,8 @@ TEST(QDQTransformerTests, QDQPropagation_DQForward) {
                        size_t maxpool_dim,
                        const std::vector<int64_t>& perms,
                        bool add_op_boundary,
-                       bool include_zp) {
+                       bool include_zp,
+                       bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<uint8_t>(input_shape,
                                                    std::numeric_limits<uint8_t>::min(),
@@ -2434,9 +2701,9 @@ TEST(QDQTransformerTests, QDQPropagation_DQForward) {
       auto* dq_output = builder.MakeIntermediate();
       if (include_zp) {
         constexpr uint8_t qdq_zero_point = 129;
-        builder.AddDequantizeLinearNode(input_arg, qdq_scale, qdq_zero_point, dq_output);
+        builder.AddDequantizeLinearNode(input_arg, qdq_scale, qdq_zero_point, dq_output, use_contrib_qdq);
       } else {
-        builder.AddDequantizeLinearNode(input_arg, qdq_scale, dq_output);
+        builder.AddDequantizeLinearNode(input_arg, qdq_scale, dq_output, use_contrib_qdq);
       }

       // add Transpose
@@ -2464,19 +2731,20 @@ TEST(QDQTransformerTests, QDQPropagation_DQForward) {
     };

     auto check_graph = [&](InferenceSessionWrapper& session) {
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       std::vector<std::string> expected_op_types_in_order{
-          "DequantizeLinear",
+          qdq_keys.dequantize_linear,
           "Transpose",
-          "QuantizeLinear", "DequantizeLinear",
+          qdq_keys.quantize_linear, qdq_keys.dequantize_linear,
           "MaxPool",
-          "QuantizeLinear", "DequantizeLinear",
+          qdq_keys.quantize_linear, qdq_keys.dequantize_linear,
           "Reshape",
-          "QuantizeLinear", "DequantizeLinear"};
+          qdq_keys.quantize_linear, qdq_keys.dequantize_linear};
       if (add_op_boundary) {
         expected_op_types_in_order.push_back("Sign");
       }

-      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph());
+      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph(), true);
       EXPECT_EQ(op_types_in_order, expected_op_types_in_order);
     };

@@ -2495,20 +2763,27 @@ TEST(QDQTransformerTests, QDQPropagation_DQForward) {
     // TODO: fix opset 19
   };

-  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, false);
-  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, true);
-  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, true, false);
-  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, true, true);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, false, false /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, true, false /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, true, false, false /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, true, true, false /*use_contrib_qdq*/);
+#if !defined(DISABLE_CONTRIB_OPS)
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, false, true /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, false, true, true /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, true, false, true /*use_contrib_qdq*/);
+  test_case({1, 13, 13, 23}, 4, {0, 3, 1, 2}, true, true, true /*use_contrib_qdq*/);
+#endif
 }

 TEST(QDQTransformerTests, QDQPropagation_StopAtOtherQDQ) {
-  auto test_case = [&](const std::vector<int64_t>& input_shape, bool same_scale, bool same_zp) {
+  auto test_case = [&](const std::vector<int64_t>& input_shape, bool same_scale, bool same_zp,
+                       bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<float>(input_shape, -1.f, 1.f);
       auto* output_arg = builder.MakeOutput();

       // add QDQ
-      auto* qdq_output = AddQDQNodePair<uint8_t>(builder, input_arg, .004f, 129);
+      auto* qdq_output = AddQDQNodePair<uint8_t>(builder, input_arg, .004f, 129, use_contrib_qdq);

       // Reshape
       auto* reshape_output = builder.MakeIntermediate();
@@ -2516,15 +2791,18 @@ TEST(QDQTransformerTests, QDQPropagation_StopAtOtherQDQ) {
       builder.AddNode("Reshape", {qdq_output, reshape_shape}, {reshape_output});

       // add Q
-      builder.AddQuantizeLinearNode(reshape_output, same_scale ? .004f : .0039f, same_zp ? 129 : 128, output_arg);
+      builder.AddQuantizeLinearNode(reshape_output, same_scale ? .004f : .0039f, same_zp ? 129 : 128,
+                                    output_arg, use_contrib_qdq);
     };

     auto check_graph = [&](InferenceSessionWrapper& session) {
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       const std::vector<std::string> expected_op_types_in_order{
-          "QuantizeLinear", "DequantizeLinear",
+          qdq_keys.quantize_linear,
+          qdq_keys.dequantize_linear,
           "Reshape",
-          "QuantizeLinear"};
-      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph());
+          qdq_keys.quantize_linear};
+      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph(), true);
       EXPECT_EQ(op_types_in_order, expected_op_types_in_order);
     };

@@ -2534,14 +2812,23 @@ TEST(QDQTransformerTests, QDQPropagation_StopAtOtherQDQ) {
                       TransformerLevel::Level1);
   };

-  test_case({1, 13, 13, 23}, false, false);
-  test_case({1, 13, 13, 23}, false, true);
-  test_case({1, 13, 13, 23}, true, false);
-  test_case({1, 13, 13, 23}, true, true);
+  test_case({1, 13, 13, 23}, false, false, false);
+  test_case({1, 13, 13, 23}, false, true, false);
+  test_case({1, 13, 13, 23}, true, false, false);
+  test_case({1, 13, 13, 23}, true, true, false);
+
+#if !defined(DISABLE_CONTRIB_OPS)
+  // Use contrib QDQ ops
+  test_case({1, 13, 13, 23}, false, false, true);
+  test_case({1, 13, 13, 23}, false, true, true);
+  test_case({1, 13, 13, 23}, true, false, true);
+  test_case({1, 13, 13, 23}, true, true, true);
+#endif
 }

 TEST(QDQTransformerTests, QDQPropagation_Q_No_Parent) {
-  auto test_case = [&](const
                       std::vector<int64_t>& input_shape, const std::vector<int64_t>& perms,
+                       bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<float>(input_shape, -1.f, 1.f);
       auto* output_arg = builder.MakeOutput();
@@ -2552,15 +2839,17 @@ TEST(QDQTransformerTests, QDQPropagation_Q_No_Parent) {
       transpose_node.AddAttribute("perm", perms);

       // add Q
-      builder.AddQuantizeLinearNode(transpose_output, .0035f, 135, output_arg);
+      builder.AddQuantizeLinearNode(transpose_output, .0035f, 135, output_arg, use_contrib_qdq);
     };

     auto check_graph = [&](InferenceSessionWrapper& session) {
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       const std::vector<std::string> expected_op_types_in_order{
-          "QuantizeLinear", "DequantizeLinear",
+          qdq_keys.quantize_linear,
+          qdq_keys.dequantize_linear,
           "Transpose",
-          "QuantizeLinear"};
-      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph());
+          qdq_keys.quantize_linear};
+      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph(), true);
       EXPECT_EQ(op_types_in_order, expected_op_types_in_order);
     };

@@ -2570,11 +2859,15 @@ TEST(QDQTransformerTests, QDQPropagation_Q_No_Parent) {
                       TransformerLevel::Level1);
   };

-  test_case({1, 13, 13, 23}, {0, 2, 3, 1});
+  test_case({1, 13, 13, 23}, {0, 2, 3, 1}, false /*use_contrib_qdq*/);
+#if !defined(DISABLE_CONTRIB_OPS)
+  test_case({1, 13, 13, 23}, {0, 2, 3, 1}, true /*use_contrib_qdq*/);
+#endif
 }

 TEST(QDQTransformerTests, QDQPropagation_DQ_No_Children) {
-  auto test_case = [&](const std::vector<int64_t>& input_shape, const std::vector<int64_t>& perms) {
+  auto test_case = [&](const std::vector<int64_t>& input_shape, const std::vector<int64_t>& perms,
+                       bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<uint8_t>(input_shape,
                                                    std::numeric_limits<uint8_t>::min(),
@@ -2583,7 +2876,7 @@ TEST(QDQTransformerTests, QDQPropagation_DQ_No_Children) {
       // add DQ
       auto* dq_output = builder.MakeIntermediate();
-      builder.AddDequantizeLinearNode(input_arg, .0035f, 135, dq_output);
+      builder.AddDequantizeLinearNode(input_arg, .0035f, 135, dq_output, use_contrib_qdq);

       // add transpose
       Node& transpose_node = builder.AddNode("Transpose", {dq_output}, {output_arg});
@@ -2591,11 +2884,12 @@ TEST(QDQTransformerTests, QDQPropagation_DQ_No_Children) {
     };

     auto check_graph = [&](InferenceSessionWrapper& session) {
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       const std::vector<std::string> expected_op_types_in_order{
-          "DequantizeLinear",
+          qdq_keys.dequantize_linear,
           "Transpose",
-          "QuantizeLinear", "DequantizeLinear"};
-      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph());
+          qdq_keys.quantize_linear, qdq_keys.dequantize_linear};
+      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph(), true);
       EXPECT_EQ(op_types_in_order, expected_op_types_in_order);
     };

@@ -2605,11 +2899,15 @@ TEST(QDQTransformerTests, QDQPropagation_DQ_No_Children) {
                       TransformerLevel::Level1);
   };

-  test_case({1, 13, 13, 23}, {0, 2, 3, 1});
+  test_case({1, 13, 13, 23}, {0, 2, 3, 1}, false /*use_contrib_qdq*/);
+#if !defined(DISABLE_CONTRIB_OPS)
+  test_case({1, 13, 13, 23}, {0, 2, 3, 1}, true /*use_contrib_qdq*/);
+#endif
 }

 TEST(QDQTransformerTests, QDQPropagation_Per_Layer_No_Propagation) {
-  auto test_case = [&](const std::vector<int64_t>& input_shape, const std::vector<int64_t>& perms) {
+  auto test_case = [&](const std::vector<int64_t>& input_shape, const std::vector<int64_t>& perms,
+                       bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<uint8_t>(input_shape,
                                                    std::numeric_limits<uint8_t>::min(),
@@ -2620,7 +2918,8 @@ TEST(QDQTransformerTests, QDQPropagation_Per_Layer_No_Propagation) {
       auto* dq_output = builder.MakeIntermediate();
       auto* dq_scale = builder.Make1DInitializer(std::vector<float>(input_shape[1], 0.0035f));
       auto* dq_zp = builder.Make1DInitializer(std::vector<uint8_t>(input_shape[1], 135));
-      builder.AddNode("DequantizeLinear", {input_arg, dq_scale, dq_zp}, {dq_output});
+      builder.AddNode("DequantizeLinear", {input_arg, dq_scale, dq_zp}, {dq_output},
+                      use_contrib_qdq ? kMSDomain : "");

       // add transpose
       Node& transpose_node = builder.AddNode("Transpose", {dq_output}, {output_arg});
@@ -2631,7 +2930,8 @@ TEST(QDQTransformerTests, QDQPropagation_Per_Layer_No_Propagation) {
       // transpose optimization will change the order of the nodes,
      // but as we're testing there's no propagation of the DQ what matters is the op counts.
       auto op_counts = CountOpsInGraph(session.GetGraph());
-      EXPECT_EQ(op_counts["DequantizeLinear"], 1);
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
+      EXPECT_EQ(op_counts[qdq_keys.dequantize_linear], 1);
       EXPECT_EQ(op_counts["Transpose"], 1);
     };

@@ -2651,11 +2951,14 @@ TEST(QDQTransformerTests, QDQPropagation_Per_Layer_No_Propagation) {
                       19);  // disable TransposeOptimizer for simplicity
   };

-  test_case({1, 13, 13, 23}, {0, 2, 3, 1});
+  test_case({1, 13, 13, 23}, {0, 2, 3, 1}, false /*use_contrib_qdq*/);
+#if !defined(DISABLE_CONTRIB_OPS)
+  test_case({1, 13, 13, 23}, {0, 2, 3, 1}, true /*use_contrib_qdq*/);
+#endif
 }

 TEST(QDQTransformerTests, QDQPropagation_DQ_Q) {
-  auto test_case = [&](const std::vector<int64_t>& input_shape) {
+  auto test_case = [&](const std::vector<int64_t>& input_shape, bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<uint8_t>(input_shape,
                                                    std::numeric_limits<uint8_t>::min(),
@@ -2664,17 +2967,18 @@ TEST(QDQTransformerTests, QDQPropagation_DQ_Q) {
       // add DQ
       auto* dq_output = builder.MakeIntermediate();
-      builder.AddDequantizeLinearNode(input_arg, .0035f, 135, dq_output);
+      builder.AddDequantizeLinearNode(input_arg, .0035f, 135, dq_output, use_contrib_qdq);

       // add Q
-      builder.AddQuantizeLinearNode(dq_output, .0035f, 135, output_arg);
+      builder.AddQuantizeLinearNode(dq_output, .0035f, 135, output_arg, use_contrib_qdq);
     };

     auto check_graph = [&](InferenceSessionWrapper& session) {
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       const std::vector<std::string> expected_op_types_in_order{
-          "DequantizeLinear",
-          "QuantizeLinear"};
-      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph());
+          qdq_keys.dequantize_linear,
+          qdq_keys.quantize_linear};
+      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph(), true);
       EXPECT_EQ(op_types_in_order, expected_op_types_in_order);
     };

@@ -2684,7 +2988,10 @@ TEST(QDQTransformerTests, QDQPropagation_DQ_Q) {
                       TransformerLevel::Level1);
   };

-  test_case({1, 13, 13, 23});
+  test_case({1, 13, 13, 23}, false /*use_contrib_qdq*/);
+#if !defined(DISABLE_CONTRIB_OPS)
+  test_case({1, 13, 13, 23}, true /*use_contrib_qdq*/);
+#endif
 }

 TEST(QDQTransformerTests, QDQ_Selector_Test) {
@@ -2814,14 +3121,14 @@ TEST(QDQTransformerTests, QDQ_Selector_Test) {
 // regression test to validate TransposeOptimizer and QDQ Propagation don't loop
 // see https://github.com/microsoft/onnxruntime/issues/11605
 TEST(QDQTransformerTests, QDQPropagation_GH11605_Opset12_19) {
-  auto test_case = [&]() {
+  auto test_case = [&](bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<uint8_t>({1, 4, 4},
                                                    std::numeric_limits<uint8_t>::min(),
                                                    std::numeric_limits<uint8_t>::max());
       // add DQ
       auto* dq_output = builder.MakeIntermediate();
-      builder.AddDequantizeLinearNode(input_arg, 0.123f, uint8_t(0), dq_output);
+      builder.AddDequantizeLinearNode(input_arg, 0.123f, uint8_t(0), dq_output, use_contrib_qdq);

       // add Transpose 0, 2, 1
       const std::vector<int64_t>& perms{0, 2, 1};
@@ -2849,13 +3156,14 @@ TEST(QDQTransformerTests, QDQPropagation_GH11605_Opset12_19) {
     // QDQ cleanup in Level2 removes the unnecessary DQ/Q pair at the start: Tr -> DQ -> SoftM -> Tr
     // this is the optimal result as the Transpose is using 8-bit data and we have no surplus Q/DQ pairs
     auto check_graph = [&](InferenceSessionWrapper& session) {
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       std::vector<std::string> expected_op_types_in_order{
           "Transpose",
-          "DequantizeLinear",
+          qdq_keys.dequantize_linear,
           "Softmax",
           "Transpose"};
-      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph());
+      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph(), true);
       EXPECT_EQ(op_types_in_order, expected_op_types_in_order);
     };

@@ -2868,18 +3176,21 @@ TEST(QDQTransformerTests, QDQPropagation_GH11605_Opset12_19) {
     // TODO: fix opset 18, 19
   };

-  test_case();
+  test_case(false);
+#if !defined(DISABLE_CONTRIB_OPS)
+  test_case(true);  // Use contrib QDQ ops
+#endif
 }

 TEST(QDQTransformerTests, QDQPropagation_GH11605_Opset13) {
-  auto test_case = [&]() {
+  auto test_case = [&](bool use_contrib_qdq) {
     auto build_test_case = [&](ModelTestBuilder& builder) {
       auto* input_arg = builder.MakeInput<uint8_t>({1, 4, 4},
                                                    std::numeric_limits<uint8_t>::min(),
                                                    std::numeric_limits<uint8_t>::max());
       // add DQ
       auto* dq_output = builder.MakeIntermediate();
-      builder.AddDequantizeLinearNode(input_arg, 0.123f, uint8_t(0), dq_output);
+      builder.AddDequantizeLinearNode(input_arg, 0.123f, uint8_t(0), dq_output, use_contrib_qdq);

       // add Transpose 0, 2, 1
       const std::vector<int64_t>& perms{0, 2, 1};
@@ -2907,10 +3218,11 @@ TEST(QDQTransformerTests, QDQPropagation_GH11605_Opset13) {
     // QDQ cleanup in Level2 removes the unnecessary DQ/Q pair at the start: Tr -> DQ -> SoftM -> Tr
     // this is the optimal result as the Transpose is using 8-bit data and we have no surplus Q/DQ pairs
     auto check_graph = [&](InferenceSessionWrapper& session) {
+      const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq);
       std::vector<std::string> expected_op_types_in_order{
-          "DequantizeLinear",
+          qdq_keys.dequantize_linear,
           "Softmax"};
-      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph());
+      const auto op_types_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph(), true);
       EXPECT_EQ(op_types_in_order, expected_op_types_in_order);
     };

@@ -2926,14 +3238,18 @@ TEST(QDQTransformerTests, QDQPropagation_GH11605_Opset13) {
                       19);
   };

-  test_case();
+  test_case(false);
+#if !defined(DISABLE_CONTRIB_OPS)
+  test_case(true);  // Use contrib QDQ ops
+#endif
 }

 // test removal of Q->DQ pairs by QDQFinalCleanupTransformer
 TEST(QDQTransformerTests, QDQFinalCleanupTransformer_BasicQDQCleanup) {
   auto test_case = [&](const std::vector<std::vector<int64_t>>& input_shapes,
-                       bool block_removal_of_last_dq = false,
-                       bool block_removal_of_first_dq = false) {
+                       bool block_removal_of_last_dq,
+                       bool block_removal_of_first_dq,
+                       bool use_contrib_qdq = false) {
     // create model with float input to multiple -> Q -> DQ -> Concat -> Q -> DQ -> output
     // If we enable cleanup and don't run the QDQ transformer we should drop all the Q->DQ pairs
     auto build_test_case = [&](ModelTestBuilder& builder) {
@@ -2942,7 +3258,7 @@ TEST(QDQTransformerTests, QDQFinalCleanupTransformer_BasicQDQCleanup) {
       std::vector<NodeArg*> q_input_args;
       for (size_t i = 0; i < input_count; i++) {
         input_args.push_back(builder.MakeInput<float>(input_shapes[i], -1.f, 1.f));
-        q_input_args.push_back(AddQDQNodePair<uint8_t>(builder, input_args.back(), 0.05f, 128));
+        q_input_args.push_back(AddQDQNodePair<uint8_t>(builder, input_args.back(), 0.05f, 128, use_contrib_qdq));

         if (i == 0 && block_removal_of_first_dq) {
           // add another edge to the DQ node
@@ -2955,10 +3271,11 @@ TEST(QDQTransformerTests, QDQFinalCleanupTransformer_BasicQDQCleanup) {
       concat_node.AddAttribute("axis", int64_t(1));

       auto* q_concat_output = builder.MakeIntermediate();
-      builder.AddQuantizeLinearNode(concat_output, 0.05f, 128, q_concat_output);
+      builder.AddQuantizeLinearNode(concat_output, 0.05f, 128, q_concat_output, use_contrib_qdq);

       auto* output_arg = builder.MakeOutput();
-      Node& dq_node = builder.AddDequantizeLinearNode(q_concat_output, 0.05f, 128, output_arg);
+      Node& dq_node = builder.AddDequantizeLinearNode(q_concat_output, 0.05f, 128, output_arg,
+                                                      use_contrib_qdq);

       if (block_removal_of_last_dq) {
         // add another edge to the DQ node
@@ -2973,10 +3290,12 @@ TEST(QDQTransformerTests, QDQFinalCleanupTransformer_BasicQDQCleanup) {
      // so
we expect twice as many DQ's as original QDQ pairs const int expected_dq_count = expected_qdq_count * 2; - auto check_graph = [expected_qdq_count, expected_dq_count](InferenceSessionWrapper& session) { + auto check_graph = [expected_qdq_count, expected_dq_count, + use_contrib_qdq](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); - EXPECT_EQ(op_to_count["QuantizeLinear"], expected_qdq_count); - EXPECT_EQ(op_to_count["DequantizeLinear"], expected_dq_count); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], expected_qdq_count); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], expected_dq_count); EXPECT_EQ(op_to_count["Concat"], 1); }; @@ -3015,31 +3334,40 @@ TEST(QDQTransformerTests, QDQFinalCleanupTransformer_BasicQDQCleanup) { add_session_options); }; - test_case({{1, 2, 4}, {1, 3, 4}}); - test_case({{1, 2, 4}, {1, 3, 4}}, true); // block removal of first dq - test_case({{1, 2, 4}, {1, 3, 4}}, false, true); // block removal of last dq - test_case({{1, 2, 4}, {1, 3, 4}}, true, true); // block removal of first and last dq + test_case({{1, 2, 4}, {1, 3, 4}}, false, false); // Do not block removal + test_case({{1, 2, 4}, {1, 3, 4}}, true, false); // Block removal of first dq + test_case({{1, 2, 4}, {1, 3, 4}}, false, true); // Block removal of last dq + test_case({{1, 2, 4}, {1, 3, 4}}, true, true); // Block removal of first and last dq + +#if !defined(DISABLE_CONTRIB_OPS) + // Use contrib QDQ ops + test_case({{1, 2, 4}, {1, 3, 4}}, false, false, true); // Do not block removal + test_case({{1, 2, 4}, {1, 3, 4}}, true, false, true); // Block removal of first dq + test_case({{1, 2, 4}, {1, 3, 4}}, false, true, true); // Block removal of last dq + test_case({{1, 2, 4}, {1, 3, 4}}, true, true, true); // Block removal of first and last dq +#endif } TEST(QDQTransformerTests, QDQFinalCleanupTransformer_BasicDQQCleanUp) { - auto test_case = [](bool 
use_matching_qdq_params) { + auto test_case = [](bool use_matching_qdq_params, bool use_contrib_qdq) { // input -> Q -> DQ -> Q -> DQ -> output auto build_test_case = [&](ModelTestBuilder& builder) { constexpr float scale_1 = 0.05f; constexpr uint8_t zp_1 = 128; auto* const input = builder.MakeInput({1, 2, 4}, -1.0f, 1.0f); - auto* const dq_1_out = AddQDQNodePair(builder, input, scale_1, zp_1); + auto* const dq_1_out = AddQDQNodePair(builder, input, scale_1, zp_1, use_contrib_qdq); const float scale_2 = use_matching_qdq_params ? scale_1 : scale_1 + 0.01f; const uint8_t zp_2 = use_matching_qdq_params ? zp_1 : zp_1 + 1; - AddQDQNodePairWithOutputAsGraphOutput(builder, dq_1_out, scale_2, zp_2); + AddQDQNodePairWithOutputAsGraphOutput(builder, dq_1_out, scale_2, zp_2, use_contrib_qdq); }; auto check_graph = [&](const InferenceSessionWrapper& session) { - const auto ops_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph()); + const auto ops_in_order = GetNodeOpTypesInTopologicalOrder(session.GetGraph(), true); const auto expected_ops_in_order = [&]() -> std::vector { + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); // In either case both DQ and Q will be removed and fused due to DoubleQDQPairsRemover - return {"QuantizeLinear", "DequantizeLinear"}; + return {qdq_keys.quantize_linear, qdq_keys.dequantize_linear}; }(); EXPECT_EQ(ops_in_order, expected_ops_in_order); @@ -3073,13 +3401,19 @@ TEST(QDQTransformerTests, QDQFinalCleanupTransformer_BasicDQQCleanUp) { std::make_unique(false /*enable_q_dq_cleanup*/)); }; - test_case(true); - test_case(false); + test_case(true, false); // Matching QDQ params + test_case(false, false); // Non-matching QDQ params + +#if !defined(DISABLE_CONTRIB_OPS) + // Use contrib QDQ ops + test_case(true, true); // Matching QDQ params + test_case(false, true); // Non-matching QDQ params +#endif } // test removal when we have graph input -> Q/DQ pair -> graph output TEST(QDQTransformerTests, 
QDQFinalCleanupTransformer_GraphInputToOutput) { - auto test_case = [](bool is_q_dq) { + auto test_case = [](bool is_q_dq, bool use_contrib_qdq) { // create model with input -> Q/DQ pair -> output auto build_test_case = [&](ModelTestBuilder& builder) { constexpr float scale = 0.05f; @@ -3091,21 +3425,24 @@ TEST(QDQTransformerTests, QDQFinalCleanupTransformer_GraphInputToOutput) { NodeArg* first_node_output = builder.MakeIntermediate(); - is_q_dq ? builder.AddQuantizeLinearNode(input, scale, zp, first_node_output) - : builder.AddDequantizeLinearNode(input, scale, zp, first_node_output); + is_q_dq ? builder.AddQuantizeLinearNode(input, scale, zp, first_node_output, use_contrib_qdq) + : builder.AddDequantizeLinearNode(input, scale, zp, first_node_output, use_contrib_qdq); auto* second_node_output = builder.MakeOutput(); - is_q_dq ? builder.AddDequantizeLinearNode(first_node_output, scale, zp, second_node_output) - : builder.AddQuantizeLinearNode(first_node_output, scale, zp, second_node_output); + is_q_dq ? 
builder.AddDequantizeLinearNode(first_node_output, scale, zp, second_node_output, + use_contrib_qdq) + : builder.AddQuantizeLinearNode(first_node_output, scale, zp, second_node_output, + use_contrib_qdq); }; // with the Q/DQ pair being dropped we should have inserted an Identity node // to connect the graph input to the graph output - auto check_graph = [](InferenceSessionWrapper& session) { + auto check_graph = [use_contrib_qdq](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); - EXPECT_EQ(op_to_count["QuantizeLinear"], 0); - EXPECT_EQ(op_to_count["DequantizeLinear"], 0); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 0); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 0); EXPECT_EQ(op_to_count["Identity"], 1); }; @@ -3153,13 +3490,20 @@ TEST(QDQTransformerTests, QDQFinalCleanupTransformer_GraphInputToOutput) { add_session_options); }; - test_case(true); - test_case(false); + test_case(true, false); // input -> Q -> DQ -> output + test_case(false, false); // input -> DQ -> Q -> output + +#if !defined(DISABLE_CONTRIB_OPS) + // Use contrib QDQ ops + test_case(true, true); // input -> Q -> DQ -> output + test_case(false, true); // input -> DQ -> Q -> output +#endif } #if !defined(DISABLE_CONTRIB_OPS) TEST(QDQTransformerTests, QDQSoftmaxWithDQProducingGraphOutput) { - auto test_case = [&](const std::vector& input_shape, int64_t axis) { + auto test_case = [&](const std::vector& input_shape, int64_t axis, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -5.f, 5.f); auto* dq_output_arg = builder.MakeOutput(); @@ -3169,11 +3513,13 @@ TEST(QDQTransformerTests, QDQSoftmaxWithDQProducingGraphOutput) { builder.AddQuantizeLinearNode(input_arg, .105f, 127, - input_q_output); + input_q_output, + use_contrib_qdq); builder.AddDequantizeLinearNode(input_q_output, .105f, 127, - dq_output_arg); 
+ dq_output_arg, + use_contrib_qdq); // add Softmax auto* softmax_output = builder.MakeIntermediate(); @@ -3185,21 +3531,24 @@ TEST(QDQTransformerTests, QDQSoftmaxWithDQProducingGraphOutput) { builder.AddQuantizeLinearNode(softmax_output, 1.0f / (std::numeric_limits::max() + 1), 0, - q_output); + q_output, + use_contrib_qdq); builder.AddDequantizeLinearNode(q_output, 1.0f / (std::numeric_limits::max() + 1), 0, - output_arg); + output_arg, + use_contrib_qdq); }; auto check_graph = [&](InferenceSessionWrapper& session) { auto op_to_count = CountOpsInGraph(session.GetGraph()); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); // expect fusion because DQ duplication ensures that the node unit has unique DQ nodes EXPECT_EQ(op_to_count["com.microsoft.QLinearSoftmax"], 1); EXPECT_EQ(op_to_count["Softmax"], 0); - EXPECT_EQ(op_to_count["QuantizeLinear"], 1); - EXPECT_EQ(op_to_count["DequantizeLinear"], 2); // duplicate of first DQ and original second DQ + EXPECT_EQ(op_to_count[qdq_keys.quantize_linear], 1); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], 2); // duplicate of first DQ and original second DQ }; TransformerTester(build_test_case, @@ -3227,12 +3576,14 @@ TEST(QDQTransformerTests, QDQSoftmaxWithDQProducingGraphOutput) { 0.01 /*relative_per_sample_tolerance*/); }; - test_case({1, 12, 37}, -1); + test_case({1, 12, 37}, -1, false); + test_case({1, 12, 37}, -1, true); // Use contrib QDQ ops } // DQ produces graph output - special case for DropDQ path where there is only a DQ -> Node with no trailing Q TEST(QDQTransformerTests, DropDQSelectorWithDQProducingGraphOutput) { - auto test_case = [&](const std::vector& input_shape, int64_t axis, bool dq_produces_graph_output) { + auto test_case = [&](const std::vector& input_shape, int64_t axis, bool dq_produces_graph_output, + bool use_contrib_qdq) { auto build_test_case = [&](ModelTestBuilder& builder) { auto* input_arg = builder.MakeInput(input_shape, -5.f, 5.f); auto* output_arg = builder.MakeOutput(); @@ 
-3241,8 +3592,8 @@ TEST(QDQTransformerTests, DropDQSelectorWithDQProducingGraphOutput) { auto* input_q_output = builder.MakeIntermediate(); auto* dq_output_arg = dq_produces_graph_output ? builder.MakeOutput() : builder.MakeIntermediate(); - builder.AddQuantizeLinearNode(input_arg, .105f, 127, input_q_output); - builder.AddDequantizeLinearNode(input_q_output, .105f, 127, dq_output_arg); + builder.AddQuantizeLinearNode(input_arg, .105f, 127, input_q_output, use_contrib_qdq); + builder.AddDequantizeLinearNode(input_q_output, .105f, 127, dq_output_arg, use_contrib_qdq); // add ArgMax auto* argmax_output = builder.MakeIntermediate(); @@ -3257,11 +3608,12 @@ TEST(QDQTransformerTests, DropDQSelectorWithDQProducingGraphOutput) { const Graph& graph = session.GetGraph(); auto op_to_count = CountOpsInGraph(graph); + const QDQOpKeys qdq_keys = GetQDQOpKeys(use_contrib_qdq); const auto expected_dq_count = dq_produces_graph_output ? 1 // EnsureUniqueDQForNodeUnit duplicates one DQ and DropDQ drops one DQ : 0; // DropDQ drops one DQ - EXPECT_EQ(op_to_count["DequantizeLinear"], expected_dq_count); + EXPECT_EQ(op_to_count[qdq_keys.dequantize_linear], expected_dq_count); const auto& nodes = graph.Nodes(); const auto argmax_node_it = std::find_if(nodes.cbegin(), @@ -3294,8 +3646,14 @@ TEST(QDQTransformerTests, DropDQSelectorWithDQProducingGraphOutput) { }; // test with and without the DQ producing a graph output to validate the test hits DropDQ - test_case({1, 4, 8}, -1, false); - test_case({1, 4, 8}, -1, true); + + // DQ does not produce graph output + test_case({1, 4, 8}, -1, false, false); + test_case({1, 4, 8}, -1, false, true); // Use contrib QDQ ops + + // DQ produces graph output + test_case({1, 4, 8}, -1, true, false); + test_case({1, 4, 8}, -1, true, true); // Use contrib QDQ ops } #endif // !defined(DISABLE_CONTRIB_OPS) diff --git a/onnxruntime/test/optimizer/transpose_optimizer_test.cc b/onnxruntime/test/optimizer/transpose_optimizer_test.cc index 
143a1eb8bec59..e5aa36fc379f4 100644 --- a/onnxruntime/test/optimizer/transpose_optimizer_test.cc +++ b/onnxruntime/test/optimizer/transpose_optimizer_test.cc @@ -3502,118 +3502,150 @@ TEST(TransposeOptimizerTests, TestWhere) { } TEST(TransposeOptimizerTests, TestQuantizeLinearScalar) { - auto build_test_case_1 = [&](ModelTestBuilder& builder) { - auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0.0, 1.0); - auto* input1_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {2.3f}); - auto* input2_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {10}); - auto* transpose_1_out_0 = builder.MakeIntermediate(); - auto* quantizelinear_1_out_0 = builder.MakeIntermediate(); - auto* transpose_2_out_0 = builder.MakeOutput(); + auto test_case = [&](const std::string& q_domain = "") { + auto build_test_case_1 = [&](ModelTestBuilder& builder) { + auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0.0, 1.0); + auto* input1_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {2.3f}); + auto* input2_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {10}); + auto* transpose_1_out_0 = builder.MakeIntermediate(); + auto* quantizelinear_1_out_0 = builder.MakeIntermediate(); + auto* transpose_2_out_0 = builder.MakeOutput(); + + auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); + transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); + builder.AddNode("QuantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, {quantizelinear_1_out_0}, + q_domain); + auto& transpose_2 = builder.AddNode("Transpose", {quantizelinear_1_out_0}, {transpose_2_out_0}); + transpose_2.AddAttribute("perm", std::vector{0, 2, 3, 1}); + }; - auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); - transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); - builder.AddNode("QuantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, {quantizelinear_1_out_0}); - auto& transpose_2 = 
builder.AddNode("Transpose", {quantizelinear_1_out_0}, {transpose_2_out_0}); - transpose_2.AddAttribute("perm", std::vector{0, 2, 3, 1}); - }; + auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) { + int transpose_cost = EstimateTransposeCost(session.GetGraph()); + EXPECT_EQ(transpose_cost, 0); + }; - auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) { - int transpose_cost = EstimateTransposeCost(session.GetGraph()); - EXPECT_EQ(transpose_cost, 0); + TransformerTester(build_test_case_1, + check_optimized_graph_1, + TransformerLevel::Default, + TransformerLevel::Level1, + /*opset_version*/ {15, 18}); }; - TransformerTester(build_test_case_1, - check_optimized_graph_1, - TransformerLevel::Default, - TransformerLevel::Level1, - /*opset_version*/ {15, 18}); + test_case(); +#if !defined(DISABLE_CONTRIB_OPS) + test_case(kMSDomain); // Use com.microsoft.QuantizeLinear +#endif } TEST(TransposeOptimizerTests, TestQuantizeLinearScalarIgnoreAxis) { - auto build_test_case_1 = [&](ModelTestBuilder& builder) { - auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0.0, 1.0); - auto* input1_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {2.3f}); - auto* input2_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {10}); - auto* transpose_1_out_0 = builder.MakeIntermediate(); - auto* quantizelinear_1_out_0 = builder.MakeIntermediate(); - auto* transpose_2_out_0 = builder.MakeOutput(); + auto test_case = [&](const std::string& q_domain = "") { + auto build_test_case_1 = [&](ModelTestBuilder& builder) { + auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0.0, 1.0); + auto* input1_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {2.3f}); + auto* input2_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {10}); + auto* transpose_1_out_0 = builder.MakeIntermediate(); + auto* quantizelinear_1_out_0 = builder.MakeIntermediate(); + auto* transpose_2_out_0 = builder.MakeOutput(); + + auto& 
transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); + transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); + auto& quantizelinear_1 = builder.AddNode("QuantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, + {quantizelinear_1_out_0}, q_domain); + quantizelinear_1.AddAttribute("axis", (int64_t)10); + auto& transpose_2 = builder.AddNode("Transpose", {quantizelinear_1_out_0}, {transpose_2_out_0}); + transpose_2.AddAttribute("perm", std::vector{0, 2, 3, 1}); + }; - auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); - transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); - auto& quantizelinear_1 = builder.AddNode("QuantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, {quantizelinear_1_out_0}); - quantizelinear_1.AddAttribute("axis", (int64_t)10); - auto& transpose_2 = builder.AddNode("Transpose", {quantizelinear_1_out_0}, {transpose_2_out_0}); - transpose_2.AddAttribute("perm", std::vector{0, 2, 3, 1}); - }; + auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) { + int transpose_cost = EstimateTransposeCost(session.GetGraph()); + EXPECT_EQ(transpose_cost, 0); + }; - auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) { - int transpose_cost = EstimateTransposeCost(session.GetGraph()); - EXPECT_EQ(transpose_cost, 0); + TransformerTester(build_test_case_1, + check_optimized_graph_1, + TransformerLevel::Default, + TransformerLevel::Level1, + /*opset_version*/ {15, 18}); }; - TransformerTester(build_test_case_1, - check_optimized_graph_1, - TransformerLevel::Default, - TransformerLevel::Level1, - /*opset_version*/ {15, 18}); + test_case(); +#if !defined(DISABLE_CONTRIB_OPS) + test_case(kMSDomain); // Use com.microsoft.QuantizeLinear +#endif } TEST(TransposeOptimizerTests, TestQuantizeLinearVector) { - auto build_test_case_1 = [&](ModelTestBuilder& builder) { - auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0.0, 1.0); - auto* 
input1_arg = MakeInput(builder, {{-1}}, {2}, {2.3f, 2.4f}); - auto* input2_arg = MakeInput(builder, {{-1}}, {2}, {10, 12}); - auto* transpose_1_out_0 = builder.MakeIntermediate(); - auto* quantizelinear_1_out_0 = builder.MakeIntermediate(); - auto* transpose_2_out_0 = builder.MakeOutput(); + auto test_case = [&](const std::string& q_domain = "") { + auto build_test_case_1 = [&](ModelTestBuilder& builder) { + auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0.0, 1.0); + auto* input1_arg = MakeInput(builder, {{-1}}, {2}, {2.3f, 2.4f}); + auto* input2_arg = MakeInput(builder, {{-1}}, {2}, {10, 12}); + auto* transpose_1_out_0 = builder.MakeIntermediate(); + auto* quantizelinear_1_out_0 = builder.MakeIntermediate(); + auto* transpose_2_out_0 = builder.MakeOutput(); + + auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); + transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); + auto& quantizelinear_1 = builder.AddNode("QuantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, + {quantizelinear_1_out_0}, q_domain); + quantizelinear_1.AddAttribute("axis", (int64_t)0); + auto& transpose_2 = builder.AddNode("Transpose", {quantizelinear_1_out_0}, {transpose_2_out_0}); + transpose_2.AddAttribute("perm", std::vector{0, 2, 3, 1}); + }; - auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); - transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); - auto& quantizelinear_1 = builder.AddNode("QuantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, {quantizelinear_1_out_0}); - quantizelinear_1.AddAttribute("axis", (int64_t)0); - auto& transpose_2 = builder.AddNode("Transpose", {quantizelinear_1_out_0}, {transpose_2_out_0}); - transpose_2.AddAttribute("perm", std::vector{0, 2, 3, 1}); - }; + auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) { + int transpose_cost = EstimateTransposeCost(session.GetGraph()); + EXPECT_EQ(transpose_cost, 0); + }; - auto 
check_optimized_graph_1 = [&](InferenceSessionWrapper& session) { - int transpose_cost = EstimateTransposeCost(session.GetGraph()); - EXPECT_EQ(transpose_cost, 0); + TransformerTester(build_test_case_1, + check_optimized_graph_1, + TransformerLevel::Default, + TransformerLevel::Level1, + /*opset_version*/ {15, 18}); }; - TransformerTester(build_test_case_1, - check_optimized_graph_1, - TransformerLevel::Default, - TransformerLevel::Level1, - /*opset_version*/ {15, 18}); + test_case(); +#if !defined(DISABLE_CONTRIB_OPS) + test_case(kMSDomain); // Use com.microsoft.QuantizeLinear +#endif } TEST(TransposeOptimizerTests, TestQuantizeLinearVectorUnknownRank) { - auto build_test_case_1 = [&](ModelTestBuilder& builder) { - auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0.0, 1.0); - auto* input1_arg = MakeInput(builder, std::nullopt, {3}, {2.3f, 2.4f, 2.5f}); - auto* input2_arg = MakeInput(builder, std::nullopt, {3}, {10, 12, 13}); - auto* transpose_1_out_0 = builder.MakeIntermediate(); - auto* quantizelinear_1_out_0 = builder.MakeIntermediate(); - auto* transpose_2_out_0 = builder.MakeOutput(); + auto test_case = [&](const std::string& q_domain = "") { + auto build_test_case_1 = [&](ModelTestBuilder& builder) { + auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0.0, 1.0); + auto* input1_arg = MakeInput(builder, std::nullopt, {3}, {2.3f, 2.4f, 2.5f}); + auto* input2_arg = MakeInput(builder, std::nullopt, {3}, {10, 12, 13}); + auto* transpose_1_out_0 = builder.MakeIntermediate(); + auto* quantizelinear_1_out_0 = builder.MakeIntermediate(); + auto* transpose_2_out_0 = builder.MakeOutput(); + + auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); + transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); + auto& quantizelinear_1 = builder.AddNode("QuantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, + {quantizelinear_1_out_0}, q_domain); + quantizelinear_1.AddAttribute("axis", (int64_t)1); + 
auto& transpose_2 = builder.AddNode("Transpose", {quantizelinear_1_out_0}, {transpose_2_out_0}); + transpose_2.AddAttribute("perm", std::vector{0, 2, 3, 1}); + }; - auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); - transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); - auto& quantizelinear_1 = builder.AddNode("QuantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, {quantizelinear_1_out_0}); - quantizelinear_1.AddAttribute("axis", (int64_t)1); - auto& transpose_2 = builder.AddNode("Transpose", {quantizelinear_1_out_0}, {transpose_2_out_0}); - transpose_2.AddAttribute("perm", std::vector{0, 2, 3, 1}); - }; + auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) { + int transpose_cost = EstimateTransposeCost(session.GetGraph()); + EXPECT_EQ(transpose_cost, 0); + }; - auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) { - int transpose_cost = EstimateTransposeCost(session.GetGraph()); - EXPECT_EQ(transpose_cost, 0); + TransformerTester(build_test_case_1, + check_optimized_graph_1, + TransformerLevel::Default, + TransformerLevel::Level1, + /*opset_version*/ {15, 18}); }; - TransformerTester(build_test_case_1, - check_optimized_graph_1, - TransformerLevel::Default, - TransformerLevel::Level1, - /*opset_version*/ {15, 18}); + test_case(); +#if !defined(DISABLE_CONTRIB_OPS) + test_case(kMSDomain); // Use com.microsoft.QuantizeLinear +#endif } TEST(TransposeOptimizerTests, TestQuantizeLinearScalarOpset10) { @@ -3645,61 +3677,77 @@ TEST(TransposeOptimizerTests, TestQuantizeLinearScalarOpset10) { } TEST(TransposeOptimizerTests, TestDequantizeLinearScalarIgnoreAxis) { - auto build_test_case_1 = [&](ModelTestBuilder& builder) { - auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0, 5); - auto* input1_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {2.3f}); - auto* input2_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {10}); - auto* transpose_1_out_0 = 
builder.MakeIntermediate(); - auto* dequantizelinear_1_out_0 = builder.MakeIntermediate(); - auto* transpose_2_out_0 = builder.MakeOutput(); + auto test_case = [&](const std::string& dq_domain = "") { + auto build_test_case_1 = [&](ModelTestBuilder& builder) { + auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0, 5); + auto* input1_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {2.3f}); + auto* input2_arg = MakeInput(builder, {std::vector{}}, std::vector{}, {10}); + auto* transpose_1_out_0 = builder.MakeIntermediate(); + auto* dequantizelinear_1_out_0 = builder.MakeIntermediate(); + auto* transpose_2_out_0 = builder.MakeOutput(); + + auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); + transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); + auto& dequantizelinear_1 = builder.AddNode("DequantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, + {dequantizelinear_1_out_0}, dq_domain); + dequantizelinear_1.AddAttribute("axis", (int64_t)10); + auto& transpose_2 = builder.AddNode("Transpose", {dequantizelinear_1_out_0}, {transpose_2_out_0}); + transpose_2.AddAttribute("perm", std::vector{0, 2, 3, 1}); + }; - auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); - transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); - auto& dequantizelinear_1 = builder.AddNode("DequantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, {dequantizelinear_1_out_0}); - dequantizelinear_1.AddAttribute("axis", (int64_t)10); - auto& transpose_2 = builder.AddNode("Transpose", {dequantizelinear_1_out_0}, {transpose_2_out_0}); - transpose_2.AddAttribute("perm", std::vector{0, 2, 3, 1}); - }; + auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) { + int transpose_cost = EstimateTransposeCost(session.GetGraph()); + EXPECT_EQ(transpose_cost, 0); + }; - auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) { - int transpose_cost = 
EstimateTransposeCost(session.GetGraph()); - EXPECT_EQ(transpose_cost, 0); + TransformerTester(build_test_case_1, + check_optimized_graph_1, + TransformerLevel::Default, + TransformerLevel::Level1, + /*opset_version*/ {15, 18}); }; - TransformerTester(build_test_case_1, - check_optimized_graph_1, - TransformerLevel::Default, - TransformerLevel::Level1, - /*opset_version*/ {15, 18}); + test_case(); +#if !defined(DISABLE_CONTRIB_OPS) + test_case(kMSDomain); // Use com.microsoft.DequantizeLinear +#endif } TEST(TransposeOptimizerTests, TestDequantizeLinearVector) { - auto build_test_case_1 = [&](ModelTestBuilder& builder) { - auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0, 5); - auto* input1_arg = MakeInput(builder, {{2}}, {2}, {2.3f, 2.4f}); - auto* input2_arg = MakeInput(builder, {{2}}, {2}, {10, 12}); - auto* transpose_1_out_0 = builder.MakeIntermediate(); - auto* dequantizelinear_1_out_0 = builder.MakeIntermediate(); - auto* transpose_2_out_0 = builder.MakeOutput(); + auto test_case = [&](const std::string& dq_domain = "") { + auto build_test_case_1 = [&](ModelTestBuilder& builder) { + auto* input0_arg = MakeInput(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0, 5); + auto* input1_arg = MakeInput(builder, {{2}}, {2}, {2.3f, 2.4f}); + auto* input2_arg = MakeInput(builder, {{2}}, {2}, {10, 12}); + auto* transpose_1_out_0 = builder.MakeIntermediate(); + auto* dequantizelinear_1_out_0 = builder.MakeIntermediate(); + auto* transpose_2_out_0 = builder.MakeOutput(); + + auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0}); + transpose_1.AddAttribute("perm", std::vector{0, 3, 1, 2}); + auto& dequantizelinear_1 = builder.AddNode("DequantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, + {dequantizelinear_1_out_0}, dq_domain); + dequantizelinear_1.AddAttribute("axis", (int64_t)-4); + auto& transpose_2 = builder.AddNode("Transpose", {dequantizelinear_1_out_0}, {transpose_2_out_0}); + transpose_2.AddAttribute("perm", 
                                std::vector<int64_t>{0, 2, 3, 1});
+    };
-    auto& transpose_1 = builder.AddNode("Transpose", {input0_arg}, {transpose_1_out_0});
-    transpose_1.AddAttribute("perm", std::vector<int64_t>{0, 3, 1, 2});
-    auto& dequantizelinear_1 = builder.AddNode("DequantizeLinear", {transpose_1_out_0, input1_arg, input2_arg}, {dequantizelinear_1_out_0});
-    dequantizelinear_1.AddAttribute("axis", (int64_t)-4);
-    auto& transpose_2 = builder.AddNode("Transpose", {dequantizelinear_1_out_0}, {transpose_2_out_0});
-    transpose_2.AddAttribute("perm", std::vector<int64_t>{0, 2, 3, 1});
-  };
+    auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) {
+      int transpose_cost = EstimateTransposeCost(session.GetGraph());
+      EXPECT_EQ(transpose_cost, 0);
+    };
-  auto check_optimized_graph_1 = [&](InferenceSessionWrapper& session) {
-    int transpose_cost = EstimateTransposeCost(session.GetGraph());
-    EXPECT_EQ(transpose_cost, 0);
+    TransformerTester(build_test_case_1,
+                      check_optimized_graph_1,
+                      TransformerLevel::Default,
+                      TransformerLevel::Level1,
+                      /*opset_version*/ {15, 18});
   };
-  TransformerTester(build_test_case_1,
-                    check_optimized_graph_1,
-                    TransformerLevel::Default,
-                    TransformerLevel::Level1,
-                    /*opset_version*/ {15, 18});
+  test_case();
+#if !defined(DISABLE_CONTRIB_OPS)
+  test_case(kMSDomain);  // Use com.microsoft.DequantizeLinear
+#endif
 }
 
 TEST(TransposeOptimizerTests, TestDequantizeLinearNoAxis) {
@@ -3731,47 +3779,56 @@ TEST(TransposeOptimizerTests, TestDequantizeLinearNoAxis) {
 }
 
 TEST(TransposeOptimizerTests, TestDequantizeLinearTransposePropagation) {
-  auto build_test_case_1 = [&](ModelTestBuilder& builder) {
-    auto* input0_arg = MakeInput<uint8_t>(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0, 5);
-    auto* input1_arg = MakeInput<float>(builder, {std::vector<int64_t>{}}, std::vector<int64_t>{}, {2.3f});
-    auto* input2_arg = MakeInput<uint8_t>(builder, {std::vector<int64_t>{}}, std::vector<int64_t>{}, {10});
-    auto* dequantizelinear_1_out_0 = builder.MakeIntermediate();
-    auto* transpose_1_out_0 = builder.MakeOutput();
-    auto* transpose_2_out_0 = builder.MakeOutput();
-
-    builder.AddNode("DequantizeLinear", {input0_arg, input1_arg, input2_arg}, {dequantizelinear_1_out_0});
-
-    auto& transpose_1 = builder.AddNode("Transpose", {dequantizelinear_1_out_0}, {transpose_1_out_0});
-    transpose_1.AddAttribute("perm", std::vector<int64_t>{0, 3, 1, 2});
-
-    auto& transpose_2 = builder.AddNode("Transpose", {dequantizelinear_1_out_0}, {transpose_2_out_0});
-    transpose_2.AddAttribute("perm", std::vector<int64_t>{0, 2, 3, 1});
-  };
-
-  auto check_graph = [&](InferenceSessionWrapper& session) {
-    const auto& graph = session.GetGraph();
-
-    const auto op_count = CountOpsInGraph(graph);
-    decltype(op_count) expected_op_count{
-        {"DequantizeLinear", 2},  // EnsureUniqueDQForNodeUnit should duplicate the original DQ
-        {"Transpose", 2},
+  auto test_case = [&](const std::string& dq_domain = "") {
+    auto build_test_case_1 = [&](ModelTestBuilder& builder) {
+      auto* input0_arg = MakeInput<uint8_t>(builder, {{2, -1, 6, 3}}, {2, 4, 6, 3}, 0, 5);
+      auto* input1_arg = MakeInput<float>(builder, {std::vector<int64_t>{}}, std::vector<int64_t>{}, {2.3f});
+      auto* input2_arg = MakeInput<uint8_t>(builder, {std::vector<int64_t>{}}, std::vector<int64_t>{}, {10});
+      auto* dequantizelinear_1_out_0 = builder.MakeIntermediate();
+      auto* transpose_1_out_0 = builder.MakeOutput();
+      auto* transpose_2_out_0 = builder.MakeOutput();
+
+      builder.AddNode("DequantizeLinear", {input0_arg, input1_arg, input2_arg}, {dequantizelinear_1_out_0},
+                      dq_domain);
+
+      auto& transpose_1 = builder.AddNode("Transpose", {dequantizelinear_1_out_0}, {transpose_1_out_0});
+      transpose_1.AddAttribute("perm", std::vector<int64_t>{0, 3, 1, 2});
+
+      auto& transpose_2 = builder.AddNode("Transpose", {dequantizelinear_1_out_0}, {transpose_2_out_0});
+      transpose_2.AddAttribute("perm", std::vector<int64_t>{0, 2, 3, 1});
     };
-    ASSERT_EQ(op_count, expected_op_count);
-    // Transposes should be pushed, so check for Transpose -> DQ edges
-    for (const auto& node : graph.Nodes()) {
-      if (node.OpType() == "Transpose") {
-        ASSERT_EQ(node.GetOutputEdgesCount(), static_cast<size_t>(1));
-        ASSERT_EQ(node.OutputEdgesBegin()->GetNode().OpType(), "DequantizeLinear");
+    auto check_graph = [&](InferenceSessionWrapper& session) {
+      const auto& graph = session.GetGraph();
+
+      const char* dq_count_key = (dq_domain == kMSDomain) ? "com.microsoft.DequantizeLinear" : "DequantizeLinear";
+      const auto op_count = CountOpsInGraph(graph);
+      decltype(op_count) expected_op_count{
+          {dq_count_key, 2},  // EnsureUniqueDQForNodeUnit should duplicate the original DQ
+          {"Transpose", 2},
+      };
+      ASSERT_EQ(op_count, expected_op_count);
+
+      // Transposes should be pushed, so check for Transpose -> DQ edges
+      for (const auto& node : graph.Nodes()) {
+        if (node.OpType() == "Transpose") {
+          ASSERT_EQ(node.GetOutputEdgesCount(), static_cast<size_t>(1));
+          ASSERT_EQ(node.OutputEdgesBegin()->GetNode().OpType(), "DequantizeLinear");
+        }
       }
-    }
+    };
+
+    TransformerTester(build_test_case_1,
+                      check_graph,
+                      TransformerLevel::Default,
+                      TransformerLevel::Level1,
+                      /*opset_version*/ 10);
   };
-  TransformerTester(build_test_case_1,
-                    check_graph,
-                    TransformerLevel::Default,
-                    TransformerLevel::Level1,
-                    /*opset_version*/ 10);
+  test_case();
+#if !defined(DISABLE_CONTRIB_OPS)
+  test_case(kMSDomain);  // Use com.microsoft.DequantizeLinear
+#endif
 }
 
 TEST(TransposeOptimizerTests, TestCast) {
diff --git a/onnxruntime/test/testdata/transform/convert_qdq_ops_to_ms_domain.py b/onnxruntime/test/testdata/transform/convert_qdq_ops_to_ms_domain.py
new file mode 100644
index 0000000000000..3df127f5d356d
--- /dev/null
+++ b/onnxruntime/test/testdata/transform/convert_qdq_ops_to_ms_domain.py
@@ -0,0 +1,74 @@
+"""
+Loads a model and updates the domain of QuantizeLinear and DequantizeLinear nodes to 'com.microsoft'.
+This is used to create models for testing QDQ transformations with the contrib QDQ ops.
+
+Usage: python3 convert_qdq_ops_to_ms_domain.py <model.onnx>
+
+Models created with this script:
+- qdq_with_multi_consumer_dq_nodes.fixed.qdq_contrib.onnx
+- fusion/constant_folding_dequantizelinear.qdq_contrib.onnx
+- fusion/constant_folding_qdq_node_unit.qdq_contrib.onnx
+- fusion/constant_folding_qdq_node_unit.graph_output.qdq_contrib.onnx
+"""
+import os
+import sys
+
+import onnx
+
+QDQ_OPS = ("QuantizeLinear", "DequantizeLinear")
+
+
+def print_usage(prog_name: str):
+    """
+    Prints the program's command-line arguments and usage.
+    """
+
+    print(f"Usage: {prog_name} <model.onnx>")
+
+
+def update_qdq_node_domains(graph):
+    """
+    Updates the domain of all QuantizeLinear and DequantizeLinear nodes
+    in a graph to 'com.microsoft'.
+    """
+
+    for node in graph.node:
+        # Handle subgraphs:
+        for attr in node.attribute:
+            if attr.type == onnx.AttributeProto.GRAPH:
+                update_qdq_node_domains(attr.g)
+            elif attr.type == onnx.AttributeProto.GRAPHS:
+                for subgraph in attr.graphs:
+                    update_qdq_node_domains(subgraph)
+
+        # Update Q/DQ domains
+        if node.op_type in QDQ_OPS:
+            node.domain = "com.microsoft"
+
+
+def main():
+    prog_name, *argv = sys.argv
+
+    if len(argv) != 1:
+        print_usage(prog_name)
+        sys.exit(1)
+
+    model = onnx.load(argv[0])
+
+    has_ms_domain = False
+    for opset in model.opset_import:
+        if opset.domain == "com.microsoft":
+            has_ms_domain = True
+            break
+
+    if not has_ms_domain:
+        model.opset_import.extend([onnx.helper.make_opsetid("com.microsoft", 1)])
+
+    update_qdq_node_domains(model.graph)
+    onnx.checker.check_model(model, True)
+    base_model_name = os.path.splitext(argv[0])[0]
+    onnx.save_model(model, base_model_name + ".qdq_contrib.onnx")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/onnxruntime/test/testdata/transform/fusion/constant_folding_dequantizelinear.qdq_contrib.onnx b/onnxruntime/test/testdata/transform/fusion/constant_folding_dequantizelinear.qdq_contrib.onnx
new file mode 100644
index
0000000000000000000000000000000000000000..a111597ad0f414b799a28ffe88cd8c4ce5da0d47
GIT binary patch (literal 2786) [binary ONNX model data omitted]

diff --git a/onnxruntime/test/testdata/transform/fusion/constant_folding_qdq_node_unit.graph_output.qdq_contrib.onnx b/onnxruntime/test/testdata/transform/fusion/constant_folding_qdq_node_unit.graph_output.qdq_contrib.onnx
new file mode 100644
index 0000000000000000000000000000000000000000..c30ffcc8d9f249072c57d595cfdacf56b240f6d4
GIT binary patch (literal 1994) [binary ONNX model data omitted]

[binary diffs for the remaining generated test models, fusion/constant_folding_qdq_node_unit.qdq_contrib.onnx and qdq_with_multi_consumer_dq_nodes.fixed.qdq_contrib.onnx, omitted]
zzJIyVY)kDtOF9j%4Pmhy1@lGuR{vu8FxlFK4SB~d(-=BIK(-y3tYavJ;)u=^R95FM zSi${vIm?j|!>W1CfAnE^&h_ZxhCDA5cTPStwx|fRedS1}N^9Rp;)X7-Mv>1$9kgVl zjvfEx#S2SH;$V1PSgy5edBlp8D|)>Os%&%>LQ}VKZ?&>%A|z_Ns=2gU(9u(7v$nyn z7i&V-4zdj*bd54GYIBxMYh7i;Ofq)L$cBaMdfRwY4lZxINeSsoqMRMcM$@?T%9@lI zURJfKDxORBNh7}av@g?c9olDd$clM4u_^ojY{)7h8`0{Lk2HibuFmIf@XpIorJN|A z8`Z2&ik-F%WC-O%uK6WZh}!VX?yA0Rq##omr&Wi%_=}sCVt6oO9~ws>7tPu0ngG|5 zLOF9XkYDP$DYR6#tqw`m+F?-6Tfa4?`f*wonv8Z^CFB8f>Fq1S3pf-hEof_P?z_vo zKA!@QK;N)GRWN2~j3PX2B1GxE*}1KIIoM+$ga$c3fR`P7ry(Wdi&i-x(iNQ z)A>=h4{fcu9p^TJSZdg$qjy86N@E)dS_A+YXJx6p=yz8c5kCmM6vPj$bSv2IA&{a8 z;+#buli6u!tapSlYPJ#UQ;-QZ`}U+G>BWT51Q zlC3|C{_VKhj`=_Y32Z~xi^}=up<;9DKD`2z`5@;*#V!_kaO?%+cU~QS8g^wZiO_>5 z3IJWw;!Hp#Vs=~AT60(U#F@Li&vGIM>}os``J+%W)J-eaVsDBWn2!QY?HKX zGp)-C^cM5w4Tp=f9iy>Nxp+5>W4qz(btJMg3$kMwxDFB~sEOA&6Vszfv)njKA3DxOtPmbKH)(}JerfjUrIx}&f zmMfv^I~>&WOcS~LYg_Jgla^r-t^(MrJ=hTU+A0-8ZmUVesyVvAGD%Gwd5_mr! zfi5Hh3WKTog|xC2TAo5b?s|Q1-A%`VE2v{h%baG;cW;7hPnD_my=#;zf&)2N3rwz@ z(4}BGBoGE{z7_h4D!?l0J~mm~8EfaE4@0*hfc-g(>f}OjakVQMqcpd(;VB!S98yE* zpEnbrUNw&Tx?-Dh?iH|{3{g~ry*IX(vu%q`$-J6ZZu>ZC+X=Qg=#;X$^zutK2?mak zOkFeVNc}?#;0$Bd_{>p%0jrv}%$);;BaaQzs)+CD}BlPX^ zu#r3A#$LDk7~6s*4w73nNJwaoGXb-1fPI0a*p_bKnmZ?TLwR8NrVVL>`d50&9k0Dw zM!zMx_XOWV<+%*EgunEDM+j&{PGA8m4*jr(YUS72hAeI8ck~XD2kVLgDcC@z5w}Gz z{5XxxJ}){^I1zUaL47(OPo~-FKpp_kkZ(ew7d@!5cEdhUz9==+uWWw_4)*&CJHOJ+ z5;nGbk&d$=IDiIhuCu3K$ntr?A$l}^N=_DN2>?(PmHgSee;34BH^FeqG>PZ9?_Zw! 
zTOLf+D$bvFpY+2Yy(b#fV%zp?>|+5;{#z$kZbQxhL|ak5vzvc7%X-w`fZH}|G~%f; z$+<0-WT&d{hbqZzzI0+w{ZZv*cTJT`e9x6lJC<(a_}Ee#Kb~uLy4~q=?Byd-EQ0H| zUsY5j<+Gyt*=}E#6c<;DZK+0o@5i8uj;i^FJZk%g)8vnS_A$Cm1Zz`QO?fAv4!lzv zAixG{qZPAmyx^VE!zg7_cX$58fr3HEh@D1C@&pkAXaQ^+O+ymqS~xU31xXIx@1}9v zn{DD@FH*8C_<@jMwDnwZC{#FnMEL@=^@7~FQ0X>0S6Yg8O6_EwcUcCge7aZcAaJ~pRU<-6-U&B%a^Mbnx|v6oi?FAOxY%* zfO54)7;s>pd*XA^vuv%dL`P}aGp-G+SylSFoV+!%Ev}04WvUl5mX>gD0T_@Q{8=iD z;P@I(@M)>No$^MM1+l{i3{(xlvAZa3u@|MZs& z83%5rpKRW-x*f4%eKS;<*bo*d6{ z6HD@Rw~X6ea9tkxgm$w{J~3l|0A3dtum(MKqDcFmr_rnKHK~9AIV`Wi8;&!r|Lr(^ z1Mn1n1#vfPC;DS~+q~*>0}*4cN4-03|KfD^Y6EXMQKq`^fOr*NR_*`b@-pD$B6~L&d76C`HosX-U4|lE9Fo1U0%oRe_Hi*rdI8kVd4C zG`C`?J4}H~lQ>QAo+NZxuKRX9@a5K&swjC*q`a){BH7h(fg;!vD#Vs-+hYF$P3wyK zyes?VBwN6ZEC;QM^`SeqhofICp_!r_^dC@FStG7Du<~`_MXGyh)aB~@ktK%&>=i9nWsGTh3Xe~qA+j+Ir zx+$Qyj=G&{Y8Mb2gxy22@@qBsQe>`c!XvPhGxSAY z0H{M?y}P_ii5zO<$O@e7tb=10A|>|%RKU-s-EkD%#1V!5$(chl5`x%2H5s@D z%mT7JIFm*B$m2M(YmXiPBKm_KRx5d<1knRdAq1+0`c0Ylb#GkHeg?0yVzsmsvxAXW z&|Z)oB<>bq0}Z3#1cU~bW=7@G8zLjzu(2EmIBJdq&yZ3HLL1(^quTFmx|Y>Ese3&v z?T#gDsShQqol81ZV}1%3(E~Wu4H!1qGmXAq-x~Sf0R450&E*L+dBj zoI2A$P^9akSd*EBV#RD}8W{90Kl`bHXs~V5hMHXB9z?P0LJyD|A!BW1CxP)!wzgh3 zis(-ro+@aFngl}q2p1t?ShiH~l!E>Kx+E5m_Z+2lEy*mQCTj+<8){alQ`KF*&XP10 zUC@vtkmyF%&rUpOh%E(o&jGVj55jQ(xZUoP`S%$c*A8g)C53`aj7%HKYde(ee${#? 
zED{{FDmi*jD5!yBeMZJ6w!7i}$_mL6&0Qvh4h?Uk&L9dXc#F1&d>XEk>3=+=gx}26 zrYtGnmGA2mZdeFeo+lt_SGfkL=D__^vu;&hX6r7m?xj@jwmJPlU%ha?T*4>!LP9Ha z!)4tTeY5xd)|0{|e2EX>K*s*$RD8+}C@gOkyEANf{1N$ag3)-;Q!P(MO{*k3&+pMU%7&1!}nTonx^rfzP_ zH(Dk4iUEI7yoqqV5&ENy_jhLF;7F+_nZ(93)eyr!{mAk5VR?b3a8Cb>gq%_ znSefw0lXP-l-DG-mW0da%z2sM6C85T2;ze-^rP=RPscaFK_EOEsosD#(Tn^zT0c-s z8tVQbj3E{j`VdhP!k?K1sEuD%wo+j)bq<}UR0)l#Be4rYf~EXOWQqb_+QyVmrO--K z03{9dl%|k+o)3@-huMg6?(_)h=O7h4^sz>C&)+q7v-pYCsFY4vso4 zRFx_Zv6L(e2$Z9caP!DIsoCBRK==B&T&{1rI^F3O;64GfpgUX0%wll2ZJTI=mbY0{ zvIHUvGz}!*r2FZ~_h$`rx3=olL1*=BVGr6K0s)FLbV^%}@~S zS}7;Cjp&MPPiZ$6hDQR{SQ`D!mk1#~9S6a=9?msklU9UgoS929s)>&UC|RD)t0%9D z%^EU6g2y<&pgP7gkg2$ZDl8cr#HvaVzQJleGKdDqFnYMPQ|~3cIJxuQwXD* ziDnAeFd`_*E0tWoq4v>j*}39zkaK%0o7*0lc^T)5=sIu_8=awLH{GzWm$H%_8U=j0 z$@Boi2?CD8`Md!{_*w)!i9I5N9`b7Ev!D{$cQ%Y%7d5MoY=rr|J^uI8Z&&3JR9N`~ zCuVW&eG7pYMYe|fqD~e*GmE4yu=Q!l$A;oqH52@+okEoiHNVQuu}LY`duBV@Vj4BO zgyxXe5XrhdoZ#piq$>zMLL(xBLfJ(_(&z#x9TaKEd>+u&gVHJeVb9o0gjt}pL{Q=~ z;;nJyM&uU_jEq!-Z=rU8jU>i`V1ZB!x}5bm@lQyXnh25W?)kbSx3T!3$eC~CnTmLN z6c`~@267=1xt^~fVGI=vjTu~#6~D3X(S0bP8rSGD3L;a3`oUj#ZEV*xLY0qED1=tu z-#$%l;!Y2TQ#keYxYhsgt@|y6kr=-$2{Z`V`IjW>quuhp={_pjP{2oUAZnBvb3zws zuQOj)8eoNV|Cmto7)r)>!gM8yZ0{FVS%ZIjAPFU}_fLl@4Cp4Cv;q|XEY;o0xh)4_ zI0H&jOUku99tW6axIDIYm%`PaBWq1T2t?}}H%Rj~46~eom7hJLUcjwKTm`&R!~EaQ z6Iq9FvLP%+I&}B-FV@NwP!#HRcS5D@EKmmmENBYVC?~*Tl@vXY*sV?)Bs~j%^KC^C zK$WK;;1EoZd*AP+7-0D8z{Vaat^G@wWKd2&S6h$Dh63ljib_cQ2z)JKN4p^gP`ul4 z6`>!{+;2I`R^~tt$io>NM_g}rNM(dEJ`c3xh}1Q3Q6I)XZ|=Ja?y{_FGqd7b-UjSa zl-DXH;m#PpbnSLqznoAM3EcR3dT}loQ+zTuR+T z86$Yx{M=J%aLTJ`24*D?NAmgbDfINfHfqO$)LSCAEz&hm*%Z-Sl<7C-1NKo&zILV5dI0$bcwvMgQMH7}&2h1R5Z=HB3nNQtN`TVw_+PE%JDveGK#xXH zC);m+R`i2@_-d$j-<%q340O0eO@M<4`q3dRL>8c5iMdT!h`jsJew+a2M{MpxEkLSyHZMGW2WH{7sAc zfkYA+K@1i3&Q0B920IOahp79I^~vLKE(M4OD}V}sBpWEXBBWWe(L+u-$hIK-t8O;=$;2}pKbDnV;ED$gipD85)NkQcFnDj0ENT-bu#U1Nkj zVP-%mVs8MJh@uF3`)#0X1nz!B(+$_#>FeRMTE4Rx8@~jeb+`V>cTfD) za-90(7SqiL@yp&qF~LC6t|dI291@u=D7@LJJbnIVST=WUeR^`d0y@G({&EnL7_Ww(>b$Zd%e#e 
z3TN$dznH3?b$NcXhn2xYremihF-mlB(n^=M zs&lUq?BKkV-rp_3O74|J8{VC*b5ej9h=?Sybz_8KoNYl@t4HhaoFUb!4 zTD=_L!rIl9aPxBeXM_KfLNp*BxRj+bsS??nZ^FaaewYzIcv`kB*t?K?cM0?~Y_1z! z?($8J3yr9GK)dQphj7@1P2wBmgR5+;hHae}B{4tzCTB~F+qNqIw1X6dL$8m~FNTmM zO$J8htmMazy=wa5@N$%EJ8g#3FLyvlvKAH)Ke*k8ME%591y+U7e6k)t62;3I6r?T8 z{A1egqXje6%(CcYl0Pi*e*>Hhpzns@au4*42mlNV1rq?crP<=lUbX79grMsI?T^Evco0 zi=4jkLjlN>_J`1YiH_vO5~z)+{kD6!`cVJ^{-n`wSj|Rb*@7P;aQ{>l_k>?JY=d4y z1ZnlGUcVKb8Rj(I7t}^VuVwdPM?Qe|N1|7C`T8FT#{|C^{5yD5A}zSpHSSS%ztF4S zq*MWHXSK8LC|`cw?*7Rw`7&f69$&r?;tqTWwJCsNF~w+A9W;OgThC@SNK!#r0To&Z zrT$RkOPLTqfEQF&?K-pdD%I@OzNqLnE?HB`n3x9JqyFwWD{pYh?f;mFUPBM+5t!5Z ztoL8w5Gv4L@$v->R9U*7Q=EnWzpKRRYAGtpB9BrE*R)4d@OBHI54vy5`OtjvRT*m< zUBBWA$wdO%wE;FGffa~^dN+i}r0m#!C#@bJntT{1oO_%GEMCRGyRt<-l)%ZUDpY;w zrv%N*fO`BIXl;NH@*44t3c24p^9qten1sSv;u-=0RC*}Y)+Rq`h6e~k=#m7J0$&bU z;#b0{dWLc992B)3*L|-Oh>Q>2GF3QQvb7swSGSaB`@1d4MR4I=9+4iFmXW94(4*29 z&m;=!?M;K?2uno%d08hs$ps~Va6maQYJ^VNAQn(q7A+EsmRQcJyouuBHY{BiGIe!P z;Cg_d$9Dzmin`)Z1lho6EDmcmz>*h;T3&V!!GOAG?@bH0O>xDoqfmBQg<~N3_>s7Oz!n%+Girk^9ssoaq2)M z;mvW7M${$8W)SmOJvRE@hkiyAIKEt#KQ;5kAory>BDFxq zCLfL3Yzi|r6W@tR2v`7Q#~a_&x2;5;st;w;i!cagdsu&TGitWE==q0e{tRT7cIJK4 zyiOL|ouk^t$uXCj^gcnGw_ACh@&IiBj&+4`_-UFI0F%%c0SQBqDP`Im=PkwrpcqWW z?5n(xNhQ-1w&=z0<^*#x=vuG^32x0_?Zp;OWhGX)ZpZR=SkfDk|A1!(#fc!?_C=9A z15nmayU()6E8DX@qc-DU(YEPl*i>0500oJUy`(P_?z3pCQ5+zfm(PCm#n1leXV1QP zezZe$3Jy*n_|o1?Lf)SAL7yuGKctwV zlAD{NUNyk;)mx|^(y!cNcEjFER|1^|urk>oSoL zV>j?QT!qxZ(d&r$4GoF9Bv$D}n6sVW?chIgQN9q+pyM*%-%WTd2IyhDPV|9fdVP%f zStnw%r@U(HwfN-i06el?ciIFXftBL!Q4-4V=q{W)b_H(nM}-i`mm$RStyol-KUZi1 z$JzI&`-mgNO<)96!?k{Kl!783IJYHvgwQwhjolHwFaiNEjo#ufp@y$=I>`3|4%nAt z*im;u8{(+N0_K5ySfOh;o5D|WzSj$y33jnaH$x^Lp*E45Tr!X&1zVHDgGDe**r;}t z9~~zC?UxYE+!l&DmyJN*_iVcX4zmSfB3y;4ayJE@&km4Rq33WP{s4Qxb87XB2NgjI zb0_}?B~**yRZW&1(%_$BmVzhK{cDsLr~^D<7E(6MkWa)G8zymRiYVYB_o0KJ%kS*f zZ@n}i7XmSW$+^kk{h-t&INKuj5qb&z0>@S@yK&KP^=a4CqUxyEw}>mIo@okzM}vir zW(3klH#7qw1Thq?l8Oo^GfU?PdfC{a!BA~O;&JMsbM!icX2AJpGjz6=KfEcIDvyj1 
z90kZ$Qq==JaSBC;LP_3Xi&gkZW@rmoj1l9hqmj*p~j-`Aw_bIDW*{O;?=_KLg3Rx(1Fz*Mr<&Bq(89}gFLFT1U$leS3WQIC+ctPw zsQLNz%PYU`4r=N}3DXjJPnc|;7*sFzd|dI^`X&xYe7-zxKoSqT-OV0A9q5Sy2;i#Z z@Ibms_*FIq%o74Fc3zBIM0P?787?j@IlUuXklI)-j$CXc8sPL0bkTab3LJ*FqAg}N zTerLn)$duqeyBXoW^TPXSh1!7Sg%JBma0k9D8Y4zy#o*(pB^$vKDg0tOj_*0-9I9b zs9iF(@BqV=`ah{UBUOm}RUz5OoZ@$}k%niW>0oe~ zkLVg3(+)DO&s%mZh12ON9uUbI7JDqR<{le>ExQQ3so*{uB~`<-qgI@Hv-b6RvjMuC ziFCc*m3#o6oFiGjSbzc;W6JXCmYzU@xIegX;+>9>B8nzx3f*=;)Hq(lPy7^PorcSV zL)5dYACPXX+HB?Opgy*hVC~-`QTMrhiy(}K+O#LsQ>sBF=O%?igQAYM288oUpZGCW z3<=~Psj>0PERQ>eMo1X1zj>D{IKb7nZ=}vt396Q?b&ZLYcYd_nWiw7cJ)9_x@jfZe z*hhvzO$0R4rS+@zI?ke_1<4k-6`a7`o#^)v%n+G4zc}DQCZvvj?)&YZ+H<~jIsGkmS3u=BZg36?TrP38E}eU0Ip>rZHWK{vp!6E)F}5;RhZlxa}^5aZo2)BNEYUqj;| zFF>N23H2XteW>}ICYHyWreazkLo&`BRbSX)o+nC}hW@kdfW%Jp!_5HlqZduhyY}I| zAc}#$fS4oB<`szEr%Qv$XqnoRTW-i0R~Rd)o&zKI5kKys{j+JVa*eY0#p67w%jKGYX!i|FBG2~*k zV-e{A<=F6~u&lx~5zpiVTTFl;r;wbXUligE8Vc4Q5?b8tGT=&{7G5qgKJ2<5m`w5g zGS4x2wUZx7n~M4iYoiKO9GqkDbvLH*%ZLv+YCM_-yO#YNMx! zyfObDT{s`nX)t?gVbV(6ea@1&cG~t7a}!lOdH}pQJV_5(#vM^#1AXw`;H^ndDfD41 z8QEGr%n(L?_8k^U*lRR9oa1u%su%T|_9sFNwOu#$($LWlC9v@Vo_e#3&Zw)74)B!Qy(X>_CA^Hv}*f$r=EzI$Ab5Uf(>v6rK_Z zF75+mK) zl=&$dR|fWN2|yYTN@Y#wzEg`jM{bn{spoSqU3-tl*5)|-b%~sB~);#9sf!8 z)j!(8+*E7)umF67XLc+qY9kJ2`cux*s*wHk>;(>G&MI9Q#vfUE_9+nu=o2VLZL_j^ zQ-Mwh7;}|Ac-p-2v;8l-@RXd_4c|Re9%G-|e7!5(j35*77%Cx}#;Qja5W%Aa`}C;lq=t2$fYxg&vZ?(^ zJ5`Je)8ahoEzt++n$Kim+7IPC6TKH~W$4+p!%zdv_JA;1t>}d8!fx*Wc3YxUF1QRz zBzd}hu!vEpMO+FCG1{H7IpiIOys<^<;nOh@%Ab@5BjN;|LG_2(jK_@pk5d)1o=~0H zhRIyOlhTC~@&}F?FdACbfF8Fl6e}_sN+WJv@>qL^MP;+1!y}iEE9IW07TO7WW0k4J zn5CTE$4(hR*8?R4aP8I7C)Xne@50VdIO*R0`0m_4;kk)KJRDki+qoxq!#1ONd1sd6 z09P*qz93k)w%v(lA!=dCd>kpc;YFiJRbu<1+?YfW9#c5`Tpo)Y21&X%nSQ<3$Fbw-kivWVi7@ne zOR}EiHrAE6Q~1NNRYcrhjBiOEtX=edWA`rp;OOq2=tGP51)dg_E?!AT(NLlofuq%t!%?8Q#OI9^SkH;;5*(BBDDf(K*2 zs5hTnHy7;{yX~>4EhIu53BbXhAM=OtO2}eE#GO#_75535c8BSey|AsxZwFfrY@iY? 
zV=@Oo5;)9I1o7o&A6q6>D|zGBH^Y@q(UOOip+kiCHZhbM8vtH>G8b`sPq##eMu=R1 zv5F3%TJAHk|2eKBLRM1&nmc!iA$p{m!okWj6j4^kB|xsDA&~TWZj!;2h0}W>_BCK> zi^H3O3335?v@si*)$fP$*t6s3p@_?g(Ua4JR(V(6G3()DUpCN?{u>fTy+II02_=7# zje!-Y%O>s@+TM=7-BRA_d=Lx~ihbu|OD=#?zMS@#DWOsJ6&WN}IC8-#hRI0g`MkM^ zD7fp=al{LQXiuhO4-f}XL`aF8Fi<;&O@s%UuPE=LVF=&r0Gl&o$Ppr+m{S@UEJ57E zi%X?JIs=8{l!efG1-ZyIwhmuO0jO)*??5}WyR{WQbXe7-Dc0cPbpu!T1aB4Ugr>}- zC!g#&eIO9&y=s=9VrYPuAJsLYRS9K2{Z>|3fEFtjl;_S3JMoD=LShoAu29<)!Jb(f zO|-QnBCz z{BcWHE3-36^RXk7SUcm?So|xzL;xrV5R8*rHnK4-q%b{Su=er@$wlhLcy(pr zht{jSc|pSx5;Lk6PN-HD%`iio4pG+lw`?*xxPeW2|8_Y%?)>LF`km<7Z0=;s@0xP%>pAwSZ{tcf3XRn@tQaGvbW?34+=6Q13kK`NjfLVb7_ zXc%Jthxu|L(I6C*Z>WPOTsgpaJ_7l0{*6CEzTaGvH^lx=% zMt|>pX_A`iHJDMBY(Mh#1f!sD=!yCE#FBlft%;*!B z78V9BBza3NWp~)&#k#0GOl?~&2qR(|!bQD>doD08qP3x)QMnhfms#W~eO3&%gmI+9 zpXQ>o73?wEC*fuO5KHhoO1G|i(DxiZDDzccBS0JZ*wUU$UsZL&EWtAkn%WZdOcV6y z_uRMj0`EaX!Vd0tF-(9{)A0 z3uT+I$mw|47ARyZ8Jd8H0j3?5`PjrutCpc z^oOmx@PzVaV4qXR^Y|hcg&oG2yH`&98LrPBTDoIkiw|^=a1B%!SP0|zfQb%m;$H;h zbQ}OO;Mn0IhzUHs7HRJ#;t~)r)Eh~b+2SN4g&_7YV1eL{Jj`ZT5j`>2qk;5rI-4&V z`>v!_XF33U0mRrIRre0Egvy{1%rv0zpgn_(%VdkK89qcZGD=oxA#e}JMOuFA z$ff{T@z((OwxU%zyAs$IO<4~~AtnZ(6wm^>Yp(R}Z+#u&JelKIQdK~fFPINfFe=~+ z$(J>2#q+2Xa@*DCv~j?1T2V-P+T@7Yrw!|IG;Icr2_0PH8LsI}C|;=D;~i1Opp9WJ zZ>Y2?s1aIjLFuPI$wuq})RDJ`=?({(THR92GngTauj8{Gc!}aav;7rrBy<@&-S#kn z>wc-~Evzq9*MU{6gfH2oF$E({(4tFU&438Y$N&tne{m}wqG7jr3+dbth25!xwBWKWBDn_*f0Y79N!t$LfDa3Zs+lM7`C7} zQXh3bPlp~Bw!Ec^rlEusfONtBimr8v2y9S*2@Fc8-&D|`5lS3rUmBY6dtkOjKY~ZX zkIt8X{>UV8Oa(?ZjL;>!5aiNS*=kj!n<_{7$FNU?c;M-{SLWQBt1)`Oo zesmv2O&EI-TZ=ouPaK7O79e_Vbft<6bP_3J91f6wgdD~|fC#9WqraEhpY#w5UG^r- zBhoS{2z4Ik28jZMNjEk>aY&^N!GP}s`EU<3^dS|1Ibx3re#*F-3`k%G4re1e#Mq{k z%W)1(Y!Bt1$u05HxFN2B)D)HLSa;A~PxD(O$)NxvWZ~n{vB@>@GYng?UX12WSFI&A zEbPHDlm38-csbdun1%TrTFHEwi3KL=4HRSy-8^FsAVnL6Xmn5z$|BQ40^=O9wh*-% zYNDsLwBsa1U`A zXYDZbfX*yLb1*3$ZbH)av#X8F*ZIw<+wETV;9_J1xGB2G*_gDSb1*;`V1IgF*?q;^ 
zF)(auVLP+8=mo0n_ipAs@*GkED=5=uB1)7PL|f*;7_68(Gl6q}Y`J{;&lBP#HViBThBhtyjcc4AGY{e_)gZEKS4@R$7>JQ1eZ>>fgXUnsbOM% zRN{f5-xu2qoYwCjoM5`s#S1PCgvnk$DXc(wBlPJTT~kL*G-B2a_MvxAt(BPRLE;j*Y-i>rhP~4{IPxNZcAr~&D7msjHuMtEwLDY>H zFe*GJ5FL=3#3pHPfjtJRg5VGHG=P2mfWNXHqnkdWFQ6$iG(dJ=ahLD_SWP;=KlKZ9 z0npJ|IlX?CwO`TZL_1S7N{73fRtyP!;{-bV?9`~(hv%p;Aj87w;uon2MhK=y3?|f@ zv2Pi*W_)849ahgMeTgk`asW|{5j&D*=Ly=A5X-iZ1&x+V?4-YA&{yn=qWulCXf@7= zC$mgrA3LHJ6wNfv(fg4^uQ&%Mk{79QXU6PQfi22(3S2)xJuZs^gW`;SZ)+dN`NV*b z8|Fi_F9t@JSb_<*IR2NHEtz~0H%y?R2*)S=Q4p>eH$tH0LGf_tH3|pXJD5s=HzUr2 z;!a|8q^92W3$5tzq=*+4{s7tvH~>8hi7y6$Do7}MJVoRth)B9$h-omQALB8-g!U3+ zO+CkPOQM*lDzE{%Wb3ja&?c@5TU1%0|MYUG={!o0!iCpL<<7q^X zUNfwtO`TRV`PPx$u(j2U#3FvY@DEcdUL&3g>K;4*o4T^U&F#!VgPEh9OEtXP{>{RU zaLuh(H#>b7I|U#Rs4HTR4DEUoHag-y;C9!q7@`JWovP`sJb}da54-6fzH#6ECb+GB zN@J5LclOiAE}bCV0tN%R0%bPQFUlwb%bk(D$a`cFWINap2HN!{R2_E1{iw!q?5ND_ z8gCPyp>aLB8VJ+S^-S#R`(%$!z{t5WnGOxC1%vyFHExql-M;j5g%l#>47I)kvrhDl zk#g~JWy3)7R9+4LSx7%Sm1@H9@INsz*-tMa9_nqBn{geuts;ky- zm;w{6P8`eWe$;y-9z~72HKD0s>KC#+Le53DYW1hrv~;&+Cq6o02s~mB%2dn!irR#;h)q8(P3X&Geb{cquud zGzi7h&Q#;w?NPn5XR_8$M*XaN|L(MVPX>ddz^YfzD@JaxYa^pGH8j4B1XP}$&dl~+ zogaBU5gnYd(RHBhF+>-Pp20pKbLK3${;A*f`w4}S_9$lZ%&r(U{=xB=z5ZF1RkfP( z;XUn!n8`3$qDJ)mj(Odd&n}PYwtK|%Pe6tarDS8&jI{51NuL}>v60v;VIvqC1N8;@ z(NFu`o(u~b3fT{@zxmyQ6Xy99(&mD30)u4>91kt8UZ9sSmP;bCi^PJ9O@6J2?eqJt zt8Yu1iR5qZ)q767O6D0JhE4<4&^=BFqhH|-+JFHWgAyB{A&^v}O4_WPInS<17%imd z(}2Ydh4xzl!EFwTySOdOXt<{-k67A64O|Poy$>8r1WWul^cCGRY=ka;ngt<93hW#H zMZVgh85nt7d~pdDA$y0n(+QH1CUo!xuB!2?1CF(_PS%&32>6lU7+`{XIf@>B{}uC| zJVzbu#3-iyUi`CP-bs$KR&2VRp;d3o zH4b42nTVtDR=8_?fp9^pFQzb)`6#(X1LcKCTOd;)u7sy=hC73usgPt$df+pcqrd?B zX)^i*S%(ToI%WVyV_P)V#35rqdN^Rx!7QG?InqW%2amf3QfX+c*W4N&cQm{>8O*Im ze~{SxL48)gCb0n;J2pUDRiZ=`u(3%uZ<1-F@s+l8y+HcG5SeE{4HHUw@t7+J?}mfG zhb`9AX}I}g=GuW>oOnX5gu`BZ%ZyVFDwR}T@Ai1!dUK5}vL_ln!As2T;KD%wMdlja z&;HK>?rqqZT}gOTqWAYFI?bpq+?X+IFBXu^A=;cU1L{}waK+t%uZiTf`0)|5bj9cDvX zn`wg`s(-a;CO~#ek!T1P z6tx}d95p4%N8U%A0z*%wZFM{2`e2HDH3 
z9AaA(%BWtTd)Pn7({RKcb#&tcaRYl_lU8+&7>6O1(XTWNxqd-SI#L55)tfer6<;JTjvR*8pP-}| z)cMiW-Qg`}dPtm3O?^sLDZZwY0i=WDf-~SV%x}?@X-~52n8|3=KI-n=a0AIv(|wnt zcJu5Y_o3;-NDnQe&t&~7sqU`p{`Jtlz%pH{9aTKGR`vQF(qo*zy?@W#PkDI>gT(`f zXG6&5#8J>&zX}Y9=H)BRkLzq<>=wdW*qyh-;F7NaDGX*Ky};{`%_0LquX?0AzMhwo zF641PFzt?c571)cL;=n~Uq)cU7oi^lCf3pN0pOcX*qoGoBZhS(n^w ztf4_bwX`2_<_TO(Z-AoAex9{&JgJOCxKz07nKZ#X6V}0X@8MtIxr{#b(-tSX+dZ;C zHTx%J78`F9IHL2}Dt5ugR`IJJUss&`b|AOH`G6t@hAfE=ANTlVLP;6`75D}juSXvm zI~L`IO$<>>ErY2!gQuYp5GQDyE)u!kv?d5qIHyE9Qj8bpkGqm600Lr%hYCX%w}Sw~ zz%?Y}&=<4#q7f_RhO6800ECPt@rCYfuToR}uV%>L=%GweP6C(K46h-(I0SWjX3i2F z1`H*#1DdEzZ=y8^14heiVv;z-Od<#%d2A}f7e$flvu`g5TNKzmfR!s^beZo9(po&f z*dYn|P@>A>-2K$#iYzMp~2DVK;RZgC{tlm=!P{gMPb# zUVXedQ8!1iCLO!(9p#G68DjzifRzYD&EV#kGP{QMe7b+o&P8MsIsDTda|VXxK!Z2g zf|6g2zsGMcY28JnfmLal)I=>o(<)5u-@iuRj80WrLSRc!bX#F?-_zrO*8v+5F8Fv6 zWL<_k8aZhHmuGH{ncyGDWF+0wX2Zd{D(1n ze%?Y=v|Vx6G>mcnfeAukyS=|>oGNq-0tWmXvEi{HR@>L%hq7)lfOo5Fx!{%I~F6PbQFPA=LXsK7%RsE)u+s3|imf@sj zD3HvKJ`S>bGt;99+q|Qgjm$~t%x>h+GL&wjx0n6`lD7<7#~6YAzdoy9q5#pVj5%tw zS$X&m#=QTQ^S4W)Q48ZTkpO*%BszYH1Bn2}V?j5Hns9!oP%6oqq%YcPz}0l`LsCM6 zv&lgxjJ{<)EEB<)V3$53t;5nW^gaH%eQMwT0pWqMHFkXoRUE4dyljHr9T5Bq7|sj} z<`q!Hk^rDmU_YYhglQfsh@^A4Lc(<|5uSq_`DVgK+%R+thR+!#%z1L>pK``bhL@2V z6XVj3`jIZ&$gN1+ZP2=upV-*3ZdLt;OQv+CAMGvNI@isAH5|sA0U6JudyoS*W(Yj2 zLVRp&F`7NvdqAM5N#1mYk-_l_Fn0t;61$v(9vziqGhjTXlaZ}k z*ovMPMj&K~&(rLNbY0OhBc3<)wAco``+@u2#2{sYvmJb>Y5e z>#01=u~uSAuOsoxKKN)g0O6DgR)Y${A+sq`027aJoSVv>CBH}ji zzql+tjqx*{QADG}m1S)cs~_TooH%9{@4@0}JzKiejSU7gB)Rc5(w8i$Vx7&<8u;L*zI0v4?rLmP88XpIE1 zgPis08p=G%fiSiNBT_@oxE26O4D9pw?Kmun%$~- zd1UA}G)k@ZQHP;50-hm`HImF`MiZoJBWjXyy@34f=orZXauA>-an5g8VfUt$2mXuI z1u57W7KlYmo44cLvY?08eYHx973JH|59Nx;rCRm<5f(lNJ z(b+HPm;!N9f<|!;Uq%yss~|JeVxxw$rl7y+V{gqX{DK%c*|nlxXq^!=ZyjbRF1d$M zDKD;~v`;JC1>>6vg!6eFhsYCGJ=hL$ZsYymmzqxDA{XD1J@L_Qo2N9r5FJ!Grgm<^ zh+(W`j0fwjA242I%ohQER&V*oS_!HlK{#Sd?9nbOP%ENA|T&O#n-Q(S<~ z*h27Ez^u4b*}t!EZ)2BR$aC!I^h$2f2AD!mp5`qW-DpOfqOgY)gH`*k9LV8$P&SDo 
z4`W__l8blrH%ReQrY$lw!=23fv}2;TDd><4-R+mh+lwEHHMX%&EtLu3U0;MH$Ky#k z)TAw{<%IDRLXG!5C)Zi?BF}E6gy-0nAfIrzYro*ogXE2%`K>i$(qJO&^5bb{2+sr> zCfq?ELB+vIpuF&Rm%3*HYc~vAicM(7(1`a1)6|);CvG|DY-?gS&R68GW^>sNcVpna z7o0tENu|T#Z9RONu<{JP zC(cmgeMhSsWxR@0S?MGP#6A+>+VwH@FTMnZpU^c91g`##S}uPr&F|^DRMQKLN)#ow z{+!;9R~@h2`ShVZ-Q#q(37EYHszI@{b74mNwWm&t4a)EEnt7Tn@4n*wD@4e4f46tt z18x3*7gP(xFW!JKfM6~}`c+aPzC7gadZZt!*u$OhSI-}4RQxAf+WT<)%>RBZ@lo8l z=64)>ZUWp7Sqs-#TiW?~qy+h2G2g31S@`%)7&9N_`~P}|w4-v~;JMPp?GF%`Kdw)V zEvABwLm_wP$G7*}r+@YtuQstzxR)C=2t%jHh|I*KKkwI4JOul@kbio&q@ubgWq5!m z;@w()fjZ0STvtCQ)01p86A;2(FfZNw9>`^xt%Hw&?#Nr{%L}^q3G*FqKJnKZs2>L0 z30?qkBksT5$Eh=C|LTICe9a&q>Kq1MGc{c$Nxe($%AF1rJX6K<`A zb0`vqp_J~*F@h&frX&C}V1x1%ew08|nfv>3_?NsDsGDIDVpkZ(cx+*q`s(_k3h0LL zfM6Z6ac0xYO|;+v4ZoqOgIqUX_}!+8)_~IOVD)jAd{6DP;J+bblCk)66xvM3!3XbO zH*(xFHq*TR$9p(+tPyt;W)Q$)l^$r=$91T_CPedUlUAQTsz~4rx*6bqqiv-)QZaP{?-LmohL$y?W0nat2w8ghY6oU5OF}W} zGWPSo{f0nrj-85h_*GqrZj;CNI?;K7Ycame2MmY^N}$B5bhC66Rp8PKUiJ~EMo9*# z#-2KIHBDncP(*WLCUZe$^Hz*sP2se3he^dT@oop%j~+#{BhH5SUa{DD3H@Qgj{rkL z?=tdcO5Cm;Lu?~M&#VNdw0Ni)o&ns@nx+@{ zI+GMd95P1DS!F1*09)ufW}=a@u?C}=5~q1VPISSHx6^dPwCi!k{QHasILpL5qC0Lq65#Of)SP%Q zD4ig!NwF=!$Pgk%+`DKs06m~2oGgVI37-yUCOX2@KyQKpBI|@9$L?vpgi2x-ORCGK z2xz^2kopyPk#?{gEuCG&*zDD>q8ImYPju4E;KB$%UWgrF)RbHhx|JxdUo!8j=B;Jo zH$*rsH<&o5Eq=?1Kn!2P4GRBb;x4N|=Lq&9-gCh`9QX|i4~fRd(8ZdU-7uUPqtd0i ziaT2zDEZz{w8gP#kVS-R_J&>&bO>7=K5D4q*zC+T(Q}Mr`2^1Odc~9*SSUfy044)T zWI+#D4^um&Yh$k}as+(9QXF{)S%xM~IE*hr+vLkYN1hxG zygH6(>DB+o+MC5%mR|RLrW_;!3?vF{Kg4hp$$N&` zd+v1RdC&9r?RlSlraRWXHFj5blf{{m*rZJ~B_(krLAEnM2;c-l5I~6oBS4<=koekuNRU}@7`qBA-%b7pJ+Ki)VwDdMTt=ewx-@WAS?LjXis1_^$&o^s z%GV=apaFZkkp_ybwP8jop%YvZGKxwjOZSd83I^IUV2di9Zne!EMDlXvhgfRj?IVgu zh);K6yXcsRS_Y~biF#G^Mnw6J=ukK1wjt;Q$J?+o6TPxYAHoHLF#^b-#ISZGUZR zXScII4rttgGuZ7?6f;8DK|leghQ48KjaEdQl3EScx*E>|z(k=zKfh`Q2a|DjX>o&a z^so<2zeB+c#{<4?y?Z!nyz%wDI$uFAdUt*6{-CqI z!^nd=-IaBG5{b9%ch?%F8GQ}zMbOFWmj~ETPgFk9-9Nam=jPCiI03G!N1=MIDA;hS% zs~XkaqiUc9h9N~sFs6c8b;vaCKMke{ZQyTr`&+dCA|&lKZo=GpkoY 
zQ4KJO5gB`s=WIAK=ZYdx*6RlYL#y9eCGM%Z`A{GP0^Zwb+6c^Pc>&U@tUMU+URgv+aZ0C!2h;}*PB*QqYbQ`A2RCPhu z_IC@oW~yyy>a#bXf05Oz>3vjK2Y2>2U)@AuK||VVJ_y>6UT^j1P@>VqrenaVzybhp z5M)EHV8tumA9{7SJEQKB?t&T<4Tcb~Z~`ASTzket9s4vzLW-dO9`mHMcO5kqmNFV#kHaP9W><_fLdTa9~_u6B20iv&bF z?7q42*6xFiowe-;cZd;2hQGf!nO8^^^jJwK8uuSyiqRr5_K(&HtWKq6q8JwMS#g}~ z3o6P+Rh@&qBkHZh_yBY$4hn}G^;^`mvMn$^AGK-$T2ZUSut$aqRBeAMTJ%uGRIc%& z6!wvut_p+6CRd_ft{$miQjP6{TSO5D%AnaJThp$vGmUN;2Us<0TwkFYi(-$e@o62! z9p|f(nGl5X*z^qJa(C$4RLy?+$ndg5RjwIGg)gH?LRIaA&X^j)s?%jvq+P|skh|3v zkxfH5K<)~b2`or##v!7bcKbF>+X=23%otopf(i+rSvMIa04T`xKv%!o1z7 zTO@PP_<_r)wue1D+BK^DRh`{b`5HR5wT>9dN4!!Athqb2HdCFbU1Xvzcto!nTj1VN z;8Zz4ZRcoZ9d@g}(H{+=&kh;gOQ|49b7bzw?5k$ULHFQ3BnM@(yl|zC>?u9Qlp#Fc zzC|NYD#Ji8m4bt?m+F+AAU%s*T<`KhW+9P`M+^wM_JkTk4 z?sxn3mDcEH3+uODGg?R19_{l;mO~N~#;$JF*I@CHLENLe{$_o+S`WL%q7p2U*WS4c zKC6=8^zu9GQfG89W)(;RQW(jcIjTe^z4px(;Xq^UVB=P8Wq0?Djm>KV9mc5tMy>w& z-f&YNe5oHYuLC1e4TK_)U#*$D!(+1etExVTITrM?KuB+)v*<+Y)`+$e)ns16K41pc1qu_vA)*?v4+&=KQL(EknT|sm zF&!eCXpR*uB?7slilE`*@C?+fRTBs*Ny8Dh_yLJjG*`s;gc=lQR3oA*n+DWGcwfai zU_vfV6>AdyqOrsgRtBG};mRR@mn8KX_pQ2=yjazl5e5%+2u}p6?vgR)rH~4M0xNXO z-=%BAUi~JZWt~=T)c5rv@_2Z{s+;=<{Y~0;toxkngC5QB=?Yle-Gpm>!{6LfswOB^ zXG;B zJS(|brXA4*sWL40`r`Uw=O$0sW+((v0!3|o6#IaMgQM|1It~+;*FAFn98m8>E8EP21e)FdL=&UYAaTPQ zRE>OEU8Rm(o^$}Fj$|3HMgo((Y9;k%rK^5-h&DhiA7vQ`wrS4<6vsMKumZ!-tM`}~ zRWUZ~P-8qg>iHd}bR!OF_A8AUn3{nF)Zn8VsYKe94Xo%L%n7O}4h=uVi&p)e_!7MC zN0lWcCE*Fc{D^!o2pA5aDt4P=$HzG2mFr;BY2mw9wSL+j9{?b#)CvV`RfR^ChruN6 zlS9F5RzW|9mA?c&;*eBT7+55_^n$tg_vkm+lqy{2sPjm0859k`G;(5S8{BN_-R{sw zxl5{pR4FfmjfJDXvw7n#6PW2l%Jhp$6t>de8Z)1SJ`OdGJA-^z_jDSz^|tD@)dy9F zecqLBN(7wPuGgtiKsw&pr`B+Y4sLwcGvHZ3fjk@IZnmf!*Y57U$*dOiRRMbKswp`- z@q+>L2k$Ba@G4u_-MY1MukH=|T?b@*$P~Oo5XD++x2h_H-e{5NU?Z#2IaYm(pyi{o zI2xW*jiS-rwCDgsDq?p;J9FAPlEQTMa`Ht3amNGwJF z9;8@XlvPvN%a@aB0Eqhv*m;gH5#5C(dD(qe9mfL zaFrh^rD>Z;PbK~xp(2u!id8|L(A;7^a5a~f3>PP^iu#882O-A>Nj;fd7#7ZKRh&)s z>Z6YCwCp9-vqxW)s@VmdwyCY;k%2$gF`H|R&RUH^3Fhiicv{!`I64|u-hDt{d-kPu 
z_ls0S9#R|#n?wk6&?CDA1jksl?rY#>Y)kEE4X4BSpmx5wH5{Um?jF5e@&1)7MXrnu z?J$Xg_c~Y^v~TF0_5Of3c(?_p(pW{LUapgd+0=V041;GZrqX`vlu8vk)U?_vBk|~t z)7+FCTD17P+wWG&qdKEmXSLOC&Dh-c_ud-Tw>D_9>T+uOBWi6b3WTEJ&Zc1_tR;x1 zZxsqA(8b{M3@kmU-#OC0MEfUN{P*0obqF-By3PbYh#*o?m;=re?aB(1^lE37lB7!a zRSg!Q!3I$Z+d`~EavYSbZ^QB6T@E3yE0r-pEVKnR6nnjZ5F4cp!YLLHsNAC0G1Ph$ zesjdRVqiLfng?l7^Jq+6-{EGTUL4dZFg%GUfYy?ACiDz$++p-42|#9c103PAP)EQd za=MzYtDdB%>!p_kgi+C5^e7Gz;InhV}$@>R{_8 zUU_@t#@5}#wY60mVs~1bq_ArH_p3>i$Og8Dt#z>mKxeQuZCl~LR_XUkf`hQUsEAC;iGwyBYzWb6x< zZD;SEKq|90sNx_D(7|Z^zIy9(&F#0wFj!;@x{OL|9XK@or<)h4XJpNk?zEBP5PNeZ zY8wLF1$lykk@LU|MlMQ<4e#5QGnPyTz@dQw~8}5>|-JKj@E09G1#tBJi+Q7tT1nqIk9wT-Kp3r3X)(1 z+EoIbN*03g8XX}9n4v0Ft+cin?@OUdHR^!zOZ`1`+I942x;6+?*;bVW0*_xC*~33ZFoxue{Q2Y6@dEDU&!fD{l|Yi2)<7W?6}05$00K zg%3m?y0$|rcqS*4f~z9_c7J!URh=I$CL_`+Urt+yIpF|~n(s&RQD^Kj;}eMG2$1MJ zgzraT%j|Lf95TB`M|R$eo8r-gg})AEg)gtVOS3VEX5l;;CB(xc7s}m`h6z+u289&#$=SWb%(5y?#WXi=27|$gepmMPH>(NjlpsOxtPzN8x=a~`(WEhLzlNll z9t0{OH53%x`=F`FvQ-frC8zW(&lQjG~8VAZf2vr#uqnj<-D`D^HcL8=+iGlEp zm`*B(=x$8CZf$3616$srR})6OlKxgut`dW^wmJr+ALXSC>ZMi{)2(XMGEyj8MFO02 zNTdA+S|20K%U0dmImd%4{6Ao5698hz$MDdIHY_sK@5hXp)Vdb@3qPjz~6pLH6?FzTRcb48WbA$@-c+mC3)y7N)H zPga!>17)c=STMgGn7~eWHZ913l>|Ebtm<&ykpkuXr zHGh4}pcOoszyabrmL1HEcLm;3F9yxjy2EDBe+-s`-jV9z9^L$i0WnIIK65R5Tc zgIdFe!G`w9FVMW4(hXWgc30{Oc_9WF?wKHVzKw8|d<2g~X!23&hWkw^40}T>zAEcL zz)%}+K?Ku#8j=$tkg&6Zjt-=R$9i)YwjYH8)vDYR2&**)1#FOoWU z>jx9tsRlG&Pw3JKPgd1a0(6LqTS)RrYLk`YEo!7w56O1GfqZ)B==If>LIv>t2(^9iFyJ4!e)S|r`2PJyffh@T*WtF!|Ti*uup zo+>?tFCxbw5m%!GgDDF1g2RKev9|^^I~p*IjI>dOn(Ab{8)NV~`5_X;w9-G^yxW13 z!9DDoOoifG!AJzQxGuz0;5_)DDhr2Cu9${BWNsL=sx>)5RYmGmq7k6cN4+ptTtL*yQ`(Scy}>m4}hBPJq2Whz8YbizdDrIHPXTENcUU0Z!{YY%L7_@=hC zanPqaa;tKsH@1~#av!nWtu<|PUjd{XtQ_vGcgAB*ZKG*!(}f?`)OhV)2WxZimgc^} zTy!XH!yvWRV*v5tecB_z)l<)lFF@VAH!x_lK~P?Qa9h$gsH?yqb-Iv*jsBf`wJlr* z^|q_P=QdO1cYgB$wu`%rK_KktQ%I+DnrbbL_qPAxZ*UwbuKx1BN0%;jV`~%8&84gA zCYVaurw!^_*8PgHbElc2MyH27Te(6gGN9W!;WQhfwd_W@;s9fl6TrQ~Y{GU?M1j8|b4p;R;sl(x3w9 
zbS0sLjuB7LgedU27g*<y4o^fh96a&;k_iHz+x-a*(~x->47@HH@#n23|cPbI#Ns z_)S`)RN;J+GRvK1cUz-{CnyYs9TNp1WT^WBq%fb5fi-yewfzAsI^)+jY8IIehy@Dz z2$iAgVIFtZnmbHN-a(K;;~@OJmygI{^zVr3xr5ddO2LF2vQHoe(t;JHqB`TD+A zYx_*(V9=Ph_l2ff1rvLNxa!Y+@am1*4{qLilU@)|lH+aKlTrwJot>{1wkli46oI-R zT1nzzzRAguO)-ZJE}?;PH*R)wGGelM8bTI3L;H-CxN*OHPJqj`oTOa?sKl4)`|JYx9(j4%>Hm*!{>z8`_p2|p|LC_q^~%4)Lv@+(!jvS4kKWqa zFaOFful?nR{)7MPjehp8`p8eL%2$8xr+@CJt&jaQ|NP8Pz4Gt=RiAG6H>f~XcDBE4 z{am8i&Kr5)`iAO;wy6gfx#jx$rCEetw{V?NsO{OhS4z9DO}%L^@Pe<8HIX~g2o$GC z?A|{;aORF4Y{~GJ73xM`h+aQ1?AWpWRGz8Bm!|pEf#^ST?V24etg>cXa7pPc<0{d+r7U&oMmPIvP8k zA_9MGx@qhP-BS#|I~I;@*@0~ZQ#+DIXKZK5i8BktxiI3T{d(ZnvQ5j;M5u0i=7(eZ zgZ&?0T|aGk|7i2>OR*yBZ~kFDZrmy#>c9K!T{9A2e`E}631Q)9pY42leERP6`suQK zXZ3d{@%pTFF??En?c0NNaI#h`E%zJY_l#z_|C%}2c`vE2%T;6Wo6ViO?lbrDh57O< z-_VYB=8Z3&*4x{^qCWk?-Q}Vix?1b;*$=<8eHOg9w1S6c-R{M1W20Z+comK&x}?O# zoPX=Tcio0E{A;29C>KA72anX&Z~rjX(!-&$xy(OM?pniM5%0+d55Ku*U-a`IXX}>? zwO&7Y&vV*GUr6+DHq4I?_J8~M%}2$bJ^cRJyI<%Y*wb6yKUn=kJ1icKw+~<4@tpcU zS7W;~p~s^CZ2NL{_V8l*ynALg|6*GW&Tsd|U+tK$S&{O5d3V-cjK13p?%e3jMhEwk z?aAydn?0$Enx&7bmqRTH6zU|C7;w_BkQOiA*laZaA6^PDg&P{!BHm&L1)v zFCC4`PS~9`2Y&Kwd0G|=G4D=&y(L_GDuzX~w)D)Ih>T7zX6%ypJPv2!!jOka(d(Gs z$maU?*i=nrmd00?9m7pB1`0p+OhtOjhh^E;GBvT&Q)m17d_+y#RQ8x#qKwoua7A&N z*3V3l9!|SY<3;<2$^3@;c6yngi}W|VA9RNP=Ew_XR*`2z>%&ie;p^&WjX%A0`$ji; z&ly}9{qsNB*`IxV%*E+!D(=Rz&~CK!I7yE6^s~QjxrfWF_TE~sm4-WS_j5f~)8q>y zk-WUon%w-J_07N2l(*L=Z`!35JW&m6;dr5=uZMnJ(^S)X7HdOe;y2X&V!Sj$!yO3K z&c^$uF&f{w=crpF-|G0mc${RvZ;caO8lO?zy{KtOA^J0aUzgvto%7-Fx#^wS!GppM zhL-WR)Bn0h`*lCva5^(q$QxMQZw{sXwf0MA{DTL-vwQL&9pypu=6~&l|NH%SqpybP zwy(O4`m^TcJN5ttQU1F-ZPWPG4}bLFwEe-GZ#wr^XYTM&p8c&aYG&u&%l1NlRNVYl z{p4{=?QeOD?GaRN@$-ISm2$J@bS$+s z+guJ)XW;BVJlZ>Tqud{jZ140+yjD7PLvt)q3NK$6c5VdbW%$@~24Of=?l(1S!rJ@w zvFA66`7f+cj;`JBe+_ICBkQJkz(NKhkH*>EKsq|Av3^ zv#o6O?&9ChJU?=aU+?_+rLL!f?W{Vt8?O%+qy7i|t;t_Z zAO5o!gRC6wTIs02pP(qUq0~``Cvm{ zcmK4sz8Lx2500e0JKtSg8i()Xor|xSbLIHM%P62LN4#`SPqzMEU;qAenj0ta&Ccb! 
zm->G2&At66o3`=!{zjp^zP%@%tw#ff-l?-COz(^+$GKDy8KOyc3J@-}=}c z<)w|@G7px9>$pL_7+ZaRG}P6|Hw`nS&cQKs+fWkEGCiXlnu^&|&9Pw>&HG#Efa&7g? z**o|5qs`Mq&Z+FSj_xn9a}Qg%NOT|4eJeK8*_lwRJa zTv7_P6YhOu*t!bCQgNbs=}qL}FltWU#z79B9TiqEeqozu!FNZl=8TiC)uWy@*hr=9 z<^!kDR=r@Ec2a9-?B&uP)O}UyInO(3nFzvb~?%)%;p`XSr_)1 zaJ6tUDbiVDKR>$k9MkbK1`Zg$*H0W>v8;XHNgp?Zdhq^v`$WlAZysrzi(xK@qk23F zoTMMxQBR2690+NQ8iqZ!NJ%M+i?vj%I4Y97Xyt ztND{$O|9i<=6H^##F^3`dSP=GsN;p5#l|!qPy1Hl`+eb8&AyOaJroQ`8!X_U$`8&e&cLq)xZ4uu$jC$x4tuIJlZS&anFA7wZ~ypth7Iq zk2l(3n?@ABpSdsl!utMKpDvn@Ch5}gZ@s=E3ia8{){d(z@`pjyM!*-&ttdJeueub9p&V(xvOiuH|@gH`GmI+_$A& zm`-3iSC;Hwf80ybbgB<}I6)^F2`%dH6uD>|?w-agP+Cfe8{_NW1RB77h<)9mBLxkcRpVhuI%sS!NeD~-mp@rmg zqop&8wKMWfD@e`OE4|<_>#yp?*OE=Thbx09U29}RJ-V6Z`N5H)_S}a9XVo{GPHQP> zpsFs`Qg7x@o9kZNxB5bNyT*t4aW8X@&AF9%E#Dj+_*vH-YN2~9Q*CV46(i`@v_!F= z*1}hVsp%aXhJTs|(ZhaW=mVqoWZtm7Nh#%Un9MXq-jeOmh+EUcJW?FZ@SMq&)i(5T za^wt0h1m>jWZYJ0XpZ`JAo8pJxq}8W0%Zxj$MX}%HLXl0`dzD=CU!EkpUoacz>3Fb z{bo6~y^-p4T9ZW8on97{-M~~Qa$L@hBs=Zb&zGaR=Whh=iL?J*JD?3t-;BCXlXQ?B3)n+r|ek>~U&*_UOIiiF!W#g_n~9SFay``q{m_5Et{I?Y#Pp z?s7ZPULQU8`se-Oubn?@uYC~y{h)l(yI9^>tM5JfyqS6jCtrEC`go)a8q(>ekM5mL z9{aIbs`;}c&-%4i(aVnNZ)abcCauNkjj?e>-w!`)G-a_TjTWt*ca{3J12+ry_4H)u z(D&KsCCWczt%FYSh2zuV$WqpuTf*e1?nb)j%D@SxzptHimgAiuGtDOb+XwTS?eFHR zrvBq1A2Nj^GL7O}?j%w+j{IIW+dddPI!x0n&YP>nzKf->Y|jg9Ten^8ybY>1ZO0D+ z$MY>vnrXO}E;PsXT;H+rpq{W5+i)z~4J?b#TCrm}gcFu++os3o988ZHxPj|)R^951*yR^rj@ICfW$-3zyr%i*c&w4J*C8S~En<``4QjeQzT zH9puGzL8khUmWL_ard5#jWP2T&HUL$H_Bg+^XEs%hDW+CJ6G8^zcfkziA^rVoBTuJ z4BEm>&ccrS8wbX}w3;t})?9u){Drf|GkJIRMt7~@WOB9Lv-45pY4h6LEXAJ4++eb! 
znkUZssVf4l=t*4JJv#`R(^0gD#oX^#%=@BS&eiGYUs)LbtHa-UB3`^~B%OOV^4Evo_jevH*T46Qry;eJ5B$*+Ms>cn zy7gN>zUF>=`lIfzUZ_bs*w(^>&kUt$Mztr-?7Q5MW9#u^Woh>J_L=z{Z2wm8chXFK zxnp)}NpWEQrw2PvUUbgC@rSLY=l9*kQTHD_>Rc|K{g)@{_|J!f-!Q+am-MbUd*N*c z{cnrpy_NG|>)F><;LuN>&CbR<+um;<|CaLM$?EjQlezYz*LQF4eR+5L{oe56-P%cC zyd|%`_K*GdjNW{_Isp-tjKQih6kN{mrM{v2r%3FT-Q6y@bu*s9_6l$I4=D@=1>tKYy; z*oH0?KQHXaP@hFb&-N@sBT9{bo!yA$f9lMA&vs|7*>}}(*GoSBYZb3xf9#b{{J-)U z8w`*738LYbE7s!l?hilx%E$jd{nek~I{ZKWpML$X;5vTk*M5@g5FdNx|H*amk)L>k zuU-9AMRok#Pyh9w{_B4w*}=zug6z2Zsk?{a)urj!X{fuEDKM}xu0MCAYX)H)9z;== zCaH+MND65ivTu1Jvb-WQ>v5Wd4<9Gfu5X1zgo*1&5y^hdzBp5D=`V}R**MFKW#EUw zSSC>-Cb2NHC=Ta&WcO1MdXb-IW)w=#_d}_rLF~EF#P!@Hj&P)|=NE)zwja(l%xV~# z=F;<|_24!4(aVt$dZ8PcdKSq*I_`xkv{E>s^dGuY&${}qvhnRuw!A#b6E_dcB%Q~~ zVVvhlTl0cRwhx)QRMqZ zVfB}88Bbj~yPrz_j&Oz#Bj1feGfH~7Z)d3!#H#189Kwq{8Ot~}vQ(X#k;HmN9xv*P zI2c@ld|j)a>*-9KIl39E@5gE3Ei#c=juA%D)S6-SZQ)M@Zj!$=ni$bBOXBjpP@1jY@>Lh*;gwHLEjp=wsM>Y0)Q#QAeJ^?@0vn$%`iOBii=EFZQT-H;BdAkWe{jw1YR*@ z2L)W-REFB-Zz)?boML@ zRZnuAIr~YVdZs)tW804GAZ7VYJq-98x*H|Z^az{uW$8Mm>{=qTdG(VgNzBnsVqa$$ zd^eDm%R@WeSUCNtuL#h+!ul&5u)xd1!ZL*I^J-?8>b7kRbeG#8{WuKMMCOqb zsJ^2BZe*Ca7nWu!u9v`za*jD*PRWvlk!rhoAWYNGqc9=xj6y36EGP7d2SUz!9HzM} z+^Fy|UqS2&Q5lH7Du?3$kj0{gS}5Fz|N4=fX0h9`W#25KKvk@fFN0Dfo;x=Qtyx{0 z*doirOG~}f`;l~b_`q;;p3k?_G?b~Q$;`pNSloihcN|v@@^Q(@`hG|0rjYdp5>li;*rdAY039WRw!9uA0otT&zkWc|#EyBIPvlujxx zrCx+imW-_g17J*Dixm~6ttZ+<1k-?Pkwh%9f718lzQ_8`V&hEnq~WWDGd<(%$eCjW z97w~BVrNp>2pZq&kueTzClBRh6r@Ew%;WKkFiiJt^4EKWV+i3xS@%tn?kV>=eJ+$g2%sRHYqsuo7zxQumEvkA6r<6=*y zm=QM}pGc8=LF8w;2;E>jXU+J6D2SX{3#*<^`8~mrNz94tO~dL6MI0dq3||CdB4b~; z39iRUWSDVj(k$YxMYe5q+>j5rBg3=YP-0X&_noBllL9jU{*P=wm?R#PCS%ARZguYL`|+Y8ko{arFSevVJD<45(A9Ih zHcpdSkVW8-i5IEeMD509nvb1fa@@50O)s_~1@MceXF8c{#{n50$@dC7(xdKCc6?se z2X<%I54GU=yZWPd%$`1;X5nZUVT`Z6k*wbi?Xs2ksYUp_sB`cdz#%{Ol)7pG@!TWBVREqzdPx_~apF;5KFn?zanK({lQ&fV{P zGkEq=Bh${k_H%*J{OUYhf^_hLhVB-jmB!W}ipQZ(&}sNlh3f*At3rmJYr9V9Dl#;n zaf{fN029;2({NdI!c^U;;>bv|Ow}`hZZO%t=i5`PWM$<%c`BnzT|oR=3GOt_gw5*4 
[GIT binary patch payload omitted: base85-encoded data for the new binary .onnx test model files added by this patch.]