Specify the operand data type constraints of operation #283
There are other operations that should constrain the operand to floating-point types, e.g., batchNormalization, elu, hardSigmoid, hardSwish, instanceNormalization, leakyRelu, linear, sigmoid, softplus, softsign, tanh.
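As a rough TypeScript sketch of what such a floating-point-only constraint could look like during graph building (the helper name and shape are illustrative assumptions, not part of the WebNN API):

```ts
// Hypothetical check for float-only ops such as sigmoid, tanh, softplus, elu.
const FLOAT_TYPES = new Set(["float32", "float16"]);

function checkFloatOnly(opName: string, dataType: string): void {
  if (!FLOAT_TYPES.has(dataType)) {
    throw new TypeError(
      `${opName}: operand dataType "${dataType}" must be "float32" or "float16"`);
  }
}

checkFloatOnly("sigmoid", "float16"); // ok
// checkFloatOnly("sigmoid", "int32"); // would throw a TypeError
```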
For some element-wise unary ops, … Additionally, …
(Feedback raised by @wacky6 from Chromium CL review.) For reduction ops, …
That's consistent with DML's data type support too.
Just curious, what's the expected behavior of integer overflow for MULTIPLY and SUM? I guess float overflow should just become Infinity.
I recall surveying a number of libraries a while back for integers, and they all did two's complement wrap rather than saturate. That is what DML does. I presume XNNPack too? e.g.

```python
import numpy

x = numpy.array(200, dtype=numpy.uint8)
y = numpy.add(x, x)
print("value:", y)
print("shape:", y.shape)
# Prints:
# value: 144
# shape: ()
```
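For comparison, JavaScript typed arrays show the same wraparound when a value is stored (a minimal TypeScript sketch, independent of any WebNN API):

```ts
// Same wraparound with a Uint8Array: values are stored modulo 2^8.
const x = new Uint8Array([200]);
const y = new Uint8Array(1);
y[0] = x[0] + x[0]; // 400 wraps to 144 when written back as uint8
console.log("value:", y[0]); // value: 144
```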
👍
[WIP] Summary of the operand data type constraints for current WebNN operations (ops covered by the table; per-op constraints not reproduced here):
- argMin/argMax, batchNormalization, clamp, concat, conv2d, convTranspose2d
- element-wise binary operations: add, sub, mul, div, max, min, pow
- element-wise unary operations: abs, ceil, cos, exp, floor, log, neg, sin, tan
- elu, expand, gather, gelu, gemm, gru, gruCell, hardSigmoid, hardSwish, instanceNormalization, layerNormalization, leakyRelu, linear, lstm, lstmCell, matmul, pad
- pooling operations: averagePool2d, l2Pool2d, maxPool2d
- prelu
- reduction operations: reduceL1, reduceL2, reduceLogSum, reduceLogSumExp, reduceMax, reduceMean, reduceMin, reduceProduct, reduceSum, reduceSumSquare
- relu, resample2d, reshape, sigmoid, slice, softmax, softplus, softsign, split, transpose, triangular, where
@fdwr, if I read the DML doc correctly, the L1, SUM_SQUARE, MULTIPLY and SUM reduce functions only support 32- and 64-bit integers. https://learn.microsoft.com/en-us/windows/win32/api/directml/ns-directml-dml_reduce_operator_desc
@huningxin: Correct.
Is this the intersection of DML with XNNPack? (Because FL3+ DML ABS supports int16 too, and FL4.1+ supports int8/int16/int32/int64.)
@huningxin, in the list above, when an op is listed like this: batchNormalization …, should that "same as input" statement be interpreted as …? I assume the latter (based on other examples) but wanted to confirm.
@inexorabletash Yep, the latter. If …
I have a local change for this. Lots of copy/paste. I'm wondering if we want to be table-driven; I'll bring that up in the eventual PR. The table is missing: …
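A table-driven check could look roughly like the following TypeScript sketch, generalizing the float-only check sketched earlier (the table rows, helper names and "same as input" handling here are illustrative assumptions, not the spec's table or the Chromium code):

```ts
// Illustrative constraint table: each op maps to the set of allowed input
// data types; the example rows below are placeholders, one per spec table row.
type DataType =
  | "float32" | "float16"
  | "int32" | "uint32" | "int64" | "uint64" | "int8" | "uint8";

const FLOAT: readonly DataType[] = ["float32", "float16"];

const OP_INPUT_TYPES: Record<string, readonly DataType[]> = {
  sigmoid: FLOAT,
  softmax: FLOAT,
  batchNormalization: FLOAT,
  prelu: ["float32", "float16", "int32", "int8"],
};

// Checks the input type against the table and enforces "same as input" for
// the remaining operands (e.g. weight, bias, mean, variance).
function validateOperandTypes(
  op: string,
  input: DataType,
  ...sameAsInput: DataType[]
): void {
  const allowed = OP_INPUT_TYPES[op];
  if (!allowed || !allowed.includes(input)) {
    throw new TypeError(`${op}: unsupported input dataType "${input}"`);
  }
  for (const t of sameAsInput) {
    if (t !== input) {
      throw new TypeError(
        `${op}: operand dataType "${t}" must be the same as input ("${input}")`);
    }
  }
}

validateOperandTypes("batchNormalization", "float16", "float16", "float16"); // ok
// validateOperandTypes("softmax", "int32"); // would throw
```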
Should mixed precision be allowed when the op involves accumulation? For example, would this be acceptable for conv2d (likewise matmul, reduceSum, ...)?

```
conv2d(
  /* input */  fp16,
  /* weight */ fp16,
  /* bias */   fp16,
) => fp32
```

I think requiring input, weight and bias to be the same type is reasonable (callers shouldn't add/multiply matrices of different types). We don't need to block on mixed-precision topics (because that would be relaxing the constraints). A well-documented type constraint table will be a big spec improvement. 😄 My read of the DML doc is that … Does DML use a fp32 accumulator internally, then cast the result to fp16? Or is it fp16 accumulation all the way (which might saturate the range and yield Infinity / NaN)?
@wacky6 You are correct - DML almost always requires the input and output data types to be the same (some notable exceptions are the cast and quantize/dequantize operators).
It depends on the op (reduction needs more intermediate precision than simple addition) and which flags are passed (e.g. …).
I generally agree, as mixing multiple different input types would explode the test matrix and increase backend complexity.
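A small TypeScript analogy for why accumulator width matters (float32 vs. float64 accumulation stands in for fp16 vs. fp32, since JavaScript has no native float16):

```ts
// Summing one million float32 values of 0.1: a narrow accumulator rounded at
// every step drifts far more than a wide accumulator rounded once at the end.
const values = new Float32Array(1_000_000).fill(0.1);

let narrow = 0;
for (const v of values) narrow = Math.fround(narrow + v); // round each step to float32

let wide = 0;
for (const v of values) wide += v;                        // accumulate in float64
const wideAsFloat32 = Math.fround(wide);                  // cast once at the end

console.log(narrow, wideAsFloat32); // roughly 100958.3 vs 100000
```

The wide-accumulate-then-cast pattern stays close to the exact sum, which is the same reason narrow accumulation all the way can drift or saturate to Infinity.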
Added the following ops into the table.
+1
+1
floating-point only
floating-point only
floating-point only
floating-point only
argMin/argMax's output is int64
gather's indices is uint32 or int64
+1
Hi, if WebNN declares a wider type set, I think we would need some way to feature-detect it (#463).
Introduce constraints for input operands, either directly (e.g. input's dataType can only be "float32" or "float16") or indirectly (e.g. weight's dataType must be the same as input's). Fixes webmachinelearning#283
* Specify the operand data type constraints of operations: introduce constraints for input operands, either directly (e.g. input's dataType can only be "float32" or "float16") or indirectly (e.g. weight's dataType must be the same as input's). Fixes #283
* gruCell: bundle hiddenState in with other type validations
* Identity should accept all types
* Add reduceMean restriction
* Update gemm to check c data type too

Co-authored-by: Dwayne Robinson <[email protected]>
@philloooo The …
Agreed. Differences across implementations will be inevitable (similarly, WebGPU doesn't support all GPUTextureFormats, like …).

```
interface MLContext {
  Promise<MLComputeResult> compute(MLGraph graph, MLNamedArrayBufferViews inputs, MLNamedArrayBufferViews outputs);
+ boolean isTypeSupported(MLOperandDataType dataType);
};
```
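If something along those lines were adopted, feature detection might look like this TypeScript sketch (isTypeSupported is only the proposal in the IDL above, not a shipped API; the cast stands in for WebNN typings):

```ts
// Sketch: prefer float16 when the context reports support, otherwise fall
// back to float32. isTypeSupported() is the proposed method above.
const context = await (navigator as any).ml.createContext();
const dataType: "float16" | "float32" =
  context.isTypeSupported("float16") ? "float16" : "float32";
console.log(`Building the graph with ${dataType} operands.`);
```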
Oops, @fdwr, sorry, that was a typo: it's fp16 not fp64. Just edited.
I know it's bad form to comment on a closed issue, but... the table lists this for ReLU and PReLU: …
@philloooo points out that most other activations support only floats, and #283 (comment) says that prelu should only accept floats? @huningxin - is this intentional?
Yes. The intention is for them to accept the signed types.
In Chromium CL review, we'd like to prototype …
Added missing …
`reduceL1`, `reduceProduct`, `reduceSum` and `reduceSumSquare` already support 32-bit integers. 64-bit integers should also be supported. Fix webmachinelearning#283, webmachinelearning#694
The current spec doesn't specify the operand type constraints of an operation. However, some operations, e.g. softmax, should only support the float32 operand type according to the survey of frameworks and native ML APIs in the following table. The lack of an operand type constraints specification could lead to implementation issues, such as Chromium CL 3856752. Thanks @wacky6 for pointing this out.
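For example, without a stated constraint the behavior of the following is left undefined (a sketch only: MLGraphBuilder, input() and softmax() follow the general WebNN spec shape, though descriptor fields have varied across revisions, and the casts stand in for missing typings):

```ts
// Building softmax over an int32 operand: the current spec text neither
// rejects this nor defines its result, which is the gap this issue tracks.
const context = await (navigator as any).ml.createContext();
const builder = new (globalThis as any).MLGraphBuilder(context);
const x = builder.input("x", { dataType: "int32", dimensions: [2, 4] });
const y = builder.softmax(x); // allowed? rejected? implementation-defined today
```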