Experimental: convert onnx to linalg #1891
base: main
Conversation
Signed-off-by: chentong319 <[email protected]>
Overall, change is looking good! I'm excited to see this come together!
src/Compiler/CompilerPasses.cpp
Outdated
// Linalg bufferization can be before or after LowerToKrnlPass.
// However, since bufferization::AllocTensorOp is in LowerToKrnl temporarily,
// these passes have to be called before LowerToKrnlPass.
pm.addNestedPass<func::FuncOp>(createLinalgBufferizePass());
Using OneShotBufferization is the recommended way to do bufferization now, rather than calling individual bufferize passes.
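As a rough sketch of how One-Shot Bufferize is typically wired into a pass pipeline (this is a fragment, not a drop-in change for this PR; `pm` is assumed to be the existing PassManager, and the option name below is from upstream MLIR):

```cpp
#include "mlir/Dialect/Bufferization/Transforms/Passes.h"

// Sketch: configure One-Shot Bufferize to also bufferize function
// boundaries. Without this option, func arguments and results stay as
// tensors, which is a common cause of "op was not bufferized" errors.
mlir::bufferization::OneShotBufferizationOptions options;
options.bufferizeFunctionBoundaries = true;
pm.addPass(mlir::bufferization::createOneShotBufferizePass(options));
```

The key point is that One-Shot Bufferize analyzes and bufferizes the whole module in one pass, so the per-dialect bufferize passes become unnecessary.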
I just tried the OneShotBufferization and got the following error:
loc("run-matmul/matmul-dyn.mlir":4:8): error: op was not bufferized
// -----// IR Dump After OneShotBufferize Failed (one-shot-bufferize) //----- //
Could you tell me how to use OneShotBufferization? I added the pass before lowering to Krnl with this line of code:
pm.addPass(mlir::bufferization::createOneShotBufferizePass());
pm.addPass(onnx_mlir::createLowerToKrnlPass(optLevel, enableParallel));
// An additional pass of canonicalization is helpful because lowering
// from ONNX dialect to Standard dialect exposes additional canonicalization
// opportunities.

// For Linalg and Krnl mixed IR:
// Canonicalization pass will clean up bufferization::to_tensor and to_memref
We needed to add a new pass to eliminate type conversions, and looking at your IR output it seems you still have bufferization::to_memref showing up.
Strictly speaking, I don't know whether such a pass should be necessary, and in your case it may be because you aren't doing one-shot bufferization; but if that doesn't clean things up, you might want to look at adding a pass for cleaning up type conversions.
In this particular test case, the to_tensor/to_memref pairs are cleaned up by the canonicalization pass. Since ONNX has data types other than tensor, such as the Sequence and Map types, I think we may have to convert to_tensor/to_memref to UnrealizedConversionCastOp so that they can be canceled out by canonicalization. The cast op can handle any type.
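To sketch the two cleanups being discussed (the types and SSA names here are illustrative, not taken from the actual IR): a to_memref fed directly by a matching to_tensor folds away under canonicalization, and builtin.unrealized_conversion_cast pairs cancel the same way for arbitrary types:

```mlir
// A to_memref fed by a to_tensor of the same memref type folds away
// under canonicalization:
%t  = bufferization.to_tensor %m : memref<4xf32>
%m2 = bufferization.to_memref %t : memref<4xf32>
// ... users of %m2 are rewritten to use %m directly.

// unrealized_conversion_cast accepts any pair of types, so the same
// cancellation works for non-tensor types such as ONNX Sequence/Map:
%a = builtin.unrealized_conversion_cast %x : i64 to index
%b = builtin.unrealized_conversion_cast %a : index to i64
// ... users of %b are rewritten to use %x directly.
```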
SmallVector<Value, 1> outputs;
outputs.emplace_back(outV);
auto newOp =
    rewriter.create<linalg::MatmulOp>(loc, outputType, operands, outputs);
You may be aware, but linalg::MatmulOp only works for 2-D MatMul, and you will need to call different MatMul ops depending on the tensor shape.
You are right. I need to check the shape of the tensor for linalg.matmul. Other cases can either be lowered to Krnl or to linalg.generic.
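For reference, a sketch of the shape distinction (with illustrative types): linalg.matmul covers only the 2-D case, while the 3-D batched case would need linalg.batch_matmul or a linalg.generic:

```mlir
// 2-D case: linalg.matmul applies directly.
%r0 = linalg.matmul
        ins(%A, %B : tensor<4x8xf32>, tensor<8x16xf32>)
        outs(%C : tensor<4x16xf32>) -> tensor<4x16xf32>

// 3-D (batched) case: linalg.batch_matmul instead.
%r1 = linalg.batch_matmul
        ins(%A3, %B3 : tensor<2x4x8xf32>, tensor<2x8x16xf32>)
        outs(%C3 : tensor<2x4x16xf32>) -> tensor<2x4x16xf32>
```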
This PR is intended to experiment with how to convert ONNX ops to Linalg ops, mixed with Krnl ops.
Only onnx.MatMulOp is selected to be converted to linalg.MatmulOp. Currently, the conversion has to be enabled with the command-line flag
enableLinalg
. Since Linalg uses the tensor and bufferization dialects, the change also includes some passes to lower these dialects, as well as the Linalg dialect.
The status of this PR: the lowering of bufferization.alloc_tensor is still missing.
As an experiment, I added a lowering of bufferization.alloc_tensor to memref.alloc. My simple test case with mixed Linalg and Krnl lowering compiles now.
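As a sketch of what that experimental lowering does (the types here are illustrative, not from the actual test case), a tensor allocation left behind by bufferization is rewritten into a plain buffer allocation:

```mlir
// Before the experimental lowering:
%t = bufferization.alloc_tensor() : tensor<4x16xf32>

// After: a plain memref allocation.
%m = memref.alloc() : memref<4x16xf32>
```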