
Convert MIGX IR to Linalg #3640

Open
causten opened this issue Nov 19, 2024 · 1 comment

causten (Collaborator) commented Nov 19, 2024

Now that the MIGX IR can be extracted into files (PR #3550), the next step is to convert that IR to Linalg. The rocMLIR team already has a tool that should be able to do this. Use the tool to generate Linalg for each model's winning MLIR config.
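
A rough way to enumerate the files to convert (an assumption about the layout produced by #3550, matching the example path in the reply below):

find . -type f -path '*/mlir/*.mlir' | sort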

richagadgil (Contributor) commented Nov 23, 2024

MLIR IR to Linalg Conversion

This guide provides step-by-step instructions for converting the extracted MLIR IR to Linalg using rocMLIR and LLVM.

# Step 1: Clone the rocMLIR Repository
git clone https://github.com/ROCm/rocMLIR.git
cd rocMLIR

# Step 2: Install `ninja-build`
apt install ninja-build

# Step 3: Build rocMLIR
mkdir build && cd build
cmake -G Ninja .. \
  -DCMAKE_BUILD_TYPE=RelWithDebInfo \
  -DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang \
  -DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++
ninja check-rocmlir
cd ..   # back to the rocMLIR repo root; later steps use paths relative to it
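
# Optional shortcut (an assumption, not part of the original steps): if only the
# conversion tools are needed and the full check-rocmlir test run is too slow,
# building the two binaries directly should be enough (target names are assumed
# to match the installed binaries).
ninja -C build rocmlir-driver rocmlir-opt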

# Step 4: Install the LLVM/Clang Toolchain (via apt.llvm.org)
LLVM_VERSION=<version>   # set to the desired LLVM major version (left as a build argument in the original snippet)
echo "install llvm ${LLVM_VERSION}"
wget --no-verbose https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
./llvm.sh ${LLVM_VERSION}
apt-get update
apt-get install -y clang-${LLVM_VERSION} clang-format-${LLVM_VERSION} clang-tidy-${LLVM_VERSION} lld-${LLVM_VERSION}
ln -s /usr/bin/clang-${LLVM_VERSION} /usr/bin/clang
ln -s /usr/bin/clang++-${LLVM_VERSION} /usr/bin/clang++
ln -s /usr/bin/clang-tidy-${LLVM_VERSION} /usr/bin/clang-tidy
ln -s /usr/bin/clang-tidy-diff-${LLVM_VERSION}.py /usr/bin/clang-tidy-diff
ln -s /usr/bin/clang-format-${LLVM_VERSION} /usr/bin/clang-format
ln -s /usr/bin/git-clang-format-${LLVM_VERSION} /usr/bin/git-clang-format
ln -s /usr/bin/clang-format-diff-${LLVM_VERSION} /usr/bin/clang-format-diff
ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/lld
ln -s /usr/bin/lldb-${LLVM_VERSION} /usr/bin/lldb
ln -s /usr/bin/ld.lld-${LLVM_VERSION} /usr/bin/ld.lld
ln -s /usr/bin/llvm-profdata-${LLVM_VERSION} /usr/bin/llvm-profdata
ln -s /usr/bin/llvm-cov-${LLVM_VERSION} /usr/bin/llvm-cov
ln -s /usr/bin/llvm-symbolizer-${LLVM_VERSION} /usr/bin/llvm-symbolizer
ln -s /usr/bin/llvm-cxxfilt-${LLVM_VERSION} /usr/bin/llvm-cxxfilt
clang --version

# Step 5: Modify `TosaToLinalgPass.cpp`
# Navigate to `external/llvm-project/mlir/lib/Conversion/TosaToLinalg/TosaToLinalgPass.cpp`
# and delete the following line:
# validationOptions.profile = tosa::TosaProfileEnum::BaseInference;
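
# Equivalent one-liner (a sketch, not from the original instructions): remove
# the line in place with sed instead of editing the file by hand.
sed -i '/TosaProfileEnum::BaseInference/d' \
  external/llvm-project/mlir/lib/Conversion/TosaToLinalg/TosaToLinalgPass.cpp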

# Step 6: Build `mlir-opt` from Source
# Build it from the LLVM/MLIR sources already vendored in the rocMLIR checkout
# (external/llvm-project, modified in Step 5) so that the resulting binary ends
# up at ./external/llvm-project/build/bin/mlir-opt, where Step 7 expects it.
mkdir external/llvm-project/build
cd external/llvm-project/build

cmake -G Ninja ../llvm \
  -DLLVM_ENABLE_PROJECTS=mlir \
  -DLLVM_BUILD_EXAMPLES=ON \
  -DLLVM_TARGETS_TO_BUILD="Native;NVPTX;AMDGPU" \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_ASSERTIONS=ON

cmake --build . --target check-mlir
cd ../../..   # back to the rocMLIR repo root
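
# Optional (an assumption, not part of the original steps): check-mlir builds
# the tools and then runs the MLIR test suite; if only the tool is needed,
# building the mlir-opt target alone is faster.
cmake --build external/llvm-project/build --target mlir-opt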

# Step 7: Run the Conversion Pipeline on an Extracted .mlir File
# (run from the rocMLIR repository root; the input path below is one example
# of an extracted config)
./build/bin/rocmlir-driver \
  --kernel-pipeline=migraphx \
  bert_base_cased_1.onnx/2.11.0.2f9757/fill1_input_ids_input-dim_\@input_ids_1_32/fp32/mlir/mlir_dot_add_1x32x768_1x768x2304.mlir | \
  ./build/bin/rocmlir-opt \
  --rocmlir-custom-tosa-to-linalg | \
  ./external/llvm-project/build/bin/mlir-opt \
  --tosa-to-linalg-pipeline \
  --tosa-to-tensor \
  --tosa-to-scf \
  --tosa-to-arith \
  --linalg-fuse-elementwise-ops \
  --linalg-fold-unit-extent-dims \
  --canonicalize \
  --convert-tensor-to-linalg \
  --cse
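
# Batch version (a sketch, not from the original comment): run the same
# pipeline over every extracted .mlir config and write a *.linalg.mlir file
# next to each input. The directory layout and output naming are assumptions.
find . -type f -path '*/mlir/*.mlir' | while read -r f; do
  ./build/bin/rocmlir-driver --kernel-pipeline=migraphx "$f" | \
    ./build/bin/rocmlir-opt --rocmlir-custom-tosa-to-linalg | \
    ./external/llvm-project/build/bin/mlir-opt \
      --tosa-to-linalg-pipeline --tosa-to-tensor --tosa-to-scf --tosa-to-arith \
      --linalg-fuse-elementwise-ops --linalg-fold-unit-extent-dims \
      --canonicalize --convert-tensor-to-linalg --cse \
      > "${f%.mlir}.linalg.mlir"
done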
