Transformers getting started

Basics: Transformer Neural Networks: A Step-by-Step Breakdown

Install all Python dependencies (tested on Python 3.11)

  1. Clone the repo and cd into the folder

  2. Create a Python venv: python3.11 -m venv .transformersEnv

  3. Source the venv: source .transformersEnv/bin/activate

  4. Install the dependencies: pip install -r requirements.txt

  5. Run a convert script: python bert-convert.py or python bert_pretrained.py
     bert-convert.py creates an onnx folder containing the BERT ONNX model bert-base-cased.onnx (float32 precision)
     bert_pretrained.py creates an onnx_models folder containing the BERT ONNX model bert-base-cased-squad.onnx
     Use Netron to verify and visualise the ONNX model

ONNX to MLIR (LLVM dialect)

Use onnx-mlir to convert ONNX models to the LLVM dialect.

  1. Build onnx-mlir locally, or use the Docker image for quick conversions (Instructions)
  2. Follow the Environment Variables Setup
  3. To generate the LLVM dialect: onnx-mlir -O3 --EmitLLVMIR <onnx model file>
  4. (Optional) Create an executable binary (example)
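Step 3 above looks like this on the command line. The model filename is an example, and the guard on PATH is just so the snippet degrades gracefully when onnx-mlir is not installed:

```shell
# Convert an ONNX model to the LLVM dialect with onnx-mlir.
# The model filename is a placeholder; substitute your own.
MODEL=bert-base-cased.onnx
if command -v onnx-mlir >/dev/null 2>&1; then
    # Writes a .onnx.mlir file next to the model containing the LLVM dialect.
    onnx-mlir -O3 --EmitLLVMIR "$MODEL"
else
    echo "onnx-mlir not found on PATH; build it locally or use the Docker image"
fi
```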

LLVM Dialect to LLVM IR

  1. Build MLIR locally (a debug build is preferred): MLIR getting started
  2. (Optional) Add the MLIR project's bin folder to your PATH
  3. Use mlir-translate <input-file-LLVM-dialect> --mlir-to-llvmir -o <output.ll> to get the LLVM IR.
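The translation step can be sketched as follows; input.mlir and output.ll are placeholder names, and the PATH guard only keeps the snippet from failing when mlir-translate is not built yet:

```shell
# Lower LLVM-dialect MLIR to textual LLVM IR with mlir-translate.
if command -v mlir-translate >/dev/null 2>&1; then
    mlir-translate input.mlir --mlir-to-llvmir -o output.ll
else
    echo "mlir-translate not found on PATH; build MLIR first"
fi
```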
Other useful docs

  1. Transformer Model Optimization (ONNX Runtime)

  2. PyTorch to ONNX export