natke committed Apr 2, 2024
1 parent 7024bb6 commit cc137b9
Showing 1 changed file with 96 additions and 101 deletions.
197 changes: 96 additions & 101 deletions docs/genai/howto/build-from-source.md

`cmake`

## Clone the onnxruntime-genai repo


```bash
git clone https://github.com/microsoft/onnxruntime-genai
cd onnxruntime-genai
```
## Install ONNX Runtime

By default, the onnxruntime-genai build expects to find the ONNX Runtime headers and binaries in a folder called `ort` in the root directory of onnxruntime-genai. You can put the ONNX Runtime files in a different location and specify that location to the onnxruntime-genai build. These instructions use `ORT_HOME` to refer to that location.

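
As a sketch of that layout (paths are illustrative, and the `--ort_home` flag is an assumption here, so check `python build.py --help` for the exact option name in your checkout):

```shell
# Illustrative only: stage ONNX Runtime files under a custom ORT_HOME
ORT_HOME="$HOME/ort"
mkdir -p "$ORT_HOME/include" "$ORT_HOME/lib"
# ...copy the ONNX Runtime headers into include/ and libraries into lib/...
# then point the onnxruntime-genai build at it:
echo python build.py --ort_home "$ORT_HOME"
```
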
### Option 1: Install from release


These instructions are for the Linux GPU build of ONNX Runtime. Replace `linux-x64-gpu` with your target of choice.

```bash
cd <ORT_HOME>
curl -L https://github.com/microsoft/onnxruntime/releases/download/v1.17.1/onnxruntime-linux-x64-gpu-1.17.1.tgz -o onnxruntime-linux-x64-gpu-1.17.1.tgz
tar xvzf onnxruntime-linux-x64-gpu-1.17.1.tgz
mv onnxruntime-linux-x64-gpu-1.17.1/include .
mv onnxruntime-linux-x64-gpu-1.17.1/lib .
```
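
The release asset names follow a consistent `onnxruntime-<target>-<version>` pattern, so other targets can be fetched by swapping two variables (the target names here are examples; confirm the exact asset name on the releases page):

```shell
# Compose a release download URL from a target and version (illustrative values)
ORT_VERSION=1.17.1
ORT_TARGET=linux-x64-gpu   # e.g. linux-x64, win-x64, osx-arm64
PKG="onnxruntime-${ORT_TARGET}-${ORT_VERSION}"
URL="https://github.com/microsoft/onnxruntime/releases/download/v${ORT_VERSION}/${PKG}.tgz"
echo "$URL"
```
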
### Option 2: Install from nightly

Download the nightly NuGet package `Microsoft.ML.OnnxRuntime` from: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly.

Extract the NuGet package.

```bash
tar xvf Microsoft.ML.OnnxRuntime.1.18.0-dev-20240322-0323-ca825cb6e6.nupkg
```

Copy the include and lib files into `<ORT_HOME>`.

On Windows

Example is given for `win-x64`. Change this to your architecture if different.

```cmd
copy build\native\include\onnxruntime_c_api.h <ORT_HOME>\include
copy runtimes\win-x64\native\*.dll <ORT_HOME>\lib
```

On Linux

```bash
cp build/native/include/onnxruntime_c_api.h <ORT_HOME>/include
cp runtimes/linux-x64/native/libonnxruntime*.so* <ORT_HOME>/lib
```
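
Whichever install route you use, a quick sanity check that `<ORT_HOME>` ended up with the expected layout can save a confusing build failure later (the `$HOME/ort` path is illustrative):

```shell
# Verify the header and at least one onnxruntime library are in place
ORT_HOME="$HOME/ort"   # adjust to your location
for f in "$ORT_HOME/include/onnxruntime_c_api.h" "$ORT_HOME"/lib/*onnxruntime*; do
  [ -e "$f" ] && echo "found: $f" || echo "MISSING: $f"
done
```
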
### Option 3: Build from source

```
git clone https://github.com/microsoft/onnxruntime.git
cd onnxruntime
```

Create `include` and `lib` folders in the `<ORT_HOME>` directory:

```bash
mkdir <ORT_HOME>/include
mkdir <ORT_HOME>/lib
```

Build ONNX Runtime from source and copy the headers and libraries into `<ORT_HOME>`.

On Windows

```cmd
build.bat --build_shared_lib --skip_tests --parallel [--use_cuda]
copy include\onnxruntime\core\session\onnxruntime_c_api.h <ORT_HOME>\include
copy build\Windows\Debug\Debug\*.dll <ORT_HOME>\lib
```

On Linux

```bash
./build.sh --build_shared_lib --skip_tests --parallel [--use_cuda]
cp include/onnxruntime/core/session/onnxruntime_c_api.h <ORT_HOME>/include
cp build/Linux/RelWithDebInfo/libonnxruntime*.so* <ORT_HOME>/lib
```
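
The copy step reads from `build/Linux/RelWithDebInfo` because onnxruntime's build script places artifacts under `build/<platform>/<config>`; if you pass a different `--config`, adjust the path to match (a sketch of the pattern, noting that the default config can vary by platform and version, and that on Windows the Visual Studio generator adds an extra per-config subfolder, hence `build\Windows\Debug\Debug` above):

```shell
# Compose the artifact directory that the copy step reads from
PLATFORM=Linux
CONFIG=RelWithDebInfo   # match the --config you built with
ART_DIR="build/${PLATFORM}/${CONFIG}"
echo "$ART_DIR"   # → build/Linux/RelWithDebInfo
```
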

## Build onnxruntime-genai

### Build for CPU

```bash
cd ..
python build.py
```

### Build for CUDA

These instructions assume you already have CUDA installed.

```bash
cd ..
python build.py --cuda_home <path to cuda home>
```
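
If you are not sure where CUDA lives, the home directory can usually be derived from the `nvcc` on your PATH (a convenience sketch assuming a standard CUDA layout, not part of the official steps):

```shell
# Derive the CUDA home from nvcc's location (bin/nvcc → two levels up)
CUDA_HOME="$(dirname "$(dirname "$(command -v nvcc)")")"
echo "Using CUDA from: $CUDA_HOME"
# then: python build.py --cuda_home "$CUDA_HOME"
```
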

### Build for DirectML

Two extra files are required for the DirectML build of onnxruntime-genai: `dml_provider_factory.h` and `DirectML.dll`.

```cmd
cd <ORT_HOME>
curl -L https://github.com/microsoft/onnxruntime/releases/download/v1.17.1/Microsoft.ML.OnnxRuntime.DirectML.1.17.1.zip > Microsoft.ML.OnnxRuntime.DirectML.1.17.1.zip
mkdir Microsoft.ML.OnnxRuntime.DirectML.1.17.1
tar xvf Microsoft.ML.OnnxRuntime.DirectML.1.17.1.zip -C Microsoft.ML.OnnxRuntime.DirectML.1.17.1
copy Microsoft.ML.OnnxRuntime.DirectML.1.17.1\build\native\include\dml_provider_factory.h include
curl -L https://www.nuget.org/api/v2/package/Microsoft.AI.DirectML/1.13.1 > Microsoft.AI.DirectML.1.13.1.nupkg
mkdir Microsoft.AI.DirectML.1.13.1
tar xvf Microsoft.AI.DirectML.1.13.1.nupkg -C Microsoft.AI.DirectML.1.13.1
copy Microsoft.AI.DirectML.1.13.1\bin\x64-win\DirectML.dll lib
```

After the extra files have been copied into `<ORT_HOME>`, build onnxruntime-genai as follows:

```bash
python build.py --use_dml
```
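
Before running the DirectML build, it is worth confirming that the two prerequisite files actually landed where the build expects them (shown in a POSIX-style shell such as Git Bash; run from `<ORT_HOME>`):

```shell
# Confirm the DirectML prerequisites are in place under the current directory
for f in include/dml_provider_factory.h lib/DirectML.dll; do
  [ -f "$f" ] && echo "ok: $f" || echo "missing: $f"
done
```
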


## Install the library into your application

### Install Python wheel

```bash
cd build/wheel
pip install *.whl
```

### Install NuGet package

### Install C/C++ header file and library
