From e551c0774f9a9d99bd6692b5205cee1501fe5f51 Mon Sep 17 00:00:00 2001
From: natke
Date: Tue, 23 Jul 2024 17:32:01 -0700
Subject: [PATCH 1/2] Update build from source instructions

---
 docs/genai/howto/build-from-source.md | 157 +++++++++-----------------
 1 file changed, 51 insertions(+), 106 deletions(-)

diff --git a/docs/genai/howto/build-from-source.md b/docs/genai/howto/build-from-source.md
index 012d8ea2fd048..7ea423fbc71ac 100644
--- a/docs/genai/howto/build-from-source.md
+++ b/docs/genai/howto/build-from-source.md
@@ -25,11 +25,10 @@
 git clone https://github.com/microsoft/onnxruntime-genai
 cd onnxruntime-genai
 ```

-## Install ONNX Runtime
+## Download ONNX Runtime binaries

 By default, the onnxruntime-genai build expects to find the ONNX Runtime include and binaries in a folder called `ort` in the root directory of onnxruntime-genai. You can put the ONNX Runtime files in a different location and specify this location to the onnxruntime-genai build via the `--ort_home` command line argument.

-### Option 1: Install from release

 These instructions assume you are in the `onnxruntime-genai` folder.

@@ -38,9 +37,9 @@
 These instructions use `win-x64`. Replace this if you are using a different architecture.

 ```bash
-curl -L https://github.com/microsoft/onnxruntime/releases/download/v1.18.0/onnxruntime-win-x64-1.18.0.zip -o onnxruntime-win-x64-1.18.0.zip
-tar xvf onnxruntime-win-x64-1.18.0.zip
-move onnxruntime-win-x64-1.18.0 ort
+curl -L https://github.com/microsoft/onnxruntime/releases/download/v1.18.1/onnxruntime-win-x64-1.18.1.zip -o onnxruntime-win-x64-1.18.1.zip
+tar xvf onnxruntime-win-x64-1.18.1.zip
+move onnxruntime-win-x64-1.18.1 ort
 ```

 #### Linux and Mac

@@ -48,151 +47,87 @@ move onnxruntime-win-x64-1.18.0 ort
 These instructions use `linux-x64-gpu`. Replace this if you are using a different architecture.
 ```bash
-curl -L https://github.com/microsoft/onnxruntime/releases/download/v1.18.0/onnxruntime-linux-x64-gpu-1.18.0.tgz -o onnxruntime-linux-x64-gpu-1.18.0.tgz
-tar xvzf onnxruntime-linux-x64-gpu-1.18.0.tgz
-mv onnxruntime-linux-x64-gpu-1.18.0 ort
+curl -L https://github.com/microsoft/onnxruntime/releases/download/v1.18.1/onnxruntime-linux-x64-gpu-1.18.1.tgz -o onnxruntime-linux-x64-gpu-1.18.1.tgz
+tar xvzf onnxruntime-linux-x64-gpu-1.18.1.tgz
+mv onnxruntime-linux-x64-gpu-1.18.1 ort
 ```

-### Option 2: Install from nightly
+#### Android
+
+If you do not already have an `ort` folder, create one.

-Download the nightly nuget package `Microsoft.ML.OnnxRuntime` from: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly.
-
-Extract the nuget package.
-
 ```bash
-tar xvf Microsoft.ML.OnnxRuntime.1.18.0-dev-20240322-0323-ca825cb6e6.nupkg
-```
-
-Copy the include and lib files into `ort`.
-
-On Windows
-
-Example is given for `win-x64`. Change this to your architecture if different.
-
-```cmd
-copy build\native\include\onnxruntime_c_api.h ort\include
-copy runtimes\win-x64\native\*.dll ort\lib
+mkdir ort
 ```

-On Linux
-
-Example is given for `linux-x64`. Change this to your architecture if different.
-
-```cmd
-cp build/native/include/onnxruntime_c_api.h ort/include
-cp build/linux-x64/native/libonnxruntime*.so* ort/lib
-```
-
-### Option 3: Build from source
-
-#### Clone the onnxruntime repo
-
 ```bash
+curl -L https://repo1.maven.org/maven2/com/microsoft/onnxruntime/onnxruntime-android/1.18.0/onnxruntime-android-1.18.0.aar -o ort/onnxruntime-android-1.18.0.aar
+cd ort
+tar xvf onnxruntime-android-1.18.0.aar
 cd ..
-git clone https://github.com/microsoft/onnxruntime.git
-cd onnxruntime
 ```

-#### Build ONNX Runtime for CPU on Windows
-
-```bash
-build.bat --build_shared_lib --skip_tests --parallel --config Release
-copy include\onnxruntime\core\session\onnxruntime_c_api.h ..\onnxruntime-genai\ort\include
-copy build\Windows\Release\Release\*.dll ..\onnxruntime-genai\ort\lib
-copy build\Windows\Release\Release\onnxruntime.lib ..\onnxruntime-genai\ort\lib
-```
+## Build the generate() API

-#### Build ONNX Runtime for DirectML on Windows
+This step assumes that you are in the root of the onnxruntime-genai repo, and you have followed the previous steps to copy the onnxruntime headers and binaries into the folder specified by `--ort_home`, which defaults to `onnxruntime-genai/ort`.

-```bash
-build.bat --build_shared_lib --skip_tests --parallel --use_dml --config Release
-copy include\onnxruntime\core\session\onnxruntime_c_api.h ..\onnxruntime-genai\ort\include
-copy include\onnxruntime\core\providers\dml\dml_provider_factory.h ..\onnxruntime-genai\ort\include
-copy build\Windows\Release\Release\*.dll ..\onnxruntime-genai\ort\lib
-copy build\Windows\Release\Release\onnxruntime.lib ..\onnxruntime-genai\ort\lib
-```
+All of the build commands below have a `--config` argument, which takes the following options:
+- `Release` builds release binaries
+- `Debug` builds binaries with debug symbols
+- `RelWithDebInfo` builds release binaries with debug info

+### Build Python API

-#### Build ONNX Runtime for CUDA on Windows
+#### Windows CPU build

 ```bash
-build.bat --build_shared_lib --skip_tests --parallel --use_cuda --config Release
-copy include\onnxruntime\core\session\onnxruntime_c_api.h ..\onnxruntime-genai\ort\include
-copy include\onnxruntime\core\providers\cuda\*.h ..\onnxruntime-genai\ort\include
-copy build\Windows\Release\Release\*.dll ..\onnxruntime-genai\ort\lib
-copy build\Windows\Release\Release\onnxruntime.lib ..\onnxruntime-genai\ort\lib
+python build.py --config Release
 ```

-#### Build ONNX Runtime on Linux
+#### Windows DirectML build

 ```bash
-./build.sh --build_shared_lib --skip_tests --parallel [--use_cuda] --config Release
-cp include/onnxruntime/core/session/onnxruntime_c_api.h ../onnxruntime-genai/ort/include
-cp build/Linux/Release/libonnxruntime*.so* ../onnxruntime-genai/ort/lib
+python build.py --use_dml --config Release
 ```

-You may need to provide extra command line options for building with CUDA on Linux. An example full command is as follows.
+#### Linux build

 ```bash
-./build.sh --parallel --build_shared_lib --use_cuda --cuda_version 11.8 --cuda_home /usr/local/cuda-11.8 --cudnn_home /usr/lib/x86_64-linux-gnu/ --config Release --build_wheel --skip_tests --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES="80" --cmake_extra_defines CMAKE_CUDA_COMPILER=/usr/local/cuda-11.8/bin/nvcc
+python build.py --config Release
 ```

-Replace the values given above for different versions and locations of CUDA.
-
-#### Build ONNX Runtime on Mac
+#### Linux CUDA build

 ```bash
-./build.sh --build_shared_lib --skip_tests --parallel --config Release
-cp include/onnxruntime/core/session/onnxruntime_c_api.h ../onnxruntime-genai/ort/include
-cp build/MacOS/Release/libonnxruntime*.dylib* ../onnxruntime-genai/ort/lib
+python build.py --use_cuda --config Release
 ```

-## Build the generate() API
-
-This step assumes that you are in the root of the onnxruntime-genai repo, and you have followed the previos steps to copy the onnxruntime headers and binaries into the folder specified by , which defaults to `onnxruntime-genai/ort`.
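+You may need to pass extra options when building with CUDA, for example to select a specific CUDA installation. This is a sketch: the `--cuda_home` option is assumed to mirror the flag the old `build.sh` instructions used, and the CUDA version and path below are placeholders to substitute for your system:
+
+```bash
+# Placeholder version and path; point --cuda_home at your own CUDA install.
+python build.py --use_cuda --cuda_home /usr/local/cuda-11.8 --config Release
+```
+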
+#### Mac build

 ```bash
-cd ../onnxruntime-genai
+python build.py --config Release
 ```

-### Build Python API
-
-#### Build for Windows CPU
-
-```bash
-python build.py
-```
-
-#### Build for Windows DirectML
+### Build Java API

 ```bash
-python build.py --use_dml
+python build.py --build_java --config Release
 ```

-#### Build on Linux
-
-```bash
-python build.py
-```
+### Build for Android

-#### Build on Linux with CUDA
+If building on Windows, install the `ninja` packages.

 ```bash
-python build.py --use_cuda
+pip install ninja
 ```

-#### Build on Mac
+Run the build script.

 ```bash
-python build.py
+python build.py --build_java --android --android_home <android home> --android_ndk_path <android ndk path> --android_abi <android abi> --config Release
 ```

-### Build Java API
-
-```bash
-python build.py --build_java --config Release
-```
-Change config to Debug for debug builds.

 ## Install the library into your application

 ### Install Python wheel

 ```bash
 cd build/wheel
 pip install *.whl
 ```

-### Install .jar
+
+### Install JAR

 Copy `build/Windows/Release/src/java/build/libs/*.jar` into your application.

-### Install Nuget package
+### Install AAR
+
+Copy `build/Android/Release/src/java/build/android/outputs/aar/onnxruntime-genai-release.aar` into your application.
+
+### Install NuGet
+
+_Coming soon_

 ### Install C/C++ header file and library

-_Coming soon_
+_Coming soon_
+
+
+

From e167f1ab429a16f626b010799df7778cdfc55103 Mon Sep 17 00:00:00 2001
From: natke
Date: Fri, 16 Aug 2024 16:57:02 -0700
Subject: [PATCH 2/2] Add C/C++ install section

---
 docs/genai/howto/build-from-source.md | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/docs/genai/howto/build-from-source.md b/docs/genai/howto/build-from-source.md
index 7ea423fbc71ac..bbbb5d1c89b5e 100644
--- a/docs/genai/howto/build-from-source.md
+++ b/docs/genai/howto/build-from-source.md
@@ -16,7 +16,7 @@ nav_order: 2

 ## Pre-requisites

 - `cmake`
-- `.Net v6` (if building C#)
+- `.NET 6` (if building C#)

 ## Clone the onnxruntime-genai repo

@@ -116,7 +116,7 @@ python build.py --build_java --config Release

 ### Build for Android

-If building on Windows, install the `ninja` packages.
+If building on Windows, install `ninja`.

 ```bash
 pip install ninja
@@ -128,7 +128,6 @@ Run the build script.

 ```bash
 python build.py --build_java --android --android_home <android home> --android_ndk_path <android ndk path> --android_abi <android abi> --config Release
 ```

-
 ## Install the library into your application

 ### Install Python wheel

@@ -138,6 +137,9 @@
 cd build/wheel
 pip install *.whl
 ```

+### Install NuGet
+
+_Coming soon_

 ### Install JAR

@@ -147,13 +149,16 @@
 Copy `build/Windows/Release/src/java/build/libs/*.jar` into your application.

 ### Install AAR

 Copy `build/Android/Release/src/java/build/android/outputs/aar/onnxruntime-genai-release.aar` into your application.

-### Install NuGet
-
-_Coming soon_

 ### Install C/C++ header file and library

-_Coming soon_
+#### Windows
+
+Use the header in `src\ort_genai.h` and the libraries in `build\Windows\Release`.
+
+#### Linux
+
+Use the header in `src/ort_genai.h` and the libraries in `build/Linux/Release`.
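+
+As a sketch of how the header might be used from C++ (the `Oga*` class and method names below are assumptions based on the wrapper declared in `src/ort_genai.h` and may differ between releases; check the header in your checkout):
+
+```cpp
+// Sketch only: verify these names against src/ort_genai.h in your checkout.
+#include "ort_genai.h"
+
+int main() {
+  // "path/to/model" is a placeholder for a folder containing a generate()-ready model
+  auto model = OgaModel::Create("path/to/model");
+  auto tokenizer = OgaTokenizer::Create(*model);
+  auto params = OgaGeneratorParams::Create(*model);
+  params->SetSearchOption("max_length", 256);
+  // ...encode a prompt with the tokenizer, create a generator, and decode the output...
+  return 0;
+}
+```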