
Allowing ADO packing pipelines to use the specific ORT commit #340

Merged
merged 35 commits on May 4, 2024

Changes from all commits (35 commits)
2a0a696
Creating the install
jchen351 Apr 28, 2024
ea52206
ort_genai.h
jchen351 Apr 28, 2024
95f9067
revert ort_genai.h
jchen351 Apr 28, 2024
d812091
revert ort_genai.h
jchen351 Apr 28, 2024
050f01f
Merge branch 'refs/heads/main' into Cjian/ado-zip
jchen351 Apr 28, 2024
55c0b69
Merge branch 'refs/heads/main' into Cjian/ado-zip
jchen351 Apr 29, 2024
64ce581
update build
jchen351 Apr 29, 2024
75d0179
update build
jchen351 Apr 29, 2024
a411b14
update build
jchen351 Apr 29, 2024
07ae10a
update build
jchen351 Apr 29, 2024
26ee50b
arch: ${{ parameters.arch }}
jchen351 Apr 29, 2024
8c6dabf
Nuget
jchen351 Apr 30, 2024
a455b80
${{
jchen351 Apr 30, 2024
1ce3d63
archiveType
jchen351 Apr 30, 2024
0a831c3
Merge branch 'refs/heads/main' into Cjian/ado-zip
jchen351 Apr 30, 2024
7d36871
try zip instead of tgz
jchen351 May 2, 2024
cf6ab4b
try zip instead of tgz
jchen351 May 2, 2024
1fb0d6c
try zip instead of tgz
jchen351 May 2, 2024
65528e1
Gcc
jchen351 May 2, 2024
653f189
Merge branch 'refs/heads/main' into Cjian/ado-zip
jchen351 May 2, 2024
e8355c2
codeql
jchen351 May 2, 2024
27e706f
Merge branch 'refs/heads/main' into Cjian/codeql
jchen351 May 3, 2024
72ab9b8
Merge branch 'refs/heads/main' into Cjian/ado-zip
jchen351 May 3, 2024
fd677d0
latest
jchen351 May 3, 2024
3b743cf
Merge branch 'refs/heads/Cjian/codeql' into Cjian/ado-zip
jchen351 May 3, 2024
d0bc45e
# TODO: Find out why do we need to to have libonnxruntime.so.$ort_ver…
jchen351 May 3, 2024
38156a6
# TODO: Find out why do we need to to have libonnxruntime.so.$ort_ver…
jchen351 May 3, 2024
3823a06
default: '1.18.0-dev-20240426-1256-b842effa29'
jchen351 May 3, 2024
80473fd
ort_cuda_version: ${{ parameters.ort_cuda_version }}
jchen351 May 3, 2024
5185faf
- task: CopyFiles@2
jchen351 May 3, 2024
8f5ff33
Remove pipeline resource
jchen351 May 3, 2024
7f49386
remove download-ort-build.yml
jchen351 May 3, 2024
8ff8202
onnxruntime$(ep)
jchen351 May 3, 2024
3c7d4f2
remove download-ort-build.yml
jchen351 May 3, 2024
89a2bcf
Merge remote-tracking branch 'refs/remotes/origin/main' into Cjian/ad…
jchen351 May 3, 2024
2 changes: 1 addition & 1 deletion .github/workflows/linux-cpu-x64-build.yml
@@ -58,7 +58,7 @@ jobs:
continue-on-error: true

# TODO: Find out why do we need to to have libonnxruntime.so.$ort_version
- name: Extra OnnxRuntime library and header files
- name: Extract OnnxRuntime library and header files
run: |
mkdir -p ort/lib
mv ${{ env.ORT_PACKAGE_NAME }}/build/native/include ort/
2 changes: 1 addition & 1 deletion .github/workflows/linux-gpu-x64-build.yml
@@ -72,7 +72,7 @@ jobs:
continue-on-error: true

# TODO: Find out why do we need to to have libonnxruntime.so.$ort_version
- name: Extra OnnxRuntime library and header files
- name: Extract OnnxRuntime library and header files
run: |
mkdir -p ort/lib
mv ${{ env.ORT_PACKAGE_NAME }}/buildTransitive/native/include ort/
2 changes: 1 addition & 1 deletion .github/workflows/mac-cpu-arm64-build.yml
@@ -30,7 +30,7 @@ jobs:
run: |
nuget install ${{ env.ORT_PACKAGE_NAME }} -version ${{ env.ORT_NIGHTLY_VERSION }} -x

- name: Extra OnnxRuntime library and header files
- name: Extract OnnxRuntime library and header files
run: |
mkdir -p ort/lib
mv ${{ env.ORT_PACKAGE_NAME }}/build/native/include ort/
2 changes: 1 addition & 1 deletion .github/workflows/win-cpu-x64-build.yml
@@ -53,7 +53,7 @@ jobs:
- run: Get-ChildItem ${{ env.ORT_PACKAGE_NAME }} -Recurse
continue-on-error: true

- name: Extra OnnxRuntime library and header files
- name: Extract OnnxRuntime library and header files
run: |
mkdir ort/lib
move ${{ env.ORT_PACKAGE_NAME }}/build/native/include ort/
2 changes: 1 addition & 1 deletion .github/workflows/win-cuda-x64-build.yml
@@ -60,7 +60,7 @@ jobs:
- run: Get-ChildItem ${{ env.ORT_PACKAGE_NAME }} -Recurse
continue-on-error: true

- name: Extra OnnxRuntime library and header files
- name: Extract OnnxRuntime library and header files
run: |
mkdir ort/lib
move ${{ env.ORT_PACKAGE_NAME }}/buildTransitive/native/include ort/
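Across these workflows the renamed step does the same thing: `nuget install … -x` downloads the pinned ORT nightly package into a folder named without its version, and the step then stages its headers and native libraries into a local `ort/` tree. A minimal sketch of that staging logic for the Linux variant; the `runtimes/<rid>/native` library path is an assumption based on the standard ORT nuget layout and is not shown in this diff:

```shell
#!/usr/bin/env bash
# Sketch of the "Extract OnnxRuntime library and header files" step.
# stage_ort_package: move headers and native libraries from an extracted
# ONNX Runtime nuget package into the ort/ layout the GenAI build expects.
#   $1 = directory produced by `nuget install <pkg> -version <ver> -x`
#   $2 = runtime identifier, e.g. linux-x64 (assumed standard nuget layout)
set -euo pipefail

stage_ort_package() {
  local pkg_dir="$1" rid="$2"
  mkdir -p ort/lib
  mv "$pkg_dir"/build/native/include ort/
  mv "$pkg_dir"/runtimes/"$rid"/native/* ort/lib/
}

# In CI this would follow a download such as:
#   nuget install "$ORT_PACKAGE_NAME" -version "$ORT_NIGHTLY_VERSION" -x
```

The Windows workflows use `move` and the `buildTransitive` package subtree for the GPU package, but the shape of the step is the same.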
19 changes: 9 additions & 10 deletions .pipelines/nuget-publishing.yml
@@ -27,12 +27,17 @@ parameters:
- name: ort_version
displayName: 'OnnxRuntime version'
type: string
default: '1.17.3'
default: '1.18.0-dev-20240426-1256-b842effa29'

- name: ort_cuda_version
displayName: 'OnnxRuntime GPU version'
type: string
default: '1.18.0-dev-20240426-0614-b842effa29'

- name: ort_dml_version
displayName: 'OnnxRuntime DirectML version'
displayName: 'OnnxRuntime DML version'
type: string
default: '1.18.0-dev-20240423-0527-c07b8d545d'
default: '1.18.0-dev-20240426-0116-b842effa29'

- name: cuda_version
displayName: 'CUDA version'
@@ -42,11 +47,6 @@ parameters:
- '12.2'
default: '11.8'

- name: publish_to_ado_feed
displayName: 'Publish to Azure DevOps Feed'
type: boolean
default: false

resources:
repositories:
- repository: manylinux
@@ -65,7 +65,6 @@
enable_linux_cuda: ${{ parameters.enable_linux_cuda }}
enable_win_dml: ${{ parameters.enable_win_dml }}
ort_version: ${{ parameters.ort_version }}
ort_cuda_version: ${{ parameters.ort_cuda_version }}
ort_dml_version: ${{ parameters.ort_dml_version }}
cuda_version: ${{ parameters.cuda_version }}
publish_to_ado_feed: ${{ parameters.publish_to_ado_feed }}
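The three ORT version parameters let a single nightly ORT commit (`b842effa29` here) be pinned consistently across the CPU, CUDA, and DML packages. An illustrative fragment showing how a consuming pipeline might override them at queue time; the `extends` template path is a placeholder, not a file from this repo:

```yaml
# Illustrative only: pinning one ORT nightly commit across all packages.
# The template reference below is a placeholder, not from this repo.
extends:
  template: .pipelines/nuget-publishing.yml
  parameters:
    ort_version: '1.18.0-dev-20240426-1256-b842effa29'       # CPU build
    ort_cuda_version: '1.18.0-dev-20240426-0614-b842effa29'  # CUDA build
    ort_dml_version: '1.18.0-dev-20240426-0116-b842effa29'   # DirectML build
    cuda_version: '11.8'
```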

21 changes: 11 additions & 10 deletions .pipelines/pypl-publishing.yml
@@ -7,7 +7,7 @@ parameters:
- name: enable_win_cuda
displayName: 'Whether Windows CUDA package is built.'
type: boolean
default : true
default: true

- name: enable_win_dml
displayName: 'Whether Windows DirectML package is built.'
@@ -27,12 +27,17 @@ parameters:
- name: ort_version
displayName: 'OnnxRuntime version'
type: string
default: '1.17.3'
default: '1.18.0-dev-20240426-1256-b842effa29'

- name: ort_cuda_version
displayName: 'OnnxRuntime GPU version'
type: string
default: '1.18.0-dev-20240426-0614-b842effa29'

- name: ort_dml_version
displayName: 'OnnxRuntime DirectML version'
displayName: 'OnnxRuntime DML version'
type: string
default: '1.18.0-dev-20240423-0527-c07b8d545d'
default: '1.18.0-dev-20240426-0116-b842effa29'

- name: cuda_version
displayName: 'CUDA version'
@@ -42,11 +47,6 @@ parameters:
- '11.8'
- '12.2'

- name: publish_to_ado_feed
displayName: 'Whether to publish the packages to ADO feed.'
type: boolean
default: false

resources:
repositories:
- repository: manylinux
@@ -65,6 +65,7 @@ stages:
enable_win_cuda: ${{ parameters.enable_win_cuda }}
enable_win_dml: ${{ parameters.enable_win_dml }}
ort_version: ${{ parameters.ort_version }}
ort_cuda_version: ${{ parameters.ort_cuda_version }}
ort_dml_version: ${{ parameters.ort_dml_version }}
cuda_version: ${{ parameters.cuda_version }}
publish_to_ado_feed: ${{ parameters.publish_to_ado_feed }}

24 changes: 14 additions & 10 deletions .pipelines/stages/jobs/nuget-packaging-job.yml
@@ -13,8 +13,6 @@ parameters:
values:
- 'linux'
- 'win'
- name: publish_to_ado_feed
type: boolean

jobs:
- job: nuget_${{ parameters.os }}_${{ parameters.ep }}_${{ parameters.arch }}_packaging
@@ -44,18 +42,21 @@ jobs:
value: ${{ parameters.ort_version }}
- name: GDN_CODESIGN_TARGETDIRECTORY
value: '$(Build.ArtifactStagingDirectory)/nuget'
- name: os
value: ${{ parameters.os }}
- name: ort_filename
${{ if eq(parameters.ep, 'cpu') }}:
value: 'onnxruntime-${{ parameters.os }}-${{ parameters.arch }}-${{ parameters.ort_version }}'
value: 'Microsoft.ML.OnnxRuntime'
${{ elseif eq(parameters.ep, 'cuda') }}:
${{if eq(parameters.cuda_version, '11.8') }}:
value: 'onnxruntime-${{ parameters.os }}-${{ parameters.arch }}-gpu-${{ parameters.ort_version }}'
${{if eq(parameters.os, 'win') }}:
value: 'Microsoft.ML.OnnxRuntime.Gpu.Windows'
${{ else }}:
value: 'onnxruntime-${{ parameters.os }}-${{ parameters.arch }}-cuda12-${{ parameters.ort_version }}'
value: 'Microsoft.ML.OnnxRuntime.Gpu.Linux'
${{ elseif eq(parameters.ep, 'directml')}}:
value: 'Microsoft.ML.OnnxRuntime.DirectML.${{ parameters.ort_version }}'
value: 'Microsoft.ML.OnnxRuntime.DirectML'
${{ else }}:
value: 'onnxruntime-${{ parameters.os }}-${{ parameters.arch }}-${{ parameters.ep}}-${{ parameters.ort_version }}'
value: 'Microsoft.ML.OnnxRuntime'

- name: genai_nuget_ext
${{ if eq(parameters.ep, 'cpu') }}:
value: ''
@@ -84,15 +85,18 @@ jobs:
- template: steps/capi-linux-step.yml
parameters:
target: 'onnxruntime-genai'
arch: ${{ parameters.arch }}
ep: ${{ parameters.ep }}

# TODO: Add a step to build the linux nuget package

- ${{ if eq(parameters.os, 'win') }}:
- template: steps/capi-win-step.yml
parameters:
target: 'onnxruntime-genai'
arch: ${{ parameters.arch }}
ep: ${{ parameters.ep }}
- template: steps/nuget-win-step.yml
- ${{ if eq(parameters.publish_to_ado_feed, true)}}:
- template: steps/nuget-ado-feed-releasing-step.yml

- template: steps/compliant-and-cleanup-step.yml

24 changes: 13 additions & 11 deletions .pipelines/stages/jobs/py-packaging-job.yml
@@ -13,8 +13,7 @@ parameters:
values:
- 'linux'
- 'win'
- name: publish_to_ado_feed
type: boolean


jobs:
- job: python_${{ parameters.os }}_${{ parameters.ep }}_${{ parameters.arch }}_packaging
@@ -67,18 +66,21 @@ jobs:
value: ${{ parameters.ep }}
- name: ort_version
value: ${{ parameters.ort_version }}
- name: os
value: ${{ parameters.os }}

- name: ort_filename
${{ if eq(parameters.ep, 'cpu') }}:
value: 'onnxruntime-${{ parameters.os }}-${{ parameters.arch }}-${{ parameters.ort_version }}'
value: 'Microsoft.ML.OnnxRuntime'
${{ elseif eq(parameters.ep, 'cuda') }}:
${{if eq(parameters.cuda_version, '11.8') }}:
value: 'onnxruntime-${{ parameters.os }}-${{ parameters.arch }}-gpu-${{ parameters.ort_version }}'
${{if eq(parameters.os, 'win') }}:
value: 'Microsoft.ML.OnnxRuntime.Gpu.Windows'
${{ else }}:
value: 'onnxruntime-${{ parameters.os }}-${{ parameters.arch }}-cuda12-${{ parameters.ort_version }}'
value: 'Microsoft.ML.OnnxRuntime.Gpu.Linux'
${{ elseif eq(parameters.ep, 'directml')}}:
value: 'Microsoft.ML.OnnxRuntime.DirectML.${{ parameters.ort_version }}'
value: 'Microsoft.ML.OnnxRuntime.DirectML'
${{ else }}:
value: 'onnxruntime-${{ parameters.os }}-${{ parameters.arch }}-${{ parameters.ep}}-${{ parameters.ort_version }}'
value: 'Microsoft.ML.OnnxRuntime'

- name: dml_dir
value: 'Microsoft.AI.DirectML.1.14.1'
@@ -107,16 +109,16 @@ jobs:
- template: steps/capi-linux-step.yml
parameters:
target: 'python'
arch: ${{ parameters.arch }}
ep: ${{ parameters.ep }}

# Windows job needs to set the python version and install the required packages
- ${{ if eq(parameters.os, 'win') }}:
- template: steps/capi-win-step.yml
parameters:
target: 'python'
arch: ${{ parameters.arch }}
ep: ${{ parameters.ep }}

- ${{ if eq(parameters.publish_to_ado_feed, true)}}:
- template: steps/py-ado-feed-releasing-step.yml

- template: steps/compliant-and-cleanup-step.yml

28 changes: 23 additions & 5 deletions .pipelines/stages/jobs/steps/capi-linux-step.yml
@@ -1,6 +1,12 @@
parameters:
- name: target
type: string
- name: ep
type: string
default: 'cpu'
- name: arch
type: string
default: 'x64'
steps:

- checkout: self
@@ -26,6 +32,7 @@ steps:
echo "arch=$(arch)"
echo "ep=$(ep)"
displayName: 'Print Parameters'

- template: utils/download-ort.yml
parameters:
archiveType: 'tgz'
@@ -40,7 +47,7 @@ steps:
--container-registry onnxruntimebuildcache \
--manylinux-src manylinux \
--multiple_repos \
--repository onnxruntime$(ep)build$(arch)
--repository ortgenai$(ep)build$(arch)
displayName: 'Get Docker Image'
workingDirectory: '$(Build.Repository.LocalPath)'

@@ -50,14 +57,15 @@ steps:
docker run \
--rm \
--volume $(Build.Repository.LocalPath):/ort_genai_src \
-w /ort_genai_src/ onnxruntime$(ep)build$(arch) \
-w /ort_genai_src/ ortgenai$(ep)build$(arch) \
bash -c " \
/usr/bin/cmake --preset linux_gcc_$(ep)_release \
-DENABLE_TESTS=OFF && \
/usr/bin/cmake --build --preset linux_gcc_$(ep)_release \
--target onnxruntime-genai"
displayName: 'Build GenAi'
workingDirectory: '$(Build.Repository.LocalPath)'

- task: BinSkim@4
displayName: 'Run BinSkim'
inputs:
@@ -66,20 +74,30 @@ steps:
- template: utils/capi-archive.yml
parameters:
archiveType: tar
- script: |
set -e -x
docker run \
--rm \
--volume $(Build.Repository.LocalPath):/ort_genai_src \
-w /ort_genai_src/ ortgenai$(ep)build$(arch) \
bash -c " \
/usr/bin/cmake --build --preset linux_gcc_$(ep)_release --target package"
displayName: 'Package C/C++ API'
workingDirectory: '$(Build.Repository.LocalPath)'
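The new packaging step replaces the archive template with CMake's own `package` target, run inside the same build container used for compilation. A condensed sketch of how that docker invocation is composed; the helper function and the literal `ep`/`arch` values are illustrative stand-ins for the pipeline variables:

```shell
#!/usr/bin/env bash
# Sketch: compose the docker command the pipeline runs for the new
# CPack-based "Package C/C++ API" step. ep/arch mirror pipeline variables.
set -euo pipefail

package_cmd() {
  local ep="$1" arch="$2" src="$3"
  echo "docker run --rm --volume ${src}:/ort_genai_src" \
       "-w /ort_genai_src/ ortgenai${ep}build${arch}" \
       "bash -c '/usr/bin/cmake --build --preset linux_gcc_${ep}_release --target package'"
}

# Example: package_cmd cpu x64 "$PWD" prints the command the CI step would
# run against the CPU image; eval it on a host that has that image built.
```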

- task: PublishBuildArtifacts@1
displayName: 'Publish Artifact: ONNXRuntime Genai capi'
inputs:
ArtifactName: $(artifactName)-capi
PathtoPublish: '$(Build.ArtifactStagingDirectory)/capi'
PathtoPublish: '$(Build.Repository.LocalPath)/build/$(ep)/package'

- ${{ if eq(parameters.target, 'python') }}:
- bash: |
set -e -x
docker run \
--rm \
--volume $(Build.Repository.LocalPath):/ort_genai_src \
-w /ort_genai_src/ onnxruntime$(ep)build$(arch) \
-w /ort_genai_src/ ortgenai$(ep)build$(arch) \
bash -c " \
/usr/bin/cmake --preset linux_gcc_$(ep)_release \
-DENABLE_TESTS=OFF \
@@ -101,7 +119,7 @@ steps:
docker run \
--rm \
--volume $(Build.Repository.LocalPath):/ort_genai_src \
-w /ort_genai_src/ onnxruntime$(ep)build$(arch) \
-w /ort_genai_src/ ortgenai$(ep)build$(arch) \
bash -c " \
/usr/bin/cmake --build --preset linux_gcc_$(ep)_release \
-DENABLE_TESTS=OFF \
14 changes: 10 additions & 4 deletions .pipelines/stages/jobs/steps/capi-win-step.yml
@@ -5,6 +5,9 @@ parameters:
- name: ep
type: string
default: 'cpu'
- name: arch
type: string
default: 'x64'
steps:
- bash: |
echo "##[error]Error: ep and arch are not set"
@@ -31,6 +34,8 @@ steps:
echo "cuda_version=$(cuda_version)"
echo "target=${{ parameters.target }}"
displayName: 'Print Parameters'


- template: utils/download-ort.yml
parameters:
archiveType: 'zip'
@@ -83,15 +88,16 @@ steps:
AnalyzeTargetGlob: '$(Build.Repository.LocalPath)\**\*genai.dll'
continueOnError: true

- template: utils/capi-archive.yml
parameters:
archiveType: zip
- powershell: |
cmake --build --preset windows_$(arch)_$(ep)_release --target package
displayName: 'Package C/C++ API'
workingDirectory: '$(Build.Repository.LocalPath)'

- task: PublishBuildArtifacts@1
displayName: 'Publish Artifact: ONNXRuntime Genai capi'
inputs:
ArtifactName: $(artifactName)-capi
PathtoPublish: '$(Build.ArtifactStagingDirectory)/capi'
PathtoPublish: '$(Build.Repository.LocalPath)\build\$(ep)\package'

- ${{ if eq(parameters.target, 'python') }}:
- task: BinSkim@4