Split Onnxruntime Nuget GPU package #18819

Merged Dec 22, 2023
Changes from 10 commits (46 commits total)
Commits
de152ed
use flexdownload
mszhanyi Dec 5, 2023
e0f8f6c
add dependency
mszhanyi Dec 7, 2023
e3806e2
mv CUDA runtime
mszhanyi Dec 8, 2023
d3e907f
runtime
mszhanyi Dec 11, 2023
8f2fa45
add new msbuild task
mszhanyi Dec 11, 2023
2e180d5
temp disable package validation
mszhanyi Dec 11, 2023
8f199e7
typo
mszhanyi Dec 11, 2023
2a69f77
readd dotnet6.0
mszhanyi Dec 12, 2023
a260de8
netcoreapp3.1
mszhanyi Dec 12, 2023
250464b
Merge branch 'main' of https://github.com/microsoft/onnxruntime into …
mszhanyi Dec 14, 2023
b17c277
validate package
mszhanyi Dec 14, 2023
2549187
validate package1
mszhanyi Dec 14, 2023
42fe4bc
update
mszhanyi Dec 14, 2023
315d1c7
update
mszhanyi Dec 14, 2023
37ce9ca
fix
mszhanyi Dec 14, 2023
d059354
add build_dir
mszhanyi Dec 14, 2023
d47c136
Merge branch 'main' of https://github.com/microsoft/onnxruntime into …
mszhanyi Dec 14, 2023
93b2490
rm useless change
mszhanyi Dec 15, 2023
332fce2
check gpu_dependent
mszhanyi Dec 15, 2023
168c8b7
lint runner
mszhanyi Dec 15, 2023
6deb08a
update
mszhanyi Dec 15, 2023
b1607b3
flex download
mszhanyi Dec 15, 2023
05565cb
update
mszhanyi Dec 15, 2023
ce99728
update props.xml
mszhanyi Dec 15, 2023
87cf991
update Onnxrunitme.CSharp.proj
mszhanyi Dec 16, 2023
79d3b00
Merge branch 'main' of https://github.com/microsoft/onnxruntime into …
mszhanyi Dec 16, 2023
4ef8654
rename as comments
mszhanyi Dec 19, 2023
8e31c6c
rename 2
mszhanyi Dec 19, 2023
7e9071c
headers
mszhanyi Dec 19, 2023
4a5c10f
lint
mszhanyi Dec 19, 2023
1879bd5
generate sub package in one proj file
mszhanyi Dec 19, 2023
9205851
some refactor
mszhanyi Dec 19, 2023
aebb1d4
refact csproj
mszhanyi Dec 20, 2023
9204bae
make GPU.windows and GPU.linux can work
mszhanyi Dec 20, 2023
a4d2886
minor fix
mszhanyi Dec 20, 2023
a3da35b
update validate
mszhanyi Dec 20, 2023
0c277f8
fix
mszhanyi Dec 20, 2023
2fbde0e
typo
mszhanyi Dec 21, 2023
11834ba
add tests for new Gpu.Windows and Gpu.Linux
mszhanyi Dec 21, 2023
857c62a
update
mszhanyi Dec 21, 2023
c1d02b3
update description
mszhanyi Dec 21, 2023
1618cf2
run gpu test with new package
mszhanyi Dec 21, 2023
78c0c55
Update tools/nuget/generate_nuspec_for_native_nuget.py
mszhanyi Dec 21, 2023
84e24c9
wording
mszhanyi Dec 22, 2023
b7d49e5
Merge branch 'zhanyi/nugetpackage' of https://github.com/microsoft/on…
mszhanyi Dec 22, 2023
6af5361
fix lint
mszhanyi Dec 22, 2023
@@ -710,35 +710,46 @@ stages:
steps:
- checkout: self
submodules: true
- task: DownloadPipelineArtifact@2
displayName: 'Download Pipeline Artifact - NuGet'
inputs:
artifactName: 'onnxruntime-win-x64-cuda'
targetPath: '$(Build.BinariesDirectory)/nuget-artifact'

- task: DownloadPipelineArtifact@2
displayName: 'Download Pipeline Artifact - NuGet'
inputs:
artifactName: 'onnxruntime-win-x64-tensorrt'
targetPath: '$(Build.BinariesDirectory)/nuget-artifact'
- template: templates/flex-downloadPipelineArtifact.yml
parameters:
StepName: 'Download Pipeline Artifact - NuGet'
ArtifactName: 'onnxruntime-win-x64-cuda'
TargetPath: '$(Build.BinariesDirectory)/nuget-artifact'
SpecificArtifact: ${{ parameters.SpecificArtifact }}
BuildId: ${{ parameters.BuildId }}

- task: DownloadPipelineArtifact@2
displayName: 'Download Pipeline Artifact - NuGet'
inputs:
artifactName: 'onnxruntime-linux-x64-cuda'
targetPath: '$(Build.BinariesDirectory)/nuget-artifact'
- template: templates/flex-downloadPipelineArtifact.yml
parameters:
StepName: 'Download Pipeline Artifact - NuGet'
ArtifactName: 'onnxruntime-win-x64-tensorrt'
TargetPath: '$(Build.BinariesDirectory)/nuget-artifact'
SpecificArtifact: ${{ parameters.SpecificArtifact }}
BuildId: ${{ parameters.BuildId }}

- task: DownloadPipelineArtifact@2
displayName: 'Download Pipeline Artifact - NuGet'
inputs:
artifactName: 'onnxruntime-linux-x64-tensorrt'
targetPath: '$(Build.BinariesDirectory)/nuget-artifact'
- template: templates/flex-downloadPipelineArtifact.yml
parameters:
StepName: 'Download Pipeline Artifact - NuGet'
ArtifactName: 'onnxruntime-linux-x64-cuda'
TargetPath: '$(Build.BinariesDirectory)/nuget-artifact'
SpecificArtifact: ${{ parameters.SpecificArtifact }}
BuildId: ${{ parameters.BuildId }}

- task: DownloadPipelineArtifact@2
displayName: 'Download Pipeline Artifact - NuGet'
inputs:
artifactName: 'drop-extra'
targetPath: '$(Build.BinariesDirectory)/extra-artifact'
- template: templates/flex-downloadPipelineArtifact.yml
parameters:
StepName: 'Download Pipeline Artifact - NuGet'
ArtifactName: 'onnxruntime-linux-x64-tensorrt'
TargetPath: '$(Build.BinariesDirectory)/nuget-artifact'
SpecificArtifact: ${{ parameters.SpecificArtifact }}
BuildId: ${{ parameters.BuildId }}

- template: templates/flex-downloadPipelineArtifact.yml
parameters:
StepName: 'Download Pipeline Artifact - NuGet'
ArtifactName: 'drop-extra'
TargetPath: '$(Build.BinariesDirectory)/extra-artifact'
SpecificArtifact: ${{ parameters.SpecificArtifact }}
BuildId: ${{ parameters.BuildId }}

# Reconstruct the build dir
- task: PowerShell@2
@@ -795,14 +806,32 @@ stages:
DoEsrp: ${{ parameters.DoEsrp }}

- task: MSBuild@1
displayName: 'Build Nuget Packages'
displayName: 'Build Primary Nuget Packages Microsoft.ML.OnnxRuntime.Gpu'
inputs:
solution: '$(Build.SourcesDirectory)\csharp\OnnxRuntime.CSharp.proj'
configuration: RelWithDebInfo
platform: 'Any CPU'
msbuildArguments: '-t:CreatePackage -p:OnnxRuntimeBuildDirectory="$(Build.BinariesDirectory)" -p:OrtPackageId=Microsoft.ML.OnnxRuntime.Gpu -p:IsReleaseBuild=${{ parameters.IsReleaseBuild }} -p:ReleaseVersionSuffix=$(ReleaseVersionSuffix)'
workingDirectory: '$(Build.SourcesDirectory)\csharp'

- task: MSBuild@1
displayName: 'Build Nuget Packages Microsoft.ML.OnnxRuntime.Gpu-win'
inputs:
solution: '$(Build.SourcesDirectory)\csharp\OnnxRuntime.CSharp.proj'
configuration: RelWithDebInfo
platform: 'Any CPU'
msbuildArguments: '-t:CreatePackage -p:OnnxRuntimeBuildDirectory="$(Build.BinariesDirectory)" -p:OrtPackageId=Microsoft.ML.OnnxRuntime.Gpu-win -p:IsReleaseBuild=${{ parameters.IsReleaseBuild }} -p:ReleaseVersionSuffix=$(ReleaseVersionSuffix)'
workingDirectory: '$(Build.SourcesDirectory)\csharp'

- task: MSBuild@1
displayName: 'Build Nuget Packages Microsoft.ML.OnnxRuntime.Gpu-linux'
inputs:
solution: '$(Build.SourcesDirectory)\csharp\OnnxRuntime.CSharp.proj'
configuration: RelWithDebInfo
platform: 'Any CPU'
msbuildArguments: '-t:CreatePackage -p:OnnxRuntimeBuildDirectory="$(Build.BinariesDirectory)" -p:OrtPackageId=Microsoft.ML.OnnxRuntime.Gpu-linux -p:IsReleaseBuild=${{ parameters.IsReleaseBuild }} -p:ReleaseVersionSuffix=$(ReleaseVersionSuffix)'
workingDirectory: '$(Build.SourcesDirectory)\csharp'
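The three MSBuild tasks above are identical except for the `OrtPackageId` property, which selects the meta package or one of the two new OS-specific sub-packages. A minimal sketch of how the argument string varies per package (the helper name `create_package_args` is hypothetical, not part of this PR):

```python
# Hypothetical helper: rebuilds the msbuildArguments string used by the three
# 'Build Nuget Packages' tasks above; only OrtPackageId changes between them.
def create_package_args(ort_package_id,
                        build_dir='$(Build.BinariesDirectory)',
                        is_release_build='${{ parameters.IsReleaseBuild }}',
                        version_suffix='$(ReleaseVersionSuffix)'):
    return (
        f'-t:CreatePackage -p:OnnxRuntimeBuildDirectory="{build_dir}" '
        f'-p:OrtPackageId={ort_package_id} '
        f'-p:IsReleaseBuild={is_release_build} '
        f'-p:ReleaseVersionSuffix={version_suffix}'
    )

# One invocation per package produced by this pipeline stage.
for pkg in ('Microsoft.ML.OnnxRuntime.Gpu',
            'Microsoft.ML.OnnxRuntime.Gpu-win',
            'Microsoft.ML.OnnxRuntime.Gpu-linux'):
    print(create_package_args(pkg))
```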

- task: BatchScript@1
displayName: 'Add TensorRT header file to the native nuGet package'
inputs:
@@ -836,13 +865,13 @@ stages:
FolderPath: '$(Build.ArtifactStagingDirectory)'
DoEsrp: ${{ parameters.DoEsrp }}

- template: templates/validate-package.yml
parameters:
PackageType: 'nuget'
PackagePath: '$(Build.ArtifactStagingDirectory)'
PackageName: 'Microsoft.ML.OnnxRuntime.*nupkg'
PlatformsSupported: 'win-x64,linux-x64'
VerifyNugetSigning: false
#- template: templates/validate-package.yml
# parameters:
# PackageType: 'nuget'
# PackagePath: '$(Build.ArtifactStagingDirectory)'
# PackageName: 'Microsoft.ML.OnnxRuntime.*nupkg'
# PlatformsSupported: 'win-x64,linux-x64'
# VerifyNugetSigning: false

- task: PublishPipelineArtifact@0
displayName: 'Publish Pipeline NuGet Artifact'
54 changes: 36 additions & 18 deletions tools/nuget/generate_nuspec_for_native_nuget.py
@@ -39,15 +39,18 @@
# Currently we take onnxruntime_providers_cuda from CUDA build
# And onnxruntime, onnxruntime_providers_shared and
# onnxruntime_providers_tensorrt from tensorrt build
def is_this_file_needed(ep, filename):
# CUDA binaries are in the dependent packages, not in Microsoft.ML.OnnxRuntime.Gpu
def is_this_file_needed(ep, filename, package_name):
if package_name == "Microsoft.ML.OnnxRuntime.Gpu":
return False
return (ep != "cuda" or "cuda" in filename) and (ep != "tensorrt" or "cuda" not in filename)
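The filtering rule above can be exercised in isolation. This sketch copies the predicate verbatim from the diff and shows its effect on the CUDA provider binary for each package:

```python
# Copied from the diff above: decides whether a binary belongs in a package.
# The meta package Microsoft.ML.OnnxRuntime.Gpu carries no native binaries;
# for the sub-packages, the CUDA build contributes only its own provider
# library, and the TensorRT build contributes everything except it.
def is_this_file_needed(ep, filename, package_name):
    if package_name == "Microsoft.ML.OnnxRuntime.Gpu":
        return False
    return (ep != "cuda" or "cuda" in filename) and (ep != "tensorrt" or "cuda" not in filename)

# The CUDA provider comes from the CUDA build, not the TensorRT build:
print(is_this_file_needed("cuda", "onnxruntime_providers_cuda.dll", "Microsoft.ML.OnnxRuntime.Gpu-win"))      # True
print(is_this_file_needed("tensorrt", "onnxruntime_providers_cuda.dll", "Microsoft.ML.OnnxRuntime.Gpu-win"))  # False
print(is_this_file_needed("cuda", "onnxruntime_providers_cuda.dll", "Microsoft.ML.OnnxRuntime.Gpu"))          # False
```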


# nuget_artifacts_dir: the directory with uncompressed C API tarball/zip files
# ep: cuda, tensorrt, None
# files_list: a list of xml string pieces to append
# This function has no return value. It updates files_list directly
def generate_file_list_for_ep(nuget_artifacts_dir, ep, files_list, include_pdbs, is_training_package):
def generate_file_list_for_ep(nuget_artifacts_dir, ep, files_list, include_pdbs, is_training_package, package_name):
for child in nuget_artifacts_dir.iterdir():
if not child.is_dir():
continue
@@ -57,7 +60,7 @@
child = child / "lib" # noqa: PLW2901
for child_file in child.iterdir():
suffixes = [".dll", ".lib", ".pdb"] if include_pdbs else [".dll", ".lib"]
if child_file.suffix in suffixes and is_this_file_needed(ep, child_file.name):
if child_file.suffix in suffixes and is_this_file_needed(ep, child_file.name, package_name) and package_name != "Microsoft.ML.OnnxRuntime.Gpu-linux":
files_list.append(
'<file src="' + str(child_file) + '" target="runtimes/win-%s/native"/>' % cpu_arch
)
@@ -83,7 +86,7 @@
for child_file in child.iterdir():
if not child_file.is_file():
continue
if child_file.suffix == ".so" and is_this_file_needed(ep, child_file.name):
if child_file.suffix == ".so" and is_this_file_needed(ep, child_file.name, package_name) and package_name != "Microsoft.ML.OnnxRuntime.Gpu-win":
files_list.append(
'<file src="' + str(child_file) + '" target="runtimes/linux-%s/native"/>' % cpu_arch
)
@@ -193,6 +196,14 @@
line_list.append('<repository type="git" url="' + repo_url + '"' + ' commit="' + commit_id + '" />')


def add_common_dependencies(xml_text, package_name, version):
dependent_packages = package_name in ("Microsoft.ML.OnnxRuntime.Gpu-win", "Microsoft.ML.OnnxRuntime.Gpu-linux")
if not dependent_packages:
xml_text.append('<dependency id="Microsoft.ML.OnnxRuntime.Managed"' + ' version="' + version + '"/>')
if package_name == "Microsoft.ML.OnnxRuntime.Gpu":
xml_text.append('<dependency id="Microsoft.ML.OnnxRuntime.Gpu-win"' + ' version="' + version + '"/>')
xml_text.append('<dependency id="Microsoft.ML.OnnxRuntime.Gpu-linux"' + ' version="' + version + '"/>')
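The dependency wiring above can be checked standalone: the meta package references both OS sub-packages plus the Managed package, while the sub-packages carry no dependencies of their own. A self-contained sketch (the version string `1.17.0` is only an illustrative placeholder):

```python
# Standalone restatement of the add_common_dependencies logic above.
def add_common_dependencies(xml_text, package_name, version):
    # The per-OS sub-packages must not drag in the Managed package themselves;
    # the meta package references Managed plus both sub-packages.
    dependent_packages = package_name in (
        "Microsoft.ML.OnnxRuntime.Gpu-win",
        "Microsoft.ML.OnnxRuntime.Gpu-linux",
    )
    if not dependent_packages:
        xml_text.append(f'<dependency id="Microsoft.ML.OnnxRuntime.Managed" version="{version}"/>')
    if package_name == "Microsoft.ML.OnnxRuntime.Gpu":
        xml_text.append(f'<dependency id="Microsoft.ML.OnnxRuntime.Gpu-win" version="{version}"/>')
        xml_text.append(f'<dependency id="Microsoft.ML.OnnxRuntime.Gpu-linux" version="{version}"/>')

deps = []
add_common_dependencies(deps, "Microsoft.ML.OnnxRuntime.Gpu", "1.17.0")
print(deps)  # Managed, Gpu-win, and Gpu-linux dependency entries
```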

def generate_dependencies(xml_text, package_name, version):
dml_dependency = '<dependency id="Microsoft.AI.DirectML" version="1.12.1"/>'

@@ -215,21 +226,22 @@
include_dml = package_name == "Microsoft.ML.OnnxRuntime.DirectML"

xml_text.append("<dependencies>")

# Support .Net Core
xml_text.append('<group targetFramework="NETCOREAPP">')
xml_text.append('<dependency id="Microsoft.ML.OnnxRuntime.Managed"' + ' version="' + version + '"/>')
add_common_dependencies(xml_text, package_name, version)
if include_dml:
xml_text.append(dml_dependency)
xml_text.append("</group>")
# Support .Net Standard
xml_text.append('<group targetFramework="NETSTANDARD">')
xml_text.append('<dependency id="Microsoft.ML.OnnxRuntime.Managed"' + ' version="' + version + '"/>')
add_common_dependencies(xml_text, package_name, version)
if include_dml:
xml_text.append(dml_dependency)
xml_text.append("</group>")
# Support .Net Framework
xml_text.append('<group targetFramework="NETFRAMEWORK">')
xml_text.append('<dependency id="Microsoft.ML.OnnxRuntime.Managed"' + ' version="' + version + '"/>')
add_common_dependencies(xml_text, package_name, version)
if include_dml:
xml_text.append(dml_dependency)
xml_text.append("</group>")
@@ -324,6 +336,8 @@
]
is_mklml_package = args.package_name == "Microsoft.ML.OnnxRuntime.MKLML"
is_cuda_gpu_package = args.package_name == "Microsoft.ML.OnnxRuntime.Gpu"
is_cuda_gpu_win_package = args.package_name == "Microsoft.ML.OnnxRuntime.Gpu-win"
is_cuda_gpu_linux_package = args.package_name == "Microsoft.ML.OnnxRuntime.Gpu-linux"
is_rocm_gpu_package = args.package_name == "Microsoft.ML.OnnxRuntime.ROCm"
is_dml_package = args.package_name == "Microsoft.ML.OnnxRuntime.DirectML"
is_windowsai_package = args.package_name == "Microsoft.AI.MachineLearning"
@@ -389,23 +403,25 @@
runtimes = f'{runtimes_target}{args.target_architecture}\\{runtimes_native_folder}"'

# Process headers
build_dir = "buildTransitive" if "Microsoft.ML.OnnxRuntime.Gpu" in args.package_name else "build"
include_dir = f"{build_dir}\\native\\include"
files_list.append(
"<file src="
+ '"'
+ os.path.join(args.sources_path, "include\\onnxruntime\\core\\session\\onnxruntime_*.h")
+ '" target="build\\native\\include" />'
+ '" target="' + include_dir + '" />'
)
files_list.append(
"<file src="
+ '"'
+ os.path.join(args.sources_path, "include\\onnxruntime\\core\\framework\\provider_options.h")
+ '" target="build\\native\\include" />'
+ '" target="' + include_dir + '" />'
)
files_list.append(
"<file src="
+ '"'
+ os.path.join(args.sources_path, "include\\onnxruntime\\core\\providers\\cpu\\cpu_provider_factory.h")
+ '" target="build\\native\\include" />'
+ '" target="' + include_dir + '" />'
)
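The `build_dir` switch above routes the split GPU packages' headers, props, and targets under `buildTransitive`, so they also flow to transitive `PackageReference` consumers; every other package keeps the classic `build` folder. A sketch of that selection (the helper name `include_dir_for` is hypothetical):

```python
# Sketch of the header-target selection above: any package whose id contains
# "Microsoft.ML.OnnxRuntime.Gpu" uses buildTransitive, the rest use build.
def include_dir_for(package_name):
    build_dir = "buildTransitive" if "Microsoft.ML.OnnxRuntime.Gpu" in package_name else "build"
    return build_dir + "\\native\\include"

print(include_dir_for("Microsoft.ML.OnnxRuntime.Gpu-win"))  # buildTransitive\native\include
print(include_dir_for("Microsoft.ML.OnnxRuntime"))          # build\native\include
```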

if is_training_package:
@@ -531,14 +547,14 @@
if nuget_artifacts_dir.exists():
# Code path for ADO build pipeline, the files under 'nuget-artifacts' are
# downloaded from other build jobs
if is_cuda_gpu_package:
if is_cuda_gpu_package or is_cuda_gpu_win_package or is_cuda_gpu_linux_package:
ep_list = ["tensorrt", "cuda", None]
elif is_rocm_gpu_package:
ep_list = ["rocm", None]
else:
ep_list = [None]
for ep in ep_list:
generate_file_list_for_ep(nuget_artifacts_dir, ep, files_list, include_pdbs, is_training_package)
generate_file_list_for_ep(nuget_artifacts_dir, ep, files_list, include_pdbs, is_training_package, args.package_name)
is_ado_packaging_build = True
else:
# Code path for local dev build
@@ -732,7 +748,7 @@
+ '\\native" />'
)

if args.execution_provider == "cuda" or is_cuda_gpu_package and not is_ado_packaging_build:
if args.execution_provider == "cuda" or (is_cuda_gpu_win_package and not is_ado_packaging_build):
files_list.append(
"<file src="
+ '"'
@@ -838,7 +854,7 @@
windowsai_rules = "Microsoft.AI.MachineLearning.Rules.Project.xml"
windowsai_native_rules = os.path.join(args.sources_path, "csharp", "src", windowsai_src, windowsai_rules)
windowsai_native_targets = os.path.join(args.sources_path, "csharp", "src", windowsai_src, windowsai_targets)
build = "build\\native"
build = f"{build_dir}\\native"
files_list.append("<file src=" + '"' + windowsai_native_props + '" target="' + build + '" />')
# Process native targets
files_list.append("<file src=" + '"' + windowsai_native_targets + '" target="' + build + '" />')
@@ -877,9 +893,10 @@
args.package_name + ".props",
)
os.system(copy_command + " " + source_props + " " + target_props)
files_list.append("<file src=" + '"' + target_props + '" target="build\\native" />')
files_list.append("<file src=" + '"' + target_props + '" target="' + build_dir + '\\native" />')
if not is_snpe_package and not is_qnn_package:
files_list.append("<file src=" + '"' + target_props + '" target="build\\netstandard2.0" />')
files_list.append("<file src=" + '"' + target_props + '" target="' + build_dir + '\\netstandard2.0" />')
files_list.append("<file src=" + '"' + target_props + '" target="' + build_dir + '\\netstandard2.1" />')

# Process targets file
source_targets = os.path.join(
@@ -895,9 +912,10 @@
args.package_name + ".targets",
)
os.system(copy_command + " " + source_targets + " " + target_targets)
files_list.append("<file src=" + '"' + target_targets + '" target="build\\native" />')
files_list.append("<file src=" + '"' + target_targets + '" target="' + build_dir + '\\native" />')
if not is_snpe_package and not is_qnn_package:
files_list.append("<file src=" + '"' + target_targets + '" target="build\\netstandard2.0" />')
files_list.append("<file src=" + '"' + target_targets + '" target="' + build_dir + '\\netstandard2.0" />')
files_list.append("<file src=" + '"' + target_targets + '" target="' + build_dir + '\\netstandard2.1" />')

# Process xamarin targets files
if args.package_name == "Microsoft.ML.OnnxRuntime":