[VitisAI] Refactor the VAIEP to use MSFT's standalone API #19058
Conversation
@jywu-msft Hi, could you start the pipeline?
@jywu-msft Could you start the pipeline when available? It would mean a lot for VitisAI.
can you fix the python lint errors?
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline
Azure Pipelines successfully started running 8 pipeline(s).
there are build failures in the Linux CPU Minimal and MacOS builds
/azp run Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline, Linux OpenVINO CI Pipeline
Azure Pipelines successfully started running 8 pipeline(s).
/azp run Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline, Linux OpenVINO CI Pipeline |
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline |
Azure Pipelines successfully started running 8 pipeline(s).
Azure Pipelines successfully started running 8 pipeline(s).
Thanks for this PR. I see you made the changes so that the Vitis AI EP can be built as a shared library and decoupled from onnxruntime.dll.
The factory creation methods of other shared libraries are also available by default in all versions. |
i'm wondering about these changes:

Similar to the above code, create_not_supported_status needs to be returned when ORT_MINIMAL_BUILD is defined. Because I have not customized provider_options, I put this code in the above function. When ORT_MINIMAL_BUILD is defined, I need to create VitisAIProviderFactoryCreator, so the corresponding header file is included.
https://onnxruntime.ai/docs/build/custom.html#minimal-build
since you haven't implemented any such APIs in provider_bridge, you don't need to worry about that case; there are no APIs implemented in provider_bridge that you need to add a stub for in a minimal build. If you look at all the code above and below, they all have `#if defined` guards for their respective EPs, and i think VitisAI should do the same (rather than guards for all non-minimal builds).
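For context, the guard pattern being discussed looks roughly like the following. This is a minimal, self-contained sketch: `StatusSketch`, `CreateNotSupportedStatus`, and the entry-point name are simplified stand-ins written for illustration, not the real ORT declarations.

```cpp
#include <string>

// Simplified stand-in for the real OrtStatus type (illustration only).
struct StatusSketch { std::string msg; };

// Hypothetical helper mirroring the create_not_supported_status idea
// mentioned in the comments above.
StatusSketch* CreateNotSupportedStatus(const char* what) {
  return new StatusSketch{std::string(what) + " is not supported in this build"};
}

// Entry point guarded the way the other EPs guard theirs: the real body
// is compiled only when the EP is enabled and this is not a minimal build;
// otherwise a "not supported" status is returned.
StatusSketch* SessionOptionsAppendExecutionProvider_VitisAI_Sketch() {
#if defined(USE_VITISAI) && !defined(ORT_MINIMAL_BUILD)
  // ... create the VitisAI provider factory here ...
  return nullptr;  // success
#else
  return CreateNotSupportedStatus("VitisAI execution provider");
#endif
}
```

With neither macro defined at compile time, the sketch entry point returns the "not supported" status rather than a success.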
@jywu-msft I made some modifications, can you start the pipeline when available?
there are several options for supporting provider options. if all your options can be represented by string keys and values, you can just use the generic API, i.e. you don't need to implement a VitisAI-specific API.
Otherwise, the other recommended pattern is an opaque struct: you'll see there are APIs to create the struct and update the struct. Older APIs, which required the application to directly update the OrtTensorRTProviderOptions struct, do not guarantee ABI compatibility.
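The opaque-struct pattern described above can be sketched in a self-contained way as follows. All names here are invented for illustration (the comment refers readers to the TensorRT EP's newer options APIs for the real create/update shape); the point is that the application never touches struct fields directly, so the layout can evolve without breaking ABI.

```cpp
#include <cstddef>
#include <map>
#include <string>

// In a public header the application would only see a forward declaration,
// e.g. `struct ProviderOptionsSketch;` -- the definition stays internal.
struct ProviderOptionsSketch {
  std::map<std::string, std::string> entries;  // layout free to change
};

// Create the options object behind the opaque pointer.
ProviderOptionsSketch* CreateProviderOptionsSketch() {
  return new ProviderOptionsSketch();
}

// Updates go through string key/value arrays instead of struct fields,
// so adding a new option never changes the ABI surface.
void UpdateProviderOptionsSketch(ProviderOptionsSketch* opts,
                                 const char* const* keys,
                                 const char* const* values,
                                 std::size_t num) {
  for (std::size_t i = 0; i < num; ++i) opts->entries[keys[i]] = values[i];
}

void ReleaseProviderOptionsSketch(ProviderOptionsSketch* opts) { delete opts; }
```

Usage mirrors the generic string key/value API: the caller passes parallel key and value arrays and the implementation interprets them internally.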
@jywu-msft
TensorRT EP, CUDA EP, OpenVINO EP and DNNL EP are all shared-lib EPs and not statically compiled into onnxruntime.dll, so you can follow the same pattern they use.
@jywu-msft I made some modifications, can you start the pipeline when available?
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline
Azure Pipelines successfully started running 8 pipeline(s).
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline
Azure Pipelines successfully started running 8 pipeline(s).
/azp run Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline, Linux OpenVINO CI Pipeline
Azure Pipelines successfully started running 8 pipeline(s).
@jywu-msft Do you think any other adjustments are needed?
I will take another pass through the PR and get back to you.
### Description
Resolving compilation errors when USE_VITISAI is enabled.

### Motivation and Context
There will be compilation errors when USE_VITISAI is enabled. This is in addition to #19058.

Co-authored-by: Zhenze Wang <[email protected]>
Description
Refactor the VAIEP to use MSFT's standalone API
Motivation and Context
Vitis ONNX RT VAI should switch to using the standalone API for ONNX EPs in order to decouple the EP from onnxruntime.dll and providers.dll. This will simplify deployment for customers whose applications need to share onnxruntime.dll with other applications.