
Add ModelProto support for transformers optimize_model #19990

Merged: 11 commits merged into microsoft:main on Mar 23, 2024

Conversation

xiaoyu-work (Contributor)

Description

Add ModelProto support as an input to the transformers optimize_model API.

Motivation and Context

Currently, the optimize_model API accepts only a model path as the input model. For large models, however, saving to and loading from disk can be time-consuming. Accepting a ModelProto directly in optimize_model avoids that round trip and can save significant time.
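For illustration, a minimal sketch of how the new input option could be used; the file names and BERT parameter values below are placeholders, not taken from the PR:

```python
import onnx
from onnxruntime.transformers.optimizer import optimize_model

# Load (or build) a ModelProto in memory; "model.onnx" is a hypothetical path.
model_proto = onnx.load("model.onnx")

# With this PR, optimize_model accepts the in-memory ModelProto directly,
# instead of requiring a path to a model file on disk.
optimized = optimize_model(
    model_proto,
    model_type="bert",  # illustrative values
    num_heads=12,
    hidden_size=768,
)

# The returned OnnxModel can still be written out when needed.
optimized.save_model_to_file("model_opt.onnx")
```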

justinchuby (Contributor) previously approved these changes Mar 20, 2024

LGTM with nits. Thanks!

onnxruntime/python/tools/transformers/optimizer.py: 6 review comments (outdated, resolved)
tianleiwu (Contributor) commented Mar 20, 2024

The Python format pipeline failed.

You need to run the lint runner (https://github.com/microsoft/onnxruntime/blob/main/docs/Coding_Conventions_and_Standards.md#linting), for example:

pip install -r requirements-lintrunner.txt
pip install lintrunner lintrunner-adapters
lintrunner init
lintrunner -a

justinchuby previously approved these changes Mar 21, 2024

onnxruntime/python/tools/transformers/optimizer.py: 1 review comment (outdated, resolved)
@tianleiwu (Contributor):

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

@tianleiwu (Contributor):

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline


Azure Pipelines successfully started running 8 pipeline(s).


Azure Pipelines successfully started running 10 pipeline(s).

@tianleiwu (Contributor):

/azp run Big Models


Azure Pipelines successfully started running 1 pipeline(s).

@tianleiwu (Contributor):

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

@tianleiwu (Contributor):

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline


Azure Pipelines successfully started running 10 pipeline(s).


Azure Pipelines successfully started running 8 pipeline(s).

@@ -40,6 +41,9 @@
 from onnx_model_unet import UnetOnnxModel
 from onnx_model_vae import VaeOnnxModel

+import onnxruntime

Check notice (Code scanning / CodeQL): Module is imported with 'import' and 'import from' (Note)

Module 'onnxruntime' is imported with both 'import' and 'import from'.
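For reference, the pattern this notice describes looks like the following hypothetical snippet; the member import shown is illustrative, not the actual line in optimizer.py:

```python
# CodeQL flags mixing both import styles for the same module, since it
# makes it harder to trace where the module's names come from.
import onnxruntime
from onnxruntime import get_available_providers

session_options = onnxruntime.SessionOptions()  # used via the module import
providers = get_available_providers()           # used via the member import
```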
@tianleiwu (Contributor):

/azp run Big Models


Azure Pipelines successfully started running 1 pipeline(s).

@justinchuby justinchuby merged commit 71551da into microsoft:main Mar 23, 2024
73 checks passed
TedThemistokleous pushed a commit to TedThemistokleous/onnxruntime that referenced this pull request May 7, 2024