
How can I get the graph after subgraph fusion stage (graph transformer)? #18451

Closed
fengyuentau opened this issue Nov 15, 2023 · 4 comments
@fengyuentau

Describe the issue

I want to get the graph after graph transformers are applied. How can I do that?

To reproduce

N/A

Urgency

No response

Platform

Mac

OS Version

Ventura 13.5.1

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.16.1

ONNX Runtime API

Python

Architecture

ARM64

Execution Provider

Default CPU

Execution Provider Library Version

No response

@satyajandhyala (Contributor)

satyajandhyala commented Nov 15, 2023

Please try specifying optimized_model_filepath in SessionOptions.
https://onnxruntime.ai/docs/api/python/api_summary.html#:~:text=property-,optimized_model_filepath,-%23

@fengyuentau (Author)

Could you also give an example of how to use these options?

@justinchuby (Contributor)

justinchuby commented Nov 16, 2023

The documentation linked above includes this example:

options = onnxruntime.SessionOptions()
options.optimized_model_filepath = "optimized.onnx"
session = onnxruntime.InferenceSession(
    'model.onnx',
    sess_options=options,
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider']
)

@fengyuentau (Author)

Thank you all for the help.


@justinchuby There is an extra parenthesis ")" at the end of your example.

Also worth mentioning that the default options.graph_optimization_level is ort.GraphOptimizationLevel.ORT_ENABLE_ALL.
