Enable Latent Consistency models ONNX export #1469
Conversation
The documentation is not available anymore as the PR was closed or merged.
```diff
@@ -435,7 +435,7 @@ def ordered_inputs(self, model: Union["PreTrainedModel", "TFPreTrainedModel"]) -
     sig = inspect.signature(model.call)

 for param in sig.parameters:
-    param_regex = re.compile(rf"{param}(\.\d*)?")
+    param_regex = re.compile(rf"{param}(\..*)?$")
```
This modification comes from `timestep` previously matching both `timestep` and `timestep_cond` (behavior that we don't want); we still want `past_key_value` to match `past_key_values.0.key` though.
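A minimal sketch contrasting the two patterns (assuming `re.match`-style prefix matching in `ordered_inputs`, which is why the `$` anchor matters; the `matches` helper is illustrative, not part of the PR):

```python
import re

OLD = r"{param}(\.\d*)?"   # unanchored: also matches any name that merely starts with param
NEW = r"{param}(\..*)?$"   # anchored: name must be param, or param plus a dotted suffix


def matches(template, param, name):
    # Mirrors a prefix-style re.match check against the compiled parameter regex.
    return re.match(template.format(param=param), name) is not None


# Old pattern: "timestep" also matched "timestep_cond" (the unwanted behavior).
assert matches(OLD, "timestep", "timestep_cond")
# New pattern: "timestep" no longer matches "timestep_cond" ...
assert not matches(NEW, "timestep", "timestep_cond")
# ... but "past_key_values" still matches "past_key_values.0.key".
assert matches(NEW, "past_key_values", "past_key_values.0.key")
```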
Force-pushed from 5f16642 to 683b39c.
LGTM, thanks @echarlaix !
```python
@parameterized.expand(SUPPORTED_ARCHITECTURES)
@require_diffusers
@unittest.skipIf(parse(_diffusers_version) <= Version("0.21.4"), "not supported with this diffusers version")
```
nit: I would specify the minimal `diffusers` version needed in the skip message.
Good point, will add it!
Add Latent Consistency models (added in `diffusers` in huggingface/diffusers#5438 and huggingface/diffusers#5448) ONNX export and pipeline to enable ONNX Runtime inference for text-to-image. Also enable ONNX export using the CLI:
optimum-cli export onnx --model SimianLuo/LCM_Dreamshaper_v7 lcm_onnx/
OpenVINO integration: huggingface/optimum-intel#463