Enable Latent Consistency models ONNX export #1469
Merged
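With this change merged, a Latent Consistency model can be exported through the standard Optimum ONNX export entry point; the task is inferred automatically from the model (see the "fix infered task" commit below). A minimal sketch, assuming `SimianLuo/LCM_Dreamshaper_v7` as an example model ID (not taken from this PR):

```shell
# Export an example Latent Consistency model to ONNX into the lcm_onnx/ directory.
# The model ID is illustrative; any LCM checkpoint on the Hub should work the same way.
optimum-cli export onnx --model SimianLuo/LCM_Dreamshaper_v7 lcm_onnx/
```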
Commits (23)

ad93b7f  Enable export latent consistency model (echarlaix)
057e576  add pipeline (echarlaix)
8af29e8  format (echarlaix)
da9aaa5  fix docstring (echarlaix)
3411b84  fix (echarlaix)
2d0142d  format (echarlaix)
6c54062  format (echarlaix)
74b02d9  modify regex pattern (echarlaix)
ec1da51  remove constraint diffusers version (echarlaix)
510db7e  fix typo (echarlaix)
d6cf152  fix regex (echarlaix)
af4f2e3  fix pipeline (echarlaix)
a44ed5a  fix infered task (echarlaix)
ac878ac  style (echarlaix)
46dc653  add test (echarlaix)
4180d1b  fix style (echarlaix)
8d4069c  fix style (echarlaix)
824fc57  add documentation (echarlaix)
683b39c  fix (echarlaix)
1604240  add precision for diffusers min version (echarlaix)
583ba14  move import (echarlaix)
f968810  rm install from source (echarlaix)
d723e4b  format (echarlaix)
optimum/pipelines/diffusers/pipeline_latent_consistency.py (230 additions, 0 deletions)
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
from typing import Callable, List, Optional, Union

import numpy as np
import torch
from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput

from .pipeline_stable_diffusion import StableDiffusionPipelineMixin


logger = logging.getLogger(__name__)


class LatentConsistencyPipelineMixin(StableDiffusionPipelineMixin):
    # Adapted from https://github.com/huggingface/diffusers/blob/v0.22.0/src/diffusers/pipelines/latent_consistency/pipeline_latent_consistency.py#L264
    def __call__(
        self,
        prompt: Optional[Union[str, List[str]]] = None,
        height: Optional[int] = None,
        width: Optional[int] = None,
        num_inference_steps: int = 4,
        original_inference_steps: Optional[int] = None,
        guidance_scale: float = 8.5,
        num_images_per_prompt: int = 1,
        generator: Optional[np.random.RandomState] = None,
        latents: Optional[np.ndarray] = None,
        prompt_embeds: Optional[np.ndarray] = None,
        output_type: str = "pil",
        return_dict: bool = True,
        callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
        callback_steps: int = 1,
    ):
        r"""
        Function invoked when calling the pipeline for generation.

        Args:
            prompt (`Optional[Union[str, List[str]]]`, defaults to `None`):
                The prompt or prompts to guide the image generation. If not defined, `prompt_embeds` has to be passed
                instead.
            height (`Optional[int]`, defaults to `None`):
                The height in pixels of the generated image.
            width (`Optional[int]`, defaults to `None`):
                The width in pixels of the generated image.
            num_inference_steps (`int`, defaults to 4):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference.
            original_inference_steps (`Optional[int]`, defaults to `None`):
                The original number of inference steps used to generate the LCM timestep schedule, from which
                `num_inference_steps` evenly spaced timesteps are drawn. If `None`, the scheduler default is used.
            guidance_scale (`float`, defaults to 8.5):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
                1`. A higher guidance scale encourages the model to generate images that are closely linked to the
                text `prompt`, usually at the expense of lower image quality.
            num_images_per_prompt (`int`, defaults to 1):
                The number of images to generate per prompt.
            generator (`Optional[np.random.RandomState]`, defaults to `None`):
                A `np.random.RandomState` to make generation deterministic.
            latents (`Optional[np.ndarray]`, defaults to `None`):
                Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
                tensor will be generated by sampling using the supplied random `generator`.
            prompt_embeds (`Optional[np.ndarray]`, defaults to `None`):
                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
                provided, text embeddings will be generated from the `prompt` input argument.
            output_type (`str`, defaults to `"pil"`):
                The output format of the generated image. Choose between
                [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
            return_dict (`bool`, defaults to `True`):
                Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
                plain tuple.
            callback (`Optional[Callable]`, defaults to `None`):
                A function that will be called every `callback_steps` steps during inference. The function will be
                called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
            callback_steps (`int`, defaults to 1):
                The frequency at which the `callback` function will be called. If not specified, the callback will be
                called at every step.

        Returns:
            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
                [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is `True`, otherwise a
                `tuple`. When returning a tuple, the first element is a list with the generated images, and the second
                element is a list of `bool`s denoting whether the corresponding generated image likely represents
                "not-safe-for-work" (nsfw) content, according to the `safety_checker`.
        """
        height = height or self.unet.config["sample_size"] * self.vae_scale_factor
        width = width or self.unet.config["sample_size"] * self.vae_scale_factor

        # No negative prompts are needed: LCM guided distillation bakes classifier-free guidance into the model
        negative_prompt = None
        negative_prompt_embeds = None

        # Check inputs. Raise an error if not correct
        self.check_inputs(
            prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds
        )

        # Define call parameters
        if isinstance(prompt, str):
            batch_size = 1
        elif isinstance(prompt, list):
            batch_size = len(prompt)
        else:
            batch_size = prompt_embeds.shape[0]

        if generator is None:
            generator = np.random

        prompt_embeds = self._encode_prompt(
            prompt,
            num_images_per_prompt,
            False,
            negative_prompt,
            prompt_embeds=prompt_embeds,
            negative_prompt_embeds=negative_prompt_embeds,
        )

        # Set timesteps
        self.scheduler.set_timesteps(num_inference_steps, original_inference_steps=original_inference_steps)
        timesteps = self.scheduler.timesteps

        latents = self.prepare_latents(
            batch_size * num_images_per_prompt,
            self.unet.config["in_channels"],
            height,
            width,
            prompt_embeds.dtype,
            generator,
            latents,
        )

        bs = batch_size * num_images_per_prompt
        # Get the guidance scale embedding
        w = np.full(bs, guidance_scale - 1, dtype=prompt_embeds.dtype)
        w_embedding = self.get_guidance_scale_embedding(
            w, embedding_dim=self.unet.config["time_cond_proj_dim"], dtype=prompt_embeds.dtype
        )

        # Adapted from diffusers to extend it for other runtimes than ORT
        timestep_dtype = self.unet.input_dtype.get("timestep", np.float32)

        num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
        for i, t in enumerate(self.progress_bar(timesteps)):
            timestep = np.array([t], dtype=timestep_dtype)
            noise_pred = self.unet(
                sample=latents,
                timestep=timestep,
                encoder_hidden_states=prompt_embeds,
                timestep_cond=w_embedding,
            )[0]

            # Compute the previous noisy sample x_t -> x_t-1
            latents, denoised = self.scheduler.step(
                torch.from_numpy(noise_pred), t, torch.from_numpy(latents), return_dict=False
            )
            latents, denoised = latents.numpy(), denoised.numpy()

            # Call the callback, if provided
            if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
                if callback is not None and i % callback_steps == 0:
                    callback(i, t, latents)

        if output_type == "latent":
            image = denoised
            has_nsfw_concept = None
        else:
            denoised /= self.vae_decoder.config["scaling_factor"]
            # The half-precision VAE decoder can produce incorrect results when the batch size is greater than 1,
            # so decode the latents one sample at a time
            image = np.concatenate(
                [self.vae_decoder(latent_sample=denoised[i : i + 1])[0] for i in range(denoised.shape[0])]
            )
            image, has_nsfw_concept = self.run_safety_checker(image)

        if has_nsfw_concept is None:
            do_denormalize = [True] * image.shape[0]
        else:
            do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]

        image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)

        if not return_dict:
            return (image, has_nsfw_concept)

        return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)

    # Adapted from https://github.com/huggingface/diffusers/blob/v0.22.0/src/diffusers/pipelines/latent_consistency/pipeline_latent_consistency.py#L264
    def get_guidance_scale_embedding(self, w, embedding_dim=512, dtype=None):
        """
        See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298

        Args:
            w (`np.ndarray`):
                Guidance scale values for which to generate embedding vectors.
            embedding_dim (`int`, *optional*, defaults to 512):
                Dimension of the embeddings to generate.
            dtype:
                Data type of the generated embeddings.

        Returns:
            `np.ndarray`: Embedding vectors with shape `(len(w), embedding_dim)`
        """
        w = w * 1000
        half_dim = embedding_dim // 2
        emb = np.log(10000.0) / (half_dim - 1)
        emb = np.exp(np.arange(half_dim, dtype=dtype) * -emb)
        emb = w[:, None] * emb[None, :]
        emb = np.concatenate([np.sin(emb), np.cos(emb)], axis=1)

        if embedding_dim % 2 == 1:  # zero pad
            emb = np.pad(emb, [(0, 0), (0, 1)])

        assert emb.shape == (w.shape[0], embedding_dim)
        return emb
this modification comes from `timestep` matching both `timestep` and `timestep_cond` previously (behavior that we don't want); we still want `past_key_value` to match `past_key_values.0.key` though