doc(diffusers): update docs for v0.30 upgrade
townwish4git committed Nov 8, 2024
1 parent 60c5f39 commit 535d05e
Showing 31 changed files with 498 additions and 50 deletions.
14 changes: 14 additions & 0 deletions docs/diffusers/_toctree.yml
@@ -175,8 +175,12 @@
title: UVQModel
- local: api/models/autoencoderkl
title: AutoEncoderKL
- local: api/models/autoencoderkl_cogvideox
title: AutoencoderKLCogVideoX
- local: api/models/asymmetricautoencoderkl
title: AsymmetricAutoEncoderKL
- local: api/models/stable_cascade_unet
title: StableCascadeUNet
- local: api/models/autoencoder_tiny
title: Tiny AutoEncoder
- local: api/models/consistency_decoder_vae
@@ -189,6 +193,10 @@
title: DiTTransformer2DModel
- local: api/models/hunyuan_transformer2d
title: HunyuanDiT2DModel
- local: api/models/flux_transformer
title: FluxTransformer2DModel
- local: api/models/cogvideox_transformer3d
title: CogVideoXTransformer3DModel
- local: api/models/transformer_temporal
title: TransformerTemporalModel
- local: api/models/sd3_transformer2d
@@ -208,6 +216,8 @@
title: AnimateDiff
- local: api/pipelines/blip_diffusion
title: BLIP-Diffusion
- local: api/pipelines/cogvideox
title: CogVideoX
- local: api/pipelines/consistency_models
title: Consistency Models
- local: api/pipelines/controlnet
@@ -230,6 +240,8 @@
title: DiffEdit
- local: api/pipelines/dit
title: DiT
- local: api/pipelines/flux
title: Flux
- local: api/pipelines/hunyuandit
title: Hunyuan-DiT
- local: api/pipelines/i2vgenxl
@@ -325,6 +337,8 @@
title: EulerDiscreteScheduler
- local: api/schedulers/flow_match_euler_discrete
title: FlowMatchEulerDiscreteScheduler
- local: api/schedulers/flow_match_heun_discrete
title: FlowMatchHeunDiscreteScheduler
- local: api/schedulers/heun
title: HeunDiscreteScheduler
- local: api/schedulers/ipndm
16 changes: 11 additions & 5 deletions docs/diffusers/api/loaders/lora.md
@@ -12,16 +12,22 @@ specific language governing permissions and limitations under the License.

# LoRA

LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MBs) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, text encoder or both. There are two classes for loading LoRA weights:
LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MBs) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the denoiser, text encoder or both. The denoiser usually corresponds to a UNet (`UNet2DConditionModel`, for example) or a Transformer (`SD3Transformer2DModel`, for example). There are several classes for loading LoRA weights:

- `LoraLoaderMixin` provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model.
- `StableDiffusionXLLoraLoaderMixin` is a [Stable Diffusion (SDXL)](../../api/pipelines/stable_diffusion/stable_diffusion_xl.md) version of the `LoraLoaderMixin` class for loading and saving LoRA weights. It can only be used with the SDXL model.
- `StableDiffusionLoraLoaderMixin` provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model.
- `StableDiffusionXLLoraLoaderMixin` is a [Stable Diffusion (SDXL)](../../api/pipelines/stable_diffusion/stable_diffusion_xl.md) version of the `StableDiffusionLoraLoaderMixin` class for loading and saving LoRA weights. It can only be used with the SDXL model.
- `SD3LoraLoaderMixin` provides similar functions for [Stable Diffusion 3](../../api/pipelines/stable_diffusion/stable_diffusion_3.md)
- `LoraBaseMixin` provides a base class with several utility methods to fuse, unfuse, and unload LoRAs, and more.
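
As a rough illustration of why LoRA checkpoints stay small (a back-of-the-envelope sketch with hypothetical layer shapes and rank, not mindone code):

```python
# LoRA replaces an update to a frozen d x k weight matrix with two
# low-rank factors, A (d x r) and B (r x k), so only r * (d + k)
# parameters are trained instead of d * k.
d, k, r = 768, 768, 8          # hypothetical attention projection, rank 8
full_params = d * k            # parameters of the full weight matrix
lora_params = r * (d + k)      # parameters LoRA actually trains
print(full_params, lora_params, lora_params / full_params)
```

At rank 8 the trained-parameter count is roughly 2% of the full matrix, which is why summing this over the targeted layers yields files on the order of ~100 MB rather than several GB.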

!!! tip

To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters.md#lora) loading guide.


::: mindone.diffusers.loaders.lora.LoraLoaderMixin
::: mindone.diffusers.loaders.lora_pipeline.StableDiffusionLoraLoaderMixin

::: mindone.diffusers.loaders.lora.StableDiffusionXLLoraLoaderMixin
::: mindone.diffusers.loaders.lora_pipeline.StableDiffusionXLLoraLoaderMixin

::: mindone.diffusers.loaders.lora_pipeline.SD3LoraLoaderMixin

::: mindone.diffusers.loaders.lora_base.LoraBaseMixin
2 changes: 1 addition & 1 deletion docs/diffusers/api/loaders/peft.md
@@ -12,7 +12,7 @@ specific language governing permissions and limitations under the License.

# PEFT

Diffusers supports loading adapters such as [LoRA](../../using-diffusers/loading_adapters.md) with the [PEFT](https://huggingface.co/docs/peft/index) library with the [`loaders.peft.PeftAdapterMixin`](peft.md#mindone.diffusers.loaders.peft.PeftAdapterMixin) class. This allows modeling classes in Diffusers like [`UNet2DConditionModel`](../models/unet2d-cond.md#unet2dconditionmodel) to load an adapter.
Diffusers supports loading adapters such as [LoRA](../../using-diffusers/loading_adapters.md) with the [PEFT](https://huggingface.co/docs/peft/index) library with the [`loaders.peft.PeftAdapterMixin`](peft.md#mindone.diffusers.loaders.peft.PeftAdapterMixin) class. This allows modeling classes in Diffusers like [`UNet2DConditionModel`](../models/unet2d-cond.md#unet2dconditionmodel) and [`SD3Transformer2DModel`](../models/sd3_transformer2d.md#sd3-transformer-model) to operate with an adapter.

!!! tip

7 changes: 6 additions & 1 deletion docs/diffusers/api/loaders/single_file.md
@@ -23,6 +23,9 @@ The [`from_single_file`](single_file.md#mindone.diffusers.loaders.single_file.Fr

## Supported pipelines

- [`CogVideoXPipeline`](../pipelines/cogvideox.md)
- [`CogVideoXImageToVideoPipeline`](../pipelines/cogvideox.md)
- [`CogVideoXVideoToVideoPipeline`](../pipelines/cogvideox.md)
- [`StableDiffusionPipeline`](../pipelines/stable_diffusion/text2img.md)
- [`StableDiffusionImg2ImgPipeline`](../pipelines/stable_diffusion/text2img.md)
- [`StableDiffusionInpaintPipeline`](../pipelines/stable_diffusion/text2img.md)
@@ -44,9 +47,11 @@ The [`from_single_file`](single_file.md#mindone.diffusers.loaders.single_file.Fr
## Supported models

- [`UNet2DConditionModel`](../models/unet2d-cond.md)
- [`StableCascadeUNet`]()
- [`StableCascadeUNet`](../models/stable_cascade_unet.md)
- [`AutoencoderKL`](../models/autoencoderkl.md)
- [`AutoencoderKLCogVideoX`](../models/autoencoderkl_cogvideox.md)
- [`ControlNetModel`](../models/controlnet.md)
- [`SD3Transformer2DModel`](../models/sd3_transformer2d.md)
- [`FluxTransformer2DModel`](../models/flux_transformer.md)

::: mindone.diffusers.loaders.single_file.FromSingleFileMixin
2 changes: 1 addition & 1 deletion docs/diffusers/api/loaders/unet.md
@@ -12,7 +12,7 @@ specific language governing permissions and limitations under the License.

# UNet

Some training methods - like LoRA and Custom Diffusion - typically target the UNet's attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model's parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you're *only* loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the [`load_lora_weights`](lora.md#mindone.diffusers.loaders.lora.LoraLoaderMixin.load_lora_weights) function instead.
Some training methods - like LoRA and Custom Diffusion - typically target the UNet's attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model's parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you're *only* loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the [`load_lora_weights`](lora.md#mindone.diffusers.loaders.lora_pipeline.StableDiffusionLoraLoaderMixin.load_lora_weights) function instead.

The [`UNet2DConditionLoadersMixin`](unet.md#mindone.diffusers.loaders.unet.UNet2DConditionLoadersMixin) class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters.

29 changes: 29 additions & 0 deletions docs/diffusers/api/models/autoencoderkl_cogvideox.md
@@ -0,0 +1,29 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->

# AutoencoderKLCogVideoX

The 3D variational autoencoder (VAE) model with KL loss used in [CogVideoX](https://github.com/THUDM/CogVideo) was introduced in [CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer](https://github.com/THUDM/CogVideo/blob/main/resources/CogVideoX.pdf) by Tsinghua University & ZhipuAI.

The model can be loaded with the following code snippet.

```python
import mindspore
from mindone.diffusers import AutoencoderKLCogVideoX

vae = AutoencoderKLCogVideoX.from_pretrained("THUDM/CogVideoX-2b", subfolder="vae", mindspore_dtype=mindspore.float16)
```


::: mindone.diffusers.models.autoencoders.autoencoder_kl_cogvideox.AutoencoderKLCogVideoX

::: mindone.diffusers.models.autoencoders.autoencoder_kl.AutoencoderKLOutput

::: mindone.diffusers.models.autoencoders.vae.DecoderOutput
27 changes: 27 additions & 0 deletions docs/diffusers/api/models/cogvideox_transformer3d.md
@@ -0,0 +1,27 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->

# CogVideoXTransformer3DModel

A Diffusion Transformer model for 3D data from [CogVideoX](https://github.com/THUDM/CogVideo) was introduced in [CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer](https://github.com/THUDM/CogVideo/blob/main/resources/CogVideoX.pdf) by Tsinghua University & ZhipuAI.

The model can be loaded with the following code snippet.

```python
import mindspore
from mindone.diffusers import CogVideoXTransformer3DModel

transformer = CogVideoXTransformer3DModel.from_pretrained("THUDM/CogVideoX-2b", subfolder="transformer", mindspore_dtype=mindspore.float16)
```


::: mindone.diffusers.models.transformers.cogvideox_transformer_3d.CogVideoXTransformer3DModel

::: mindone.diffusers.models.modeling_outputs.Transformer2DModelOutput
18 changes: 18 additions & 0 deletions docs/diffusers/api/models/flux_transformer.md
@@ -0,0 +1,18 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# FluxTransformer2DModel

A Transformer model for image-like data from [Flux](https://blackforestlabs.ai/announcing-black-forest-labs/).


::: mindone.diffusers.models.transformers.transformer_flux.FluxTransformer2DModel
18 changes: 18 additions & 0 deletions docs/diffusers/api/models/stable_cascade_unet.md
@@ -0,0 +1,18 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# StableCascadeUNet

A UNet model from the [Stable Cascade pipeline](../pipelines/stable_cascade.md).


::: mindone.diffusers.models.unets.unet_stable_cascade.StableCascadeUNet
2 changes: 1 addition & 1 deletion docs/diffusers/api/pipelines/animatediff.md
@@ -26,7 +26,7 @@ The abstract of the paper is the following:
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---|
| [AnimateDiffPipeline](https://github.com/mindspore-lab/mindone/blob/master/mindone/diffusers/pipelines/animatediff/pipeline_animatediff.py) | *Text-to-Video Generation with AnimateDiff* |
| [AnimateDiffSDXLPipeline](https://github.com/mindspore-lab/mindone/blob/master/mindone/diffusers/pipelines/animatediff/pipeline_animatediff_sdxl.py) | *Video-to-Video Generation with AnimateDiff* |
| [AnimateDiffVideoToVideoPipeline](https://github.com/mindspore-lab/mindone/blob/master/mindone/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py) | *Video-to-Video Generation with AnimateDiff* |
| [AnimateDiffVideoToVideoPipeline](https://github.com/mindspore-lab/mindone/blob/master/mindone/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py) | *Video-to-Video Generation with AnimateDiff* |

## Available checkpoints

77 changes: 77 additions & 0 deletions docs/diffusers/api/pipelines/cogvideox.md
@@ -0,0 +1,77 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-->

# CogVideoX

[CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer](https://arxiv.org/abs/2408.06072) from Tsinghua University & ZhipuAI, by Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, Da Yin, Xiaotao Gu, Yuxuan Zhang, Weihan Wang, Yean Cheng, Ting Liu, Bin Xu, Yuxiao Dong, Jie Tang.

The abstract from the paper is:

*We introduce CogVideoX, a large-scale diffusion transformer model designed for generating videos based on text prompts. To efficiently model video data, we propose to leverage a 3D Variational Autoencoder (VAE) to compress videos along both spatial and temporal dimensions. To improve the text-video alignment, we propose an expert transformer with the expert adaptive LayerNorm to facilitate the deep fusion between the two modalities. By employing a progressive training technique, CogVideoX is adept at producing coherent, long-duration videos characterized by significant motion. In addition, we develop an effective text-video data processing pipeline that includes various data preprocessing strategies and a video captioning method. It significantly helps enhance the performance of CogVideoX, improving both generation quality and semantic alignment. Results show that CogVideoX demonstrates state-of-the-art performance across both multiple machine metrics and human evaluations. The model weight of CogVideoX-2B is publicly available at https://github.com/THUDM/CogVideo.*

!!! tip

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.


This pipeline was contributed by [zRzRzRzRzRzRzR](https://github.com/zRzRzRzRzRzRzR). The original codebase can be found [here](https://huggingface.co/THUDM). The original weights can be found under [hf.co/THUDM](https://huggingface.co/THUDM).

There are two models available that can be used with the text-to-video and video-to-video CogVideoX pipelines:
- [`THUDM/CogVideoX-2b`](https://huggingface.co/THUDM/CogVideoX-2b): The recommended dtype for running this model is `fp16`.
- [`THUDM/CogVideoX-5b`](https://huggingface.co/THUDM/CogVideoX-5b): The recommended dtype for running this model is `bf16`.

There is one model available that can be used with the image-to-video CogVideoX pipeline:
- [`THUDM/CogVideoX-5b-I2V`](https://huggingface.co/THUDM/CogVideoX-5b-I2V): The recommended dtype for running this model is `bf16`.

## Inference

First, load the pipeline:

```python
import mindspore
from mindone.diffusers import CogVideoXPipeline, CogVideoXImageToVideoPipeline
from mindone.diffusers.utils import export_to_video, load_image

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b")  # or "THUDM/CogVideoX-2b"
```

If you are using the image-to-video pipeline, load it as follows:

```python
pipe = CogVideoXImageToVideoPipeline.from_pretrained("THUDM/CogVideoX-5b-I2V")
```

Run inference:

```python
# CogVideoX works well with long and well-described prompts
prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance."
video = pipe(prompt=prompt, guidance_scale=6, num_inference_steps=50)[0][0]
export_to_video(video, "output.mp4", fps=8)
```
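
The `guidance_scale=6` argument controls classifier-free guidance: at each denoising step the pipeline blends a text-conditional prediction with an unconditional one. A minimal numeric sketch of that blend (not the pipeline's internal code):

```python
def cfg(uncond_pred, cond_pred, guidance_scale):
    # Classifier-free guidance: push the prediction away from the
    # unconditional output, toward the text-conditioned output.
    return uncond_pred + guidance_scale * (cond_pred - uncond_pred)

# With guidance_scale=1 the result is just the conditional prediction;
# larger values extrapolate further toward the prompt.
print(cfg(0.0, 1.0, 1.0))  # 1.0
print(cfg(0.0, 1.0, 6.0))  # 6.0
```

Higher values follow the prompt more strictly at the cost of diversity and, at extreme settings, visual artifacts.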

### Memory optimization

CogVideoX-2b requires about 19 GB of device memory to decode 49 frames (6 seconds of video at 8 FPS) with an output resolution of 720x480 (W x H), which makes it impossible to run on consumer devices or the free-tier T4 Colab. The following memory optimizations can be used to reduce the memory footprint.

- `pipe.vae.enable_tiling()`
- `pipe.vae.enable_slicing()`
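
As a quick sanity check on the numbers above (assuming the CogVideoX VAE's 4x temporal and 8x spatial compression factors; the exact ratios come from the model config):

```python
frames, fps = 49, 8
height, width = 480, 720

duration_s = frames / fps                      # ~6 seconds of video
latent_frames = (frames - 1) // 4 + 1          # 4x temporal compression, first frame kept
latent_h, latent_w = height // 8, width // 8   # 8x spatial compression

print(duration_s, latent_frames, latent_h, latent_w)
```

Decoding expands the 13 latent frames back to all 49 output frames at once, which is where the memory peak occurs; tiling and slicing break that decode into smaller pieces.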


::: mindone.diffusers.CogVideoXPipeline

::: mindone.diffusers.CogVideoXImageToVideoPipeline

::: mindone.diffusers.CogVideoXVideoToVideoPipeline

::: mindone.diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput
2 changes: 1 addition & 1 deletion docs/diffusers/api/pipelines/controlnet_sd3.md
@@ -26,7 +26,7 @@ This code is implemented by [The InstantX Team](https://huggingface.co/InstantX)

!!! tip

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading.md#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.


::: mindone.diffusers.StableDiffusion3ControlNetPipeline
