Merge changes #144

Merged 30 commits on Feb 15, 2024. Changes shown are from all commits.

Commits (30)
- 98b6bee [Community Pipeline][Bug Fix] marigold_depth_estimation: input image … (markkua, Feb 9, 2024)
- ca9ed5e [LoRA] deprecate certain lora methods from the old backend. (#6889) (sayakpaul, Feb 9, 2024)
- 7c8cab3 post release 0.26.2 (#6885) (sayakpaul, Feb 9, 2024)
- 0071478 allow attention processors to have different signatures (#6915) (yiyixuxu, Feb 10, 2024)
- f275625 Fix a bug in `AutoPipeline.from_pipe` when switching pipeline with op… (yiyixuxu, Feb 10, 2024)
- 35fd84b Replace hardcoded values in SchedulerCommonTest with properties (#5479) (dg845, Feb 11, 2024)
- 8772496 [Model Card] standardize T2I model card (#6939) (cosmo3769, Feb 12, 2024)
- 06a042c [Model Card] standardize T2I Lora model card (#6940) (cosmo3769, Feb 12, 2024)
- 6f33665 [Model Card] standardize T2I Sdxl model card (#6942) (cosmo3769, Feb 12, 2024)
- 84905ca Update PixArt Alpha test module to match src module (#6943) (DN6, Feb 12, 2024)
- e1bdcc7 [Model Card] standardize T2I Sdxl Lora model card (#6944) (cosmo3769, Feb 12, 2024)
- 9254d1f Pass device to enable_model_cpu_offload in maybe_free_model_hooks (#6… (Disty0, Feb 12, 2024)
- 215e680 Unpin torch versions in CI (#6945) (DN6, Feb 12, 2024)
- 75aee39 [Model Card] standardize T2I Adapter Sdxl model card (#6947) (cosmo3769, Feb 12, 2024)
- 371f765 [Diffusers -> Original SD conversion] fix things (#6933) (sayakpaul, Feb 12, 2024)
- 0a1daad [docs] Community pipelines (#6929) (stevhliu, Feb 12, 2024)
- 4b89aef [Type annotations] fixed in save_model_card (#6948) (cosmo3769, Feb 13, 2024)
- e7696e2 Updated lora inference instructions (#6913) (AlexUmnov, Feb 13, 2024)
- a326d61 Fix configuring VAE from single file mixin (#6950) (DN6, Feb 13, 2024)
- 9ea62d1 [DPMSolverSinglestepScheduler] correct `get_order_list` for `solver_o… (yiyixuxu, Feb 13, 2024)
- 30bcda7 Fix flaky IP Adapter test (#6960) (DN6, Feb 13, 2024)
- 40dd9cb Move SDXL T2I Adapter lora test into PEFT workflow (#6965) (DN6, Feb 13, 2024)
- 3cf4f9c Allow passing `config_file` argument to ControlNetModel when using `f… (DN6, Feb 13, 2024)
- 0ca7b68 [`PEFT` / `docs`] Add a note about torch.compile (#6864) (younesbelkada, Feb 14, 2024)
- 4343ce2 [Core] Harmonize single file ckpt model loading (#6971) (sayakpaul, Feb 14, 2024)
- 37b0951 fix: controlnet inpaint single file. (#6975) (sayakpaul, Feb 14, 2024)
- 9efe1e5 [docs] IP-Adapter (#6897) (stevhliu, Feb 14, 2024)
- 2e387da fix IPAdapter unload_ip_adapter test (#6972) (yiyixuxu, Feb 15, 2024)
- 8f2c7b4 [advanced sdxl lora script] - fix #6967 bug when using prior preserva… (linoytsaban, Feb 15, 2024)
- e6d1728 [IP Adapters] feat: allow low_cpu_mem_usage in ip adapter loading (#6… (sayakpaul, Feb 15, 2024)
13 changes: 0 additions & 13 deletions .github/workflows/pr_tests.yml
@@ -34,11 +34,6 @@ jobs:
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_cpu_models_schedulers
- name: LoRA
framework: lora
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_cpu_lora
- name: Fast Flax CPU tests
framework: flax
runner: docker-cpu
@@ -94,14 +89,6 @@ jobs:
--make-reports=tests_${{ matrix.config.report }} \
tests/models tests/schedulers tests/others

- name: Run fast PyTorch LoRA CPU tests
if: ${{ matrix.config.framework == 'lora' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx and not Dependency" \
--make-reports=tests_${{ matrix.config.report }} \
tests/lora

- name: Run fast Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
run: |
6 changes: 3 additions & 3 deletions docker/diffusers-pytorch-compile-cuda/Dockerfile
@@ -26,9 +26,9 @@ ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3.9 -m pip install --no-cache-dir --upgrade pip && \
python3.9 -m pip install --no-cache-dir \
-torch==2.1.2 \
-torchvision==0.16.2 \
-torchaudio==2.1.2 \
+torch \
+torchvision \
+torchaudio \
invisible_watermark && \
python3.9 -m pip install --no-cache-dir \
accelerate \
6 changes: 3 additions & 3 deletions docker/diffusers-pytorch-cpu/Dockerfile
@@ -25,9 +25,9 @@ ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
-torch==2.1.2 \
-torchvision==0.16.2 \
-torchaudio==2.1.2 \
+torch \
+torchvision \
+torchaudio \
invisible_watermark \
--extra-index-url https://download.pytorch.org/whl/cpu && \
python3 -m pip install --no-cache-dir \
6 changes: 3 additions & 3 deletions docker/diffusers-pytorch-cuda/Dockerfile
@@ -25,9 +25,9 @@ ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
-torch==2.1.2 \
-torchvision==0.16.2 \
-torchaudio==2.1.2 \
+torch \
+torchvision \
+torchaudio \
invisible_watermark && \
python3 -m pip install --no-cache-dir \
accelerate \
6 changes: 3 additions & 3 deletions docker/diffusers-pytorch-xformers-cuda/Dockerfile
@@ -25,9 +25,9 @@ ENV PATH="/opt/venv/bin:$PATH"
# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
-torch==2.1.2 \
-torchvision==0.16.2 \
-torchaudio==2.1.2 \
+torch \
+torchvision \
+torchaudio \
invisible_watermark && \
python3 -m pip install --no-cache-dir \
accelerate \
2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -58,6 +58,8 @@
- sections:
- local: using-diffusers/textual_inversion_inference
title: Textual inversion
- local: using-diffusers/ip_adapter
title: IP-Adapter
- local: training/distributed_inference
title: Distributed inference with multiple GPUs
- local: using-diffusers/reusing_seeds
4 changes: 2 additions & 2 deletions docs/source/en/api/loaders/ip_adapter.md
@@ -12,11 +12,11 @@ specific language governing permissions and limitations under the License.

# IP-Adapter

-[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100MBs.
+[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder.

<Tip>

-Learn how to load an IP-Adapter checkpoint and image in the [IP-Adapter](../../using-diffusers/loading_adapters#ip-adapter) loading guide.
+Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter [loading](../../using-diffusers/loading_adapters#ip-adapter) guide, and you can see how to use it in the [usage](../../using-diffusers/ip_adapter) guide.

</Tip>

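To make the linked loading guide concrete, here is a minimal sketch of loading an IP-Adapter checkpoint and prompting with an image. It is not part of this diff; the base model, adapter repository, weight name, scale value, and image URL are illustrative assumptions.

```py
# Minimal sketch (not part of this diff): load an IP-Adapter and prompt with an image.
# The base model, adapter repo/subfolder/weight name, and image URL are assumptions.
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipeline.set_ip_adapter_scale(0.6)  # how strongly the image prompt steers generation

reference = load_image("https://example.com/reference.png")  # placeholder image URL
image = pipeline(
    prompt="a photo of a cat, best quality",
    ip_adapter_image=reference,
    num_inference_steps=50,
).images[0]
```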
19 changes: 19 additions & 0 deletions docs/source/en/tutorials/using_peft_for_inference.md
@@ -165,6 +165,25 @@ list_adapters_component_wise
{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]}
```

## Compatibility with `torch.compile`

If you want to compile your model with `torch.compile`, make sure to first fuse the LoRA weights into the base model and then unload them.

```py
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")

pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0])
# Fuses the LoRAs into the Unet
pipe.fuse_lora()
pipe.unload_lora_weights()

pipe = torch.compile(pipe)

prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
```

## Fusing adapters into the model

You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the [`~diffusers.loaders.LoraLoaderMixin.fuse_lora`] method, which can lead to a speed-up in inference and lower VRAM usage.
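The fusing paragraph above (and the `fuse_lora` method it references) can be sketched as follows. This is not part of this diff; the LoRA checkpoint is reused from the earlier example and the `lora_scale` value is an illustrative assumption.

```py
# Minimal sketch (not part of this diff): fuse LoRA weights into the base model,
# run inference on the fused weights, then unfuse to restore the originals.
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")

# Merge the LoRA weights into the UNet and text encoder weights
pipe.fuse_lora(lora_scale=0.7)

image = pipe("pixel art of a corgi", num_inference_steps=30).images[0]

# Undo the merge and return to the unfused base weights
pipe.unfuse_lora()
```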
54 changes: 54 additions & 0 deletions docs/source/en/using-diffusers/custom_pipeline_overview.md
@@ -56,6 +56,60 @@ pipeline = DiffusionPipeline.from_pretrained(
)
```

### Load from a local file

Community pipelines can also be loaded from a local directory if you pass its path instead. The directory must contain a `pipeline.py` file that defines the pipeline class in order to load successfully.

```py
pipeline = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
custom_pipeline="./path/to/pipeline_directory/",
clip_model=clip_model,
feature_extractor=feature_extractor,
use_safetensors=True,
)
```
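
To illustrate what the `pipeline.py` file mentioned above must contain, here is a minimal, hypothetical skeleton (not part of this diff); the class name, registered modules, and call signature are assumptions for illustration only.

```py
# pipeline.py (hypothetical skeleton, not part of this diff): the local pipeline
# directory must define a pipeline class that from_pretrained can import and build.
import torch
from diffusers import DiffusionPipeline


class MyCommunityPipeline(DiffusionPipeline):
    def __init__(self, vae, text_encoder, tokenizer, unet, scheduler, clip_model, feature_extractor):
        super().__init__()
        # Register components so they are tracked, saved, and moved with the pipeline
        self.register_modules(
            vae=vae,
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            unet=unet,
            scheduler=scheduler,
            clip_model=clip_model,
            feature_extractor=feature_extractor,
        )

    @torch.no_grad()
    def __call__(self, prompt, num_inference_steps=50, **kwargs):
        # ... a custom denoising loop would go here ...
        raise NotImplementedError("Illustrative skeleton only")
```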

### Load from a specific version

By default, community pipelines are loaded from the latest stable version of Diffusers. To load a community pipeline from another version, use the `custom_revision` parameter.

<hfoptions id="version">
<hfoption id="main">

For example, to load from the `main` branch:

```py
pipeline = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
custom_pipeline="clip_guided_stable_diffusion",
custom_revision="main",
clip_model=clip_model,
feature_extractor=feature_extractor,
use_safetensors=True,
)
```

</hfoption>
<hfoption id="older version">

For example, to load from a previous version of Diffusers like `v0.25.0`:

```py
pipeline = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
custom_pipeline="clip_guided_stable_diffusion",
custom_revision="v0.25.0",
clip_model=clip_model,
feature_extractor=feature_extractor,
use_safetensors=True,
)
```

</hfoption>
</hfoptions>


For more information about community pipelines, take a look at the [Community pipelines](custom_pipeline_examples) guide for how to use them, and if you're interested in adding a community pipeline, check out the [How to contribute a community pipeline](contribute_pipeline) guide!

## Community components