From ee4be520664cc97af96eb730d9083b3586e913eb Mon Sep 17 00:00:00 2001
From: Anshuman Mishra <51750587+shivance@users.noreply.github.com>
Date: Sun, 1 Oct 2023 10:58:50 +0530
Subject: [PATCH] fix a minor typo (#474)

---
 colabs/diffusers/sdxl-text-to-image.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/colabs/diffusers/sdxl-text-to-image.ipynb b/colabs/diffusers/sdxl-text-to-image.ipynb
index 65663461..3050b661 100644
--- a/colabs/diffusers/sdxl-text-to-image.ipynb
+++ b/colabs/diffusers/sdxl-text-to-image.ipynb
@@ -148,7 +148,7 @@
     "\n",
     "1. We define the base diffusion pipeline using `diffusers.DiffusionPipeline` and load the pre-trained weights for SDXL 1.0 by calling the `from_pretrained` function on it. We also pass the scheduler as `diffusers.EulerDiscreteScheduler` in this step.\n",
     "\n",
-    "2. In case we don't have a GPU with large enough GPU, it's recommended to enable CPU offloading. Otherwise, we load the model on the GPU. In case you're curious how HiggingFace manages CPU offloading in the most optimized manner, we recommend you read this port by [Sylvain Gugger](https://huggingface.co/sgugger): [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models).\n",
+    "2. In case we don't have a GPU with large enough memory, it's recommended to enable CPU offloading. Otherwise, we load the model on the GPU. In case you're curious how HuggingFace manages CPU offloading in the most optimized manner, we recommend you read this post by [Sylvain Gugger](https://huggingface.co/sgugger): [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models).\n",
     "\n",
     "3. We can compile the model using `torch.compile`, which might give a significant speedup.\n",
     "\n",
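
For reference, steps 1-3 in the edited cell correspond roughly to the setup below. This is a minimal sketch, not code taken from the notebook itself: the checkpoint id "stabilityai/stable-diffusion-xl-base-1.0", the fp16 loading options, the low_gpu_memory flag, and the "reduce-overhead" compile mode are all assumptions for illustration.

    # Minimal sketch of the three steps described in the edited cell.
    import torch
    from diffusers import DiffusionPipeline, EulerDiscreteScheduler

    # Assumed checkpoint id for SDXL 1.0 base (not named in the patch).
    model_id = "stabilityai/stable-diffusion-xl-base-1.0"

    # Step 1: load the scheduler, then the pre-trained pipeline with it.
    scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
    pipe = DiffusionPipeline.from_pretrained(
        model_id,
        scheduler=scheduler,
        torch_dtype=torch.float16,  # assumed: half precision to reduce memory use
        variant="fp16",
        use_safetensors=True,
    )

    # Step 2: enable CPU offloading when GPU memory is tight;
    # otherwise move the whole pipeline to the GPU.
    low_gpu_memory = False  # hypothetical flag for illustration
    if low_gpu_memory:
        pipe.enable_model_cpu_offload()
    else:
        pipe.to("cuda")

    # Step 3: optionally compile the UNet; this might give a significant speedup.
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

    image = pipe(prompt="a photo of an astronaut riding a horse").images[0]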