diff --git a/colabs/diffusers/sdxl-text-to-image.ipynb b/colabs/diffusers/sdxl-text-to-image.ipynb
index 65663461..3050b661 100644
--- a/colabs/diffusers/sdxl-text-to-image.ipynb
+++ b/colabs/diffusers/sdxl-text-to-image.ipynb
@@ -148,7 +148,7 @@
     "\n",
     "1. We define the base diffusion pipeline using `diffusers.DiffusionPipeline` and load the pre-trained weights for SDXL 1.0 by calling the `from_pretrained` function on it. We also pass the scheduler as `diffusers.EulerDiscreteScheduler` in this step.\n",
-    "2. In case we don't have a GPU with large enough GPU, it's recommended to enable CPU offloading. Otherwise, we load the model on the GPU. In case you're curious how HiggingFace manages CPU offloading in the most optimized manner, we recommend you read this port by [Sylvain Gugger](https://huggingface.co/sgugger): [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models).\n",
+    "2. If we don't have a GPU with enough memory, it's recommended to enable CPU offloading; otherwise, we load the model onto the GPU. If you're curious how HuggingFace manages CPU offloading in the most optimized manner, we recommend reading this post by [Sylvain Gugger](https://huggingface.co/sgugger): [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models).\n",
     "\n",
     "3. We can compile model using `torch.compile`, this might give a significant speedup.\n",
     "\n",