ValueError: CLIPVisionModelWithProjection does not support device_map='auto'. To implement support, the model class needs to implement the _no_split_modules attribute.
#146 · Open · Sosycs opened this issue on Dec 19, 2023 · 6 comments
Hello, I am trying to run the code in the Colab provided. I have not changed anything in the code yet.
After I ran this part:
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "DeepFloyd/IF-I-XL-v1.0",
        text_encoder=text_encoder,  # pass the previously instantiated 8-bit text encoder
        unet=None,
        device_map="auto",
    )
I got the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-8-55e2717ce150> in <cell line: 3>()
1 from diffusers import DiffusionPipeline
2
----> 3 pipe = DiffusionPipeline.from_pretrained(
4 "DeepFloyd/IF-I-XL-v1.0",
5 text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder
3 frames
/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py in _get_no_split_modules(self, device_map)
1688 if isinstance(module, PreTrainedModel):
1689 if module._no_split_modules is None:
-> 1690 raise ValueError(
1691 f"{module.__class__.__name__} does not support `device_map='{device_map}'`. To implement support, the model "
1692 "class needs to implement the `_no_split_modules` attribute."
ValueError: CLIPVisionModelWithProjection does not support `device_map='auto'`. To implement support, the model class needs to implement the `_no_split_modules` attribute.
The code only worked after I removed device_map="auto".
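For reference, the working call is simply the same block with device_map="auto" dropped (a minimal sketch, assuming the 8-bit text_encoder instantiated earlier in the notebook):

    from diffusers import DiffusionPipeline

    # Dropping device_map="auto" avoids asking transformers to shard
    # CLIPVisionModelWithProjection, which does not define _no_split_modules.
    pipe = DiffusionPipeline.from_pretrained(
        "DeepFloyd/IF-I-XL-v1.0",
        text_encoder=text_encoder,  # previously instantiated 8-bit text encoder (from earlier cells)
        unet=None,
    )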
If anyone else runs into this, I used

    pipe = DiffusionPipeline.from_pretrained(
        "DeepFloyd/IF-I-XL-v1.0",
        text_encoder=text_encoder,  # pass the previously instantiated 8-bit text encoder
        unet=None,
        max_memory={0: "15GB", "cpu": "13GB"},
    )

for that block and it worked. I also ran into a follow-up bug where I needed to add .to("cuda") to the pipelines, since float16 is only compatible with the GPU but the weights were being placed on both GPU and CPU, causing failures. In step 1.4, use:

    pipe = DiffusionPipeline.from_pretrained(
        "DeepFloyd/IF-I-XL-v1.0",
        text_encoder=None,
        variant="fp16",
        torch_dtype=torch.float16,
    ).to("cuda")

and do the same for the other steps; you should then be able to generate an image on Colab.
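For the later stages the same pattern applies. A minimal sketch for the stage II pipeline, assuming the notebook uses the DeepFloyd/IF-II-L-v1.0 checkpoint (that model id is not stated in this thread, so adjust it to whatever your notebook loads):

    import torch
    from diffusers import DiffusionPipeline

    # Same fix as stage I: keep everything in float16 on the GPU by calling
    # .to("cuda") instead of relying on device_map.
    pipe_stage_2 = DiffusionPipeline.from_pretrained(
        "DeepFloyd/IF-II-L-v1.0",  # assumed stage II model id
        text_encoder=None,
        variant="fp16",
        torch_dtype=torch.float16,
    ).to("cuda")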