First, thank you for sharing this. I think it's a feature sorely lacking in auto1111/Forge.
It's not yet working for me though. I would appreciate your advice. I get the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
I'm using a 4090 and a 4080; in my ideal world I'd run the VAE, all CLIP models, and LoRAs on the 4080. I'm not sure whether LoRAs can even live on a different device, but I miss having the VAE/CLIPs on cuda:1.
EDIT: It works properly with T5 fp16; the error only appears when using a GGUF quant of T5. That is less crucial now that I don't need to load it on the same GPU as the UNET.
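For reference, here is a minimal sketch of the kind of mismatch that traceback usually points at: an embedding lookup (index_select under the hood) whose token indices sit on a different GPU than the embedding weights, which may be how the GGUF-quantized T5 ends up when its dequantized weights land on a different device than the tokenizer output. The module and variable names below are illustrative, not Forge's actual internals.

import torch

# Assumes two CUDA devices are visible.
# A T5-style token embedding placed on the second GPU (e.g. the 4080)...
embedding = torch.nn.Embedding(32128, 4096).to("cuda:1")

# ...while the token ids produced by the tokenizer end up on the first GPU.
token_ids = torch.tensor([[3, 58, 112]], device="cuda:0")

try:
    embedding(token_ids)  # index_select under the hood -> RuntimeError
except RuntimeError as e:
    print(e)  # "Expected all tensors to be on the same device ..."

# Fix: move the indices to the device the weights actually live on.
token_ids = token_ids.to(embedding.weight.device)
hidden = embedding(token_ids)  # works: both tensors are now on cuda:1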