Hello,
When I use the "default" LoRA-PTI fine-tuning script from the repo homepage, I keep getting the following error:
```
grad_fn=<SliceBackward0>)
Current Learned Embeddings for <s2>:, id 49409 tensor([-0.0058, -0.0201, -0.0201, -0.0131], device='cuda:0',
       grad_fn=<SliceBackward0>)
Traceback (most recent call last):
  File "/opt/conda/bin/lora_pti", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/cli_lora_pti.py", line 1040, in main
    fire.Fire(train)
  File "/opt/conda/lib/python3.10/site-packages/fire/core.py", line 143, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/opt/conda/lib/python3.10/site-packages/fire/core.py", line 477, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/opt/conda/lib/python3.10/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/cli_lora_pti.py", line 1012, in train
    perform_tuning(
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/cli_lora_pti.py", line 685, in perform_tuning
    save_all(
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/lora.py", line 1110, in save_all
    save_safeloras_with_embeds(loras, embeds, save_path)
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/lora.py", line 470, in save_safeloras_with_embeds
    extract_lora_as_tensor(model, target_replace_module)
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/lora.py", line 419, in extract_lora_as_tensor
    raise ValueError("No lora injected.")
ValueError: No lora injected.
Steps: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.77it/s, loss=0.0129, lr=0]
```
Does anybody know what could be causing this?
For reference, here is the training script I'm using:
I also encountered the same issue with `transformers==4.44.1`.
I suspect the problem arises because newer versions of `transformers` changed `CLIPAttention` to `CLIPSdpaAttention`, so LoRA can no longer locate and modify this part of the text encoder.
There are two ways to resolve this:
Add the parameter `--lora_clip_target_modules="{'CLIPSdpaAttention'}"` to your training script.
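To see why the rename breaks injection, here is a minimal, self-contained sketch of name-based module matching (hypothetical stand-in classes and a hypothetical `find_targets` helper, not the actual `lora_diffusion` code): LoRA injection of this style searches the model for modules whose *class name* is in the target set, so when the library renames the class, the search silently finds nothing and saving later fails with "No lora injected."

```python
# Hypothetical stand-ins for the attention classes in older vs. newer transformers.
class CLIPAttention:        # class name used by older transformers releases
    pass

class CLIPSdpaAttention:    # name used by newer releases (SDPA attention backend)
    pass

def find_targets(modules, target_names):
    """Return the modules whose class name appears in target_names
    (a simplified model of name-based LoRA target matching)."""
    return [m for m in modules if type(m).__name__ in target_names]

# A newer text encoder exposes only CLIPSdpaAttention blocks.
text_encoder = [CLIPSdpaAttention(), CLIPSdpaAttention()]

# The old default target set matches nothing -> no LoRA layers are injected.
assert find_targets(text_encoder, {"CLIPAttention"}) == []

# Widening the target set (as the flag above does) finds the blocks again.
assert len(find_targets(text_encoder, {"CLIPAttention", "CLIPSdpaAttention"})) == 2
```

The same logic explains why training appears to run fine (the embeddings still optimize) but the final save step raises: nothing was ever injected, so there is nothing to extract.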
For reference, I've tried running this (and reinstalling) on two completely separate machines, each with 24 GB of GPU RAM.
Thanks!