ValueError: No lora injected. #273

Open · moeputt opened this issue Aug 17, 2024 · 1 comment
moeputt commented Aug 17, 2024

Hello,
When I use the "default" LoRA PTI fine-tuning script from the repo homepage, I keep getting the following error:

       grad_fn=<SliceBackward0>)
Current Learned Embeddings for <s2>:, id 49409  tensor([-0.0058, -0.0201, -0.0201, -0.0131], device='cuda:0',
       grad_fn=<SliceBackward0>)
Traceback (most recent call last):
  File "/opt/conda/bin/lora_pti", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/cli_lora_pti.py", line 1040, in main
    fire.Fire(train)
  File "/opt/conda/lib/python3.10/site-packages/fire/core.py", line 143, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/opt/conda/lib/python3.10/site-packages/fire/core.py", line 477, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/opt/conda/lib/python3.10/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/cli_lora_pti.py", line 1012, in train
    perform_tuning(
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/cli_lora_pti.py", line 685, in perform_tuning
    save_all(
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/lora.py", line 1110, in save_all
    save_safeloras_with_embeds(loras, embeds, save_path)
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/lora.py", line 470, in save_safeloras_with_embeds
    extract_lora_as_tensor(model, target_replace_module)
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/lora.py", line 419, in extract_lora_as_tensor
    raise ValueError("No lora injected.")
ValueError: No lora injected.
Steps: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00,  4.77it/s, loss=0.0129, lr=0]

Does anybody know what could be causing this?

For reference, here is the training script I'm using:

export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export INSTANCE_DIR="./data/mountains"
export OUTPUT_DIR="./exps/output_dsn"

lora_pti \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --train_text_encoder \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --scale_lr \
  --learning_rate_unet=1e-4 \
  --learning_rate_text=1e-5 \
  --learning_rate_ti=5e-4 \
  --color_jitter \
  --lr_scheduler="linear" \
  --lr_warmup_steps=0 \
  --placeholder_tokens="<s1>|<s2>" \
  --use_template="style"\
  --save_steps=100 \
  --max_train_steps_ti=10 \
  --max_train_steps_tuning=10 \
  --perform_inversion=True \
  --clip_ti_decay \
  --weight_decay_ti=0.000 \
  --weight_decay_lora=0.001 \
  --continue_inversion \
  --continue_inversion_lr=1e-4 \
  --device="cuda" \
  --lora_rank=1 \
  --use_lora
#  --use_face_segmentation_condition

For reference, I've tried running this (after a clean reinstall) on two completely separate machines, each with 24 GB of GPU RAM.
Thanks!

ChrisRaynoor commented

I also encountered this issue with transformers==4.44.1.

I suspect the problem arises because newer versions of transformers changed CLIPAttention to CLIPSdpaAttention, so the LoRA injection code can no longer locate and modify that part of the model. The snippet below shows how to check which class your install uses.
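
To confirm which attention class your installed transformers actually instantiates, a quick diagnostic along these lines should work (a sketch; the model name is taken from the training script above):

from transformers import CLIPTextModel

# Load just the CLIP text encoder of the SD 1.5 checkpoint from the script.
text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="text_encoder"
)

# Collect the class names of all attention submodules.
attn_classes = {
    m.__class__.__name__
    for m in text_encoder.modules()
    if "Attention" in m.__class__.__name__
}
print(attn_classes)  # {'CLIPSdpaAttention'} on 4.44.1, {'CLIPAttention'} on 4.25.1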

There are two ways to resolve this:

  1. Add the parameter --lora_clip_target_modules="{'CLIPSdpaAttention'}" to your training script.
  2. Downgrade transformers to version 4.25.1.
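
For context, here is a minimal sketch of the class-name matching that injection relies on (illustrative, not lora_diffusion's exact code); it shows why the rename leaves nothing to inject:

import torch.nn as nn

def find_target_modules(model: nn.Module, target_replace_module: set):
    # Select submodules whose class name appears in the target set; this is
    # the kind of lookup that decides where LoRA layers get injected.
    for name, module in model.named_modules():
        if module.__class__.__name__ in target_replace_module:
            yield name, module

# On newer transformers (e.g. the 4.44.1 above) the text encoder contains
# CLIPSdpaAttention modules, so a default target set of {'CLIPAttention'}
# matches nothing, no LoRA layers are injected, and saving later fails with
# "ValueError: No lora injected.":
# list(find_target_modules(text_encoder, {"CLIPAttention"}))  # -> []

With the script above, option 1 just means appending --lora_clip_target_modules="{'CLIPSdpaAttention'}" to the lora_pti call.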
