
RuntimeError: Parent directory ./Video-ChatGPT_7B-1.1_Checkpoints/checkpoint-3000 does not exist. #106

Open
tianguang2525 opened this issue May 24, 2024 · 5 comments


@tianguang2525

When training reaches save_steps, the above error is reported.

@tianguang2525
Author

I see the mm_mlp_adapter is saved in /home/develop/fyy/Video-ChatGPT-main/Video-ChatGPT_7B-1.1_Checkpoints/mm_projector. Is there anything else that needs to be saved, or did I download the wrong package?
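
For reference, my guess (I haven't verified this against the repo code) is that the adapter-only save hook looks roughly like the sketch below, which would explain why only the projector weights show up on disk: it writes them into a shared mm_projector folder but never creates the per-step checkpoint-XXXX directory that the stock Trainer later writes the optimizer state into. The function name and layout here are assumptions, not the actual repo code.

import os
import torch

def save_mm_projector_only(model, output_dir, step):
    # Hypothetical adapter-only save: keep just the multimodal projector weights.
    weight_to_save = {k: v for k, v in model.state_dict().items()
                      if "mm_projector" in k}
    folder = os.path.join(output_dir, "mm_projector")
    os.makedirs(folder, exist_ok=True)
    torch.save(weight_to_save, os.path.join(folder, f"checkpoint-{step}.bin"))
    # Note: output_dir/checkpoint-{step}/ itself is never created here, which is
    # the parent directory the error message complains about.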

@HaotianLiu123

I also have the same problem.

@mmaaz60
Member

mmaaz60 commented Jun 28, 2024

Hi @tianguang2525 @HaotianLiu123,

Thank you for your interest in our work and apologies for the late reply. Were you able to solve the issue?

If not, please provide some more information about the issue, such as the command you are running and where exactly the error occurs. This information would help me reproduce the error on my side and provide further assistance. Thanks.

@rjccv

rjccv commented Jul 19, 2024

I had the same error.

27%|██▋       | 3000/11214 [4:55:11<13:22:58,  5.87s/it]Traceback (most recent call last):

  File "/home/usr/Video-ChatGPT/video_chatgpt/train/train_mem.py", line 9, in <module>
    train()
  File "/home/usr/Video-ChatGPT/video_chatgpt/train/train.py", line 828, in train
    trainer.train()
  File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/transformers/trainer.py", line 1932, in train
    return inner_training_loop(
  File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/transformers/trainer.py", line 2345, in _inner_training_loop
    self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
  File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/transformers/trainer.py", line 2796, in _maybe_log_save_evaluate
    self._save_checkpoint(model, trial, metrics=metrics)
  File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/transformers/trainer.py", line 2879, in _save_checkpoint
    self._save_optimizer_and_scheduler(output_dir)
  File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/transformers/trainer.py", line 2995, in _save_optimizer_and_scheduler
    torch.save(self.optimizer.state_dict(), os.path.join(output_dir, OPTIMIZER_NAME))
  File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/torch/serialization.py", line 627, in save
    with _open_zipfile_writer(f) as opened_zipfile:
  File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/torch/serialization.py", line 501, in _open_zipfile_writer
    return container(name_or_buffer)
  File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/torch/serialization.py", line 472, in __init__
    super().__init__(torch._C.PyTorchFileWriter(self.name))
RuntimeError: Parent directory ./Video-ChatGPT_7B-1.1_Checkpoints_Vids/checkpoint-3000 does not exist.
 27%|██▋       | 3000/11214 [4:55:15<13:28:24,  5.91s/it]

I am not using the suggested transformers version, but rather transformers==4.42.3. I wonder if that could be causing this issue.
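
In the meantime, the traceback shows the failure is just torch.save being pointed at a checkpoint-3000 directory that was never created, so one possible workaround is to create that directory first. This is only a sketch, not an official fix; it would need to run before trainer.train() is called, e.g. near the top of train_mem.py:

import os
from transformers import Trainer

# Sketch: wrap _save_optimizer_and_scheduler (the method failing in the
# traceback above) so the checkpoint-XXXX directory exists before torch.save.
_orig_save_opt = Trainer._save_optimizer_and_scheduler

def _patched_save_opt(self, output_dir):
    if output_dir is not None:
        os.makedirs(output_dir, exist_ok=True)  # create checkpoint dir if missing
    return _orig_save_opt(self, output_dir)

Trainer._save_optimizer_and_scheduler = _patched_save_opt

This should also cover the repo's custom trainer, as long as it subclasses transformers.Trainer and doesn't override that method (the traceback suggests it doesn't).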

@rjccv

rjccv commented Jul 20, 2024

I have started a fresh environment with the original package versions from requirements.txt and still experience the same issue. This is the script I use for training.

export PYTHONPATH="./:$PYTHONPATH"
python video_chatgpt/train/train_mem.py \
--model_name_or_path /home/usr/Video-ChatGPT/LLaVA-7B-Lightening-v1-1 \
--version v1 \
--data_path /home/usr/Video-ChatGPT/qa_video.json \
--video_folder /home/usr/pkls \
--tune_mm_mlp_adapter True \
--mm_use_img_start_end \
--lazy_preprocess True \
--bf16 True \
--output_dir /home/usr/Video-ChatGPT/Video-ChatGPT_7B-1.1_Checkpoints_Vids_Start_End \
--num_train_epochs 3 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 3000 \
--save_total_limit 3 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 100 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
