Replies: 1 comment
- https://github.com/Prsaro/lora_dualnetwork/
I am running into a memory issue on an A100 GPU when trying to train multiple LoRAs at the same time: with three instances of the training script running concurrently, the GPU memory gets maxed out. I am wondering whether multiple LoRAs could instead be trained on top of a single shared base model; I believe that would save memory, since the frozen base weights would only need to be loaded once.

Has anyone worked with this technique and could guide me on how to implement it for the code in this repo? Specifically, I need help modifying the training code so that multiple LoRA modules are created that all share the same base weights.

I am not sure whether this is possible, or whether it already exists in the repo. PS: I am a bit of a noob at this, so please bear with me if my question seems vague or poorly formed; any guidance or advice would be much appreciated.
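For context, here is a minimal PyTorch sketch of what I have in mind. It is not based on the actual training code in this repo; the class name `MultiLoRALinear`, the layer sizes, the rank, and the placeholder loss are all just illustrative assumptions. The idea is that one frozen linear layer is shared by several independent low-rank A/B adapter pairs, and each adapter is optimized on its own batches, so the base weights live in GPU memory only once.

```python
import torch
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    """One frozen base Linear shared by several independent LoRA adapters.

    The base weight is stored once; each adapter only adds two small
    matrices (A and B), so memory grows with the adapter rank rather than
    with full extra copies of the base model.
    """
    def __init__(self, in_features, out_features, num_adapters=3, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze the shared base weights
        self.base.bias.requires_grad_(False)
        self.scaling = alpha / rank
        self.lora_A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, in_features) * 0.01) for _ in range(num_adapters)]
        )
        self.lora_B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, rank)) for _ in range(num_adapters)]
        )

    def forward(self, x, adapter_idx):
        # Shared frozen path plus the low-rank update of the selected adapter.
        delta = (x @ self.lora_A[adapter_idx].T) @ self.lora_B[adapter_idx].T
        return self.base(x) + self.scaling * delta


device = "cuda" if torch.cuda.is_available() else "cpu"
layer = MultiLoRALinear(512, 512, num_adapters=3).to(device)

# One optimizer per adapter, each updating only its own A/B matrices.
optimizers = [
    torch.optim.AdamW([layer.lora_A[i], layer.lora_B[i]], lr=1e-4)
    for i in range(3)
]

for step in range(10):
    for i, opt in enumerate(optimizers):
        x = torch.randn(4, 512, device=device)       # stand-in for a real batch
        loss = layer(x, adapter_idx=i).pow(2).mean()  # placeholder loss
        loss.backward()
        opt.step()
        opt.zero_grad()
```

In the real setup I would presumably need to do something like this for every adapted layer and feed each adapter its own dataset, but hopefully this conveys what I mean by sharing one base model across several LoRAs in a single process.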