
phi-1_5 Training memory usage question #658

Answered by mattma1970
Layoric asked this question in Q&A

I've been looking into memory usage when fine-tuning Phi-1.5 with QLoRA. Memory usage scales super-linearly with sequence length and batch size. I can fine-tune it with a sequence length of 1500 and a micro-batch size of 1, which hits around 12.6 GB of VRAM on an RTX 4090. A batch size of 2 uses 22.6 GB.

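For context, here is a minimal sketch of the kind of QLoRA setup being described, using transformers + peft + bitsandbytes. The exact script, dataset, and LoRA target modules are assumptions (they are not shown in this discussion), so treat the names as illustrative rather than the poster's actual configuration:

```python
# Minimal QLoRA fine-tuning sketch for microsoft/phi-1_5 (assumed setup;
# the original poster's exact script and dataset are not shown here).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "microsoft/phi-1_5"

# 4-bit NF4 quantization of the base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections. The target module names are an
# assumption and depend on the model revision / modeling code in use.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)

# A sequence length of 1500 and micro-batch size of 1 (e.g. max_length=1500 in
# the tokenizer/collator and per_device_train_batch_size=1 in TrainingArguments)
# corresponds to the ~12.6 GB configuration reported above.
```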
Replies: 1 comment 1 reply

Answer selected by Layoric