This project demonstrates fine-tuning Starcoder2-3B, a code-generating LLM, on proprietary code (think of a company's internal codebase) so that it better aligns with internal coding standards and makes use of specialized in-house libraries. Given the substantial size of such models, full fine-tuning can be prohibitively demanding on compute. Instead, we use QLoRA, PEFT, and bitsandbytes to fine-tune the model effectively on a single GPU, making the approach practical for resource-limited environments.
Finetuning Starcoder2-3B for Code Completion on a single A100 GPU
jordandeklerk/Starcoder2-Finetune-Code-Completion