Looking for LLaMA 2? #452
Comments
Hi @carmocca,
Yes, full finetuning is supported via the finetune/full.py script, given a Llama 2 model. You can also use a custom dataset, provided you prepare it in the right format.
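As a rough illustration of "preparing a custom dataset in the right format", the sketch below writes instruction-tuning records to JSON. The field names (instruction/input/output), the file path, and the JSON serialization are assumptions for illustration only; check the repo's prepare scripts for the exact format finetune/full.py expects:

```python
import json
from pathlib import Path

# Hypothetical instruction-tuning records; the exact field names the
# finetuning script expects are an assumption here, not the repo's spec.
records = [
    {
        "instruction": "Summarize the sentence.",
        "input": "LLaMA 2 weights were released by Meta AI.",
        "output": "Meta AI released LLaMA 2 weights.",
    },
    {
        "instruction": "Translate to French.",
        "input": "Hello, world.",
        "output": "Bonjour, le monde.",
    },
]

# Write the dataset to a hypothetical location.
path = Path("data/custom_dataset.json")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(records, indent=2))

# Sanity-check the round trip before pointing a finetuning script at it.
loaded = json.loads(path.read_text())
assert all({"instruction", "input", "output"} <= set(r) for r in loaded)
print(f"Wrote {len(loaded)} records to {path}")
```

Validating the round trip up front catches malformed records before a long finetuning run fails on them.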
Is this repo still intended to be supported, given that the lit-gpt repo seems to support more and newer models?
Meta AI has since released LLaMA 2. Additionally, new Apache 2.0 licensed weights are being released as part of the Open LLaMA project.
To run LLaMA 2 weights, Open LLaMA weights, or Vicuna weights (among other LLaMA-like checkpoints), check out the Lit-GPT repository.