Hackable implementation of state-of-the-art open-source large language models, released under the Apache 2.0 license:
- StabilityAI StableLM
- EleutherAI Pythia
- Together RedPajama-INCITE
This implementation builds on Lit-LLaMA and nanoGPT, and it's powered by Lightning Fabric ⚡.
Weights can be downloaded following these instructions:
This repository follows the main principle of openness through clarity.
Lit-Parrot is:
- Simple: Single-file implementation without boilerplate.
- Correct: Numerically equivalent to the original model.
- Optimized: Runs on consumer hardware or at scale.
- Open-source: No strings attached.
Avoiding code duplication is not a goal. Readability and hackability are.
Join our Discord to build high-performance, truly open-source models for the common benefit of the community.
Clone the repo
git clone https://github.com/Lightning-AI/lit-parrot
cd lit-parrot
Lit-Parrot currently relies on FlashAttention from PyTorch nightly. Until PyTorch 2.1 is released, you'll need to install the nightly build manually. Luckily, that is straightforward:
On CUDA
pip install --index-url https://download.pytorch.org/whl/nightly/cu118 --pre 'torch>=2.1.0dev'
On CPU (including Macs)
pip install --index-url https://download.pytorch.org/whl/nightly/cpu --pre 'torch>=2.1.0dev'
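Optionally, verify that the nightly build is the one being picked up (an extra check, not part of the original steps):
python -c "import torch; print(torch.__version__)"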
All good, now install the dependencies:
pip install -r requirements.txt
You are all set! 🎉
To generate text predictions, you need to download the model weights. If you don't have them, check out our guide.
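If you just want to fetch the raw checkpoint files, one option is the Hugging Face Hub; note that the weights still need to be converted to the Lit-Parrot format as described in the guide. The repo id and target directory below are illustrative assumptions:

```python
# Illustrative only: pull raw StableLM checkpoint files from the Hugging Face Hub.
# The repo_id and local_dir are example values; the downloaded weights still need
# to be converted as described in the download guide.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="stabilityai/stablelm-base-alpha-3b",
    local_dir="checkpoints/stabilityai/stablelm-base-alpha-3b",
)
```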
Run inference:
python generate.py --prompt "Hello, my name is"
This will run the 3B pre-trained model and require ~7 GB of GPU memory using the bfloat16 datatype (roughly 2 bytes per parameter for ~3 billion parameters, plus overhead).
Full guide for generating samples from the model.
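Conceptually, generate.py encodes the prompt and then runs a standard autoregressive sampling loop. Below is a minimal sketch of such a loop, not the repository's actual implementation; the model object, temperature, and top-k defaults are illustrative assumptions:

```python
import torch

@torch.no_grad()
def sample(model, idx, max_new_tokens, temperature=0.8, top_k=200):
    # idx: (1, T) tensor of token ids for the encoded prompt
    for _ in range(max_new_tokens):
        logits = model(idx)[:, -1, :] / temperature  # logits for the next token
        if top_k is not None:
            v, _ = torch.topk(logits, top_k)
            logits[logits < v[:, [-1]]] = float("-inf")  # keep only the top-k candidates
        probs = torch.softmax(logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)  # sample one token id
        idx = torch.cat((idx, next_token), dim=1)  # append it and feed the sequence back in
    return idx
```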
You can also chat with the model interactively:
python chat.py
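Under the hood, this is an input loop wrapped around the same sampling routine. A minimal sketch, assuming a loaded model and a tokenizer with encode/decode methods (both illustrative, not the repository's actual objects) and reusing the sample() helper sketched above:

```python
import torch

def chat(model, tokenizer, max_new_tokens=256):
    while True:
        prompt = input(">> Prompt: ")
        if not prompt:
            break  # empty input ends the session
        idx = torch.tensor(tokenizer.encode(prompt))[None]  # (1, T) batch of token ids
        out = sample(model, idx, max_new_tokens)  # sample() from the sketch above
        print(tokenizer.decode(out[0].tolist()))
```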
Porting from Lit-LLaMA in progress 👷
We provide a simple training script, finetune_adapter.py, that instruction-tunes a pretrained model on the Alpaca dataset.
- Download the data and generate an instruction tuning dataset:
python scripts/prepare_alpaca.py
- Run the finetuning script:
python finetune_adapter.py
The finetuning requires at least one GPU with ~12 GB of memory (e.g. an RTX 3060). It is expected that you have downloaded the pretrained weights as described above. More details about each finetuning method and how you can apply it to your own data can be found in our technical how-to guides.
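For context, instruction tuning wraps each (instruction, input, response) record in a fixed prompt template before training. The sketch below follows the original Stanford Alpaca template for illustration; the exact template and field handling in scripts/prepare_alpaca.py may differ:

```python
# Format one Alpaca-style record into a training prompt (Stanford Alpaca template,
# shown for illustration; the repository's prepare script may use a different one).
def generate_prompt(example: dict) -> str:
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input that "
            "provides further context. Write a response that appropriately completes "
            "the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        "### Response:\n"
    )
```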
These technical tutorials illustrate how to run the finetuning code.
Looking for conceptual tutorials and explanations? We have some additional articles below:
Porting from Lit-LLaMA in progress 👷
We are on a quest towards fully open source AI.
Join us and start contributing, especially in the following areas:
We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.
Unsure about contributing? Check out our Contributing to Lit-LLaMA: A Hitchhiker’s Guide to the Quest for Fully Open-Source AI guide. The same guidelines apply to Lit-Parrot.
Don't forget to join our Discord!
- @karpathy for nanoGPT
- @EleutherAI for GPT-NeoX
- @TimDettmers for bitsandbytes
- @Microsoft for LoRA
- @IST-DASLab for GPTQ
Lit-Parrot is released under the Apache 2.0 license.