- create-preference-data-ollama.ipynb: A notebook that creates a synthetic dataset for preference finetuning using Llama 3.1 and Ollama (a minimal sketch of the data-generation idea follows below)
- dpo-from-scratch.ipynb: This notebook implements Direct Preference Optimization (DPO) for LLM alignment (see the loss sketch after this list)
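
The preference-data notebook queries a locally running Llama 3.1 model through Ollama to produce "chosen" and "rejected" responses for each instruction entry. The snippet below is a minimal sketch of that idea, assuming a local Ollama server and its `/api/chat` endpoint; the prompt wording and the polite/impolite pairing are illustrative assumptions and may differ from the notebook.

```python
import json
import urllib.request

def query_ollama(prompt, model="llama3.1", url="http://localhost:11434/api/chat"):
    # Send a single chat message to a locally running Ollama server
    # (assumes `ollama serve` is running and the model has been pulled).
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return the full response as one JSON object
    }
    request = urllib.request.Request(
        url, data=json.dumps(payload).encode("utf-8"), method="POST"
    )
    request.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["message"]["content"]

# Hypothetical example: build a chosen/rejected pair by asking the model to
# rewrite an instruction's answer in a more helpful and a less helpful style.
entry = {
    "instruction": "Explain what a learning rate is.",
    "output": "The learning rate controls the step size of gradient updates.",
}
chosen = query_ollama(
    f"Rewrite the following answer more politely and helpfully:\n{entry['output']}"
)
rejected = query_ollama(
    f"Rewrite the following answer in a curt, unhelpful tone:\n{entry['output']}"
)
preference_example = {**entry, "chosen": chosen, "rejected": rejected}
```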
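
DPO trains the policy model directly on such chosen/rejected pairs, without a separate reward model, by increasing the policy-vs-reference log-probability margin of the chosen response over the rejected one. Below is a minimal sketch of the standard DPO loss, assuming per-sequence log-probabilities have already been computed; it is not necessarily identical to the notebook's implementation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logprobs, policy_rejected_logprobs,
             reference_chosen_logprobs, reference_rejected_logprobs,
             beta=0.1):
    # Log-ratios of the trainable policy vs. the frozen reference model
    # for the preferred ("chosen") and dispreferred ("rejected") responses.
    chosen_logratios = policy_chosen_logprobs - reference_chosen_logprobs
    rejected_logratios = policy_rejected_logprobs - reference_rejected_logprobs

    # DPO objective: -log sigmoid(beta * (chosen_logratio - rejected_logratio)).
    # beta controls how strongly the policy may deviate from the reference.
    logits = chosen_logratios - rejected_logratios
    loss = -F.logsigmoid(beta * logits)

    # Implicit rewards, often tracked to monitor training progress.
    chosen_rewards = beta * chosen_logratios.detach()
    rejected_rewards = beta * rejected_logratios.detach()
    return loss.mean(), chosen_rewards.mean(), rejected_rewards.mean()

# Usage with made-up per-sequence log-probabilities for a batch of two pairs:
loss, reward_chosen, reward_rejected = dpo_loss(
    policy_chosen_logprobs=torch.tensor([-12.3, -10.1]),
    policy_rejected_logprobs=torch.tensor([-14.8, -11.5]),
    reference_chosen_logprobs=torch.tensor([-12.0, -10.4]),
    reference_rejected_logprobs=torch.tensor([-13.9, -11.2]),
)
```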