Commit

Merge branch 'main' into main
qgallouedec authored Oct 7, 2024
2 parents d9db42a + adf58d8 commit 996face
Showing 26 changed files with 217 additions and 75 deletions.
31 changes: 14 additions & 17 deletions CONTRIBUTING.md
@@ -20,7 +20,7 @@ There are several ways you can contribute to TRL:
* Fix outstanding issues with the existing code.
* Submit issues related to bugs or desired new features.
* Implement trainers for new post-training algorithms.
* Contribute to the examples or to the documentation.
* Contribute to the examples or the documentation.

If you don't know where to start, there is a special [Good First
Issue](https://github.com/huggingface/trl/contribute) listing. It will give you a list of
@@ -74,19 +74,19 @@ If there is a new feature you'd like to see in TRL, please open an issue and des
Whatever it is, we'd love to hear about it!

2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better we'll be able to help you.
3. Provide a *code snippet* that demonstrates the features usage.
3. Provide a *code snippet* that demonstrates the feature's usage.
4. If the feature is related to a paper, please include a link.

If your issue is well written, we're already 80% of the way there by the time you create it.
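
For example, a snippet for item 3 might look like the following (a minimal sketch: the `loss_type` value in the commented-out line stands in for the hypothetical feature being requested, everything else uses the existing API):

```python
from trl import DPOConfig

# Works today with the existing API:
config = DPOConfig(output_dir="dpo-model", loss_type="sigmoid")

# Requested (hypothetical): a new loss type that is not yet supported
# config = DPOConfig(output_dir="dpo-model", loss_type="my_new_loss")
```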

## Do you want to implement a new trainer?

New post-training methods are published on a frequent basis and those which satisfy the following criteria are good candidates to be integrated in TRL:
New post-training methods are published frequently and those that satisfy the following criteria are good candidates to be integrated into TRL:

* **Simplicity:** does the new method achieve similar performance as prior methods, but with less complexity? A good example is Direct Preference Optimization (DPO) [[Rafailov et al, 2023]](https://huggingface.co/papers/2305.18290), which provided a simpler and compelling alternative to RLHF methods.
* **Efficiency:** does the new method provide a significant improvement in training efficiency? A good example is Odds Ratio Preference Optimization (ORPO) [[Hong et al, 2023]](https://huggingface.co/papers/2403.07691), which utilises a similar objective as DPO, but requires half the GPU VRAM.
* **Simplicity:** Does the new method achieve similar performance as prior methods, but with less complexity? A good example is Direct Preference Optimization (DPO) [[Rafailov et al, 2023]](https://huggingface.co/papers/2305.18290), which provided a simpler and compelling alternative to RLHF methods.
* **Efficiency:** Does the new method provide a significant improvement in training efficiency? A good example is Odds Ratio Preference Optimization (ORPO) [[Hong et al, 2023]](https://huggingface.co/papers/2403.07691), which utilizes a similar objective as DPO but requires half the GPU VRAM.

Methods which only provide incremental improvements at the expense of added complexity or compute costs are unlikely to be included in TRL.
Methods that only provide incremental improvements at the expense of added complexity or compute costs are unlikely to be included in TRL.

If you want to implement a trainer for a new post-training method, first open an issue and provide the following information:

@@ -102,7 +102,7 @@ Based on the community and maintainer feedback, the next step will be to impleme

## Do you want to add documentation?

We're always looking for improvements to the documentation that make it more clear and accurate. Please let us know how the documentation can be improved, such as typos, dead links and any missing, unclear or inaccurate content.. We'll be happy to make the changes or help you make a contribution if you're interested!
We're always looking for improvements to the documentation that make it clearer and more accurate. Please let us know how the documentation can be improved, such as typos, dead links, and any missing, unclear, or inaccurate content. We'll be happy to make the changes or help you contribute if you're interested!

## Submitting a pull request (PR)

@@ -133,7 +133,7 @@ Follow these steps to start contributing:

3. Create a new branch to hold your development changes, and do this for every new PR you work on.

Start by synchronizing your `main` branch with the `upstream/main` branch (ore details in the [GitHub Docs](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork)):
Start by synchronizing your `main` branch with the `upstream/main` branch (more details in the [GitHub Docs](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork)):

```bash
$ git checkout main
@@ -204,7 +204,7 @@ Follow these steps to start contributing:
Please write [good commit messages](https://chris.beams.io/posts/git-commit/).

It is a good idea to sync your copy of the code with the original
repository regularly. This way you can quickly account for changes:
repository regularly. This way you can quickly account for changes:

```bash
$ git fetch upstream
@@ -221,10 +221,7 @@ Follow these steps to start contributing:
webpage of your fork on GitHub. Click on 'Pull request' to send your changes
to the project maintainers for review.

7. It's ok if maintainers ask you for changes. It happens to core contributors
too! So everyone can see the changes in the Pull request, work in your local
branch and push the changes to your fork. They will automatically appear in
the pull request.
7. It's ok if maintainers ask you for changes. It happens to core contributors too! To ensure everyone can review your changes in the pull request, work on your local branch and push the updates to your fork. They will automatically appear in the pull request.
### Checklist
@@ -245,14 +242,14 @@ Follow these steps to start contributing:
An extensive test suite is included to test the library behavior and several examples. Library tests can be found in
the [tests folder](https://github.com/huggingface/trl/tree/main/tests).
We use `pytest` in order to run the tests. From the root of the
repository, here's how to run tests with `pytest` for the library:
We use `pytest` to run the tests. From the root of the
repository, here's how to run tests with `pytest` for the library:

```bash
$ python -m pytest -sv ./tests
```

In fact, that's how `make test` is implemented (sans the `pip install` line)!
That's how `make test` is implemented (sans the `pip install` line)!
You can specify a smaller set of tests in order to test only the feature
You can specify a smaller set of tests to test only the feature
you're working on.
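
For example, pointing `pytest` at a single file with `python -m pytest -sv ./tests/test_utils.py`, or filtering by test name with `-k "dpo"`, runs only the matching tests.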
16 changes: 8 additions & 8 deletions README.md
@@ -23,16 +23,16 @@

## What is it?

TRL is a library to post-train LLMs and diffusion models with methods such as Supervised Fine-tuning (SFT), Proximal Policy Optimization (PPO), and Direct Preference Optimization (DPO).
TRL is a library that post-trains LLMs and diffusion models using methods such as Supervised Fine-Tuning (SFT), Proximal Policy Optimization (PPO), and Direct Preference Optimization (DPO).

The library is built on top of [🤗 Transformers](https://github.com/huggingface/transformers) and is compatible with any model architecture available there.
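
For instance, supervised fine-tuning takes only a few lines (a minimal sketch; the model and dataset below are placeholders you would swap for your own):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Example model and dataset; any causal LM and compatible dataset from the Hub work similarly.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    args=SFTConfig(output_dir="Qwen2.5-0.5B-SFT"),
    train_dataset=dataset,
)
trainer.train()
```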


## Highlights

- **`Efficient and scalable`**:
- [🤗 Accelerate](https://github.com/huggingface/accelerate) is the backbone of TRL that model training to scale from a single GPU to a large scale multi-node cluster with methods such as DDP and DeepSpeed.
- [`PEFT`](https://github.com/huggingface/peft) is fully integrated and allows to train even the largest models on modest hardware with quantisation and methods such as LoRA or QLoRA.
- [🤗 Accelerate](https://github.com/huggingface/accelerate) is the backbone of TRL that allows model training to scale from a single GPU to a large-scale multi-node cluster with methods such as DDP and DeepSpeed.
- [`PEFT`](https://github.com/huggingface/peft) is fully integrated and allows training even the largest models on modest hardware with quantization and methods such as LoRA or QLoRA.
- [Unsloth](https://github.com/unslothai/unsloth) is also integrated and allows significantly faster training with dedicated kernels.
- **`CLI`**: With the [CLI](https://huggingface.co/docs/trl/clis) you can fine-tune and chat with LLMs without writing any code using a single command and a flexible config system.
- **`Trainers`**: The trainer classes are an abstraction to apply many fine-tuning methods with ease such as the [`SFTTrainer`](https://huggingface.co/docs/trl/sft_trainer), [`DPOTrainer`](https://huggingface.co/docs/trl/dpo_trainer), [`RewardTrainer`](https://huggingface.co/docs/trl/reward_trainer), [`PPOTrainer`](https://huggingface.co/docs/trl/ppov2_trainer), and [`ORPOTrainer`](https://huggingface.co/docs/trl/orpo_trainer).
@@ -51,7 +51,7 @@ pip install trl

### From source

If you want to use the latest features before an official release you can install from source:
If you want to use the latest features before an official release, you can install TRL from source:

```bash
pip install git+https://github.com/huggingface/trl.git
@@ -67,7 +67,7 @@ git clone https://github.com/huggingface/trl.git

## Command Line Interface (CLI)

You can use TRL Command Line Interface (CLI) to quickly get started with Supervised Fine-tuning (SFT) and Direct Preference Optimization (DPO), or vibe check your model with the chat CLI:
You can use the TRL Command Line Interface (CLI) to quickly get started with Supervised Fine-tuning (SFT) and Direct Preference Optimization (DPO), or vibe check your model with the chat CLI:

**SFT:**

@@ -178,7 +178,7 @@ trainer.train()

### `DPOTrainer`

`DPOTrainer` implements the popular [Direct Preference Optimization (DPO) algorithm](https://huggingface.co/papers/2305.18290) that was used to post-train Llama 3 and many other models. Here is a basic example on how to use the `DPOTrainer`:
`DPOTrainer` implements the popular [Direct Preference Optimization (DPO) algorithm](https://huggingface.co/papers/2305.18290) that was used to post-train Llama 3 and many other models. Here is a basic example of how to use the `DPOTrainer`:

```python
from datasets import load_dataset
@@ -195,7 +195,7 @@ trainer.train()

## Development

If you want to contribute to `trl` or customizing it to your needs make sure to read the [contribution guide](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md) and make sure you make a dev install:
If you want to contribute to `trl` or customize it to your needs, make sure to read the [contribution guide](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md) and make a dev install:

```bash
git clone https://github.com/huggingface/trl.git
@@ -214,4 +214,4 @@ make dev
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
```
26 changes: 14 additions & 12 deletions docs/source/clis.mdx
@@ -96,24 +96,26 @@ python examples/datasets/anthropic_hh.py --push_to_hub --hf_entity your-hf-org

The chat CLI lets you quickly load the model and talk to it. Simply run the following:

```bash
trl chat --model_name_or_path Qwen/Qwen1.5-0.5B-Chat
```
<pre><code>$ trl chat --model_name_or_path Qwen/Qwen1.5-0.5B-Chat
<strong><span style="color: red;">&lt;quentin_gallouedec&gt;:</span></strong>
What is the best programming language?

> [!TIP]
> To use the chat CLI with the developer installation, you must run `make dev`
>
<strong><span style="color: blue;">&lt;Qwen/Qwen1.5-0.5B-Chat&gt;:</span></strong>
There isn't a "best" programming language, as everyone has different style preferences, needs, and preferences. However, some people commonly use
languages like Python, Java, C++, and JavaScript, which are popular among developers for a variety of reasons, including readability, flexibility,
and scalability. Ultimately, it depends on personal preference, needs, and goals.
</code></pre>

Note that the chat interface relies on the tokenizer's [chat template](https://huggingface.co/docs/transformers/chat_templating) to format the inputs for the model. Make sure your tokenizer has a chat template defined.
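
If you are not sure whether a tokenizer has one, a quick check looks like this (a short sketch using the same example model as above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")

# The chat template is a Jinja string stored on the tokenizer; None means it is missing.
print(tokenizer.chat_template is not None)

# apply_chat_template turns a list of messages into the exact prompt string the model sees.
messages = [{"role": "user", "content": "What is the best programming language?"}]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```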

Besides talking to the model, there are a few commands you can use:

- **clear**: clears the current conversation and start a new one
- **example {NAME}**: load example named `{NAME}` from the config and use it as the user input
- **set {SETTING_NAME}={SETTING_VALUE};**: change the system prompt or generation settings (multiple settings are separated by a ';').
- **reset**: same as clear but also resets the generation configs to defaults if they have been changed by **set**
- **save {SAVE_NAME} (optional)**: save the current chat and settings to file by default to `./chat_history/{MODEL_NAME}/chat_{DATETIME}.yaml` or `{SAVE_NAME}` if provided
- **exit**: closes the interface
- `clear`: clears the current conversation and starts a new one
- `example {NAME}`: loads the example named `{NAME}` from the config and uses it as the user input
- `set {SETTING_NAME}={SETTING_VALUE};`: changes the system prompt or generation settings (multiple settings are separated by a `;`)
- `reset`: same as `clear`, but also resets the generation configs to defaults if they have been changed by `set`
- `save` or `save {SAVE_NAME}`: saves the current chat and settings to a file, by default to `./chat_history/{MODEL_NAME}/chat_{DATETIME}.yaml`, or to `{SAVE_NAME}` if provided
- `exit`: closes the interface

The default examples are defined in `examples/scripts/config/default_chat_config.yaml` but you can pass your own with `--config CONFIG_FILE` where you can also specify the default generation parameters.

2 changes: 1 addition & 1 deletion examples/scripts/chat.py
@@ -273,7 +273,7 @@ def chat_cli():
user = args.user

model, tokenizer = load_model_and_tokenizer(args)
generation_streamer = TextIteratorStreamer(tokenizer, skip_special_tokens=True)
generation_streamer = TextIteratorStreamer(tokenizer, skip_special_tokens=True, skip_prompt=True)

pad_token_id, eos_token_ids = parse_eos_tokens(tokenizer, args.eos_tokens, args.eos_token_ids)

4 changes: 2 additions & 2 deletions scripts/log_example_reports.py
@@ -55,7 +55,7 @@ def main(text_file_name, slack_channel_name=None):
"type": "section",
"text": {
"type": "plain_text",
"text": "🔴 Something is wrong with the workflow please check ASAP!"
"text": " Something is wrong with the workflow please check ASAP!"
"Something went wrong there is no text file being produced. Please check ASAP.",
"emoji": True,
},
@@ -82,7 +82,7 @@ def main(text_file_name, slack_channel_name=None):

for test_name, failed in final_results.items():
failed_table = tabulate(
[[test_name, "🟢" if not failed else "🔴"]],
[[test_name, "" if not failed else ""]],
headers=["Test Name", "Status"],
showindex="always",
tablefmt="grid",
12 changes: 6 additions & 6 deletions tests/slow/test_dpo_slow.py
@@ -85,9 +85,9 @@ def test_dpo_bare_model(self, model_id, loss_type, pre_compute_logits):
model=model,
ref_model=None,
args=training_args,
train_dataset=self.dataset["train"],
eval_dataset=self.dataset["test"],
processing_class=tokenizer,
train_dataset=self.dataset,
eval_dataset=self.dataset,
)

# train the model
@@ -142,9 +142,9 @@ def test_dpo_peft_model(self, model_id, loss_type, pre_compute_logits, gradient_
model=model,
ref_model=None,
args=training_args,
train_dataset=self.dataset["train"],
eval_dataset=self.dataset["test"],
processing_class=tokenizer,
train_dataset=self.dataset,
eval_dataset=self.dataset,
peft_config=self.peft_config,
)

@@ -206,9 +206,9 @@ def test_dpo_peft_model_qlora(self, model_id, loss_type, pre_compute_logits, gra
model=model,
ref_model=None,
args=training_args,
train_dataset=self.dataset["train"],
eval_dataset=self.dataset["test"],
processing_class=tokenizer,
train_dataset=self.dataset,
eval_dataset=self.dataset,
peft_config=self.peft_config,
)

47 changes: 47 additions & 0 deletions tests/test_rloo_trainer.py
@@ -13,8 +13,14 @@
# limitations under the License.
import platform
import subprocess
import tempfile
import unittest

import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer

from trl import RLOOConfig, RLOOTrainer


def test():
@@ -26,6 +32,8 @@ def test():
--gradient_accumulation_steps 1 \
--total_episodes 10 \
--model_name_or_path EleutherAI/pythia-14m \
--sft_model_path EleutherAI/pythia-14m \
--reward_model_path EleutherAI/pythia-14m \
--missing_eos_penalty 1.0 \
--save_strategy no \
--stop_token eos
@@ -71,3 +79,42 @@ def test_rloo_reward():
baseline = (rlhf_reward.sum(0) - rlhf_reward) / (rloo_k - 1)
vec_advantages = rlhf_reward - baseline
torch.testing.assert_close(vec_advantages.flatten(), advantages)


class RLOOTrainerTester(unittest.TestCase):
def setUp(self):
self.sft_model_id = "trl-internal-testing/dummy-GPT2-correct-vocab"
self.reward_model_id = "trl-internal-testing/dummy-GPT2-correct-vocab"

self.policy_model = AutoModelForCausalLM.from_pretrained(self.sft_model_id)
self.reward_model = AutoModelForSequenceClassification.from_pretrained(self.reward_model_id)
self.policy_ref_model = AutoModelForCausalLM.from_pretrained(self.sft_model_id)

self.tokenizer = AutoTokenizer.from_pretrained(self.sft_model_id, padding_side="left")
self.tokenizer.chat_template = "{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}"
self.tokenizer.add_special_tokens({"pad_token": "[PAD]"})

def test_rloo_checkpoint(self):
with tempfile.TemporaryDirectory() as tmp_dir:
training_args = RLOOConfig(
output_dir=tmp_dir,
per_device_train_batch_size=2,
total_episodes=1,
report_to="none",
)

dummy_text = {"content": "Hello World!", "role": "user"}
dummy_data = self.tokenizer.apply_chat_template(dummy_text)
dummy_dataset = Dataset.from_dict({"input_ids": dummy_data})

trainer = RLOOTrainer(
config=training_args,
policy=self.policy_model,
reward_model=self.reward_model,
ref_policy=self.policy_ref_model,
processing_class=self.tokenizer,
train_dataset=dummy_dataset,
eval_dataset=dummy_dataset,
)

trainer._save_checkpoint(trainer.model, trial=None)
2 changes: 1 addition & 1 deletion tests/test_utils.py
@@ -158,7 +158,7 @@ def test_val_none(self):
model_name="my_model",
hub_model_id="username/my_hub_model",
dataset_name=None,
tags=None,
tags=[],
wandb_url=None,
trainer_name="My Trainer",
trainer_citation=None,
1 change: 1 addition & 0 deletions trl/commands/cli.py
@@ -96,6 +96,7 @@ def train(command_name):
encoding="utf-8",
cwd=os.getcwd(),
env=os.environ.copy(),
capture_output=True,
)
except (CalledProcessError, ChildProcessError) as exc:
console.log(f"TRL - {command_name.upper()} failed on ! See the logs above for further details.")
7 changes: 7 additions & 0 deletions trl/trainer/alignprop_trainer.py
@@ -415,6 +415,13 @@ def create_model_card(
else:
base_model = None

tags = tags or []
if isinstance(tags, str):
tags = [tags]

if hasattr(self.model.config, "unsloth_version"):
tags.append("unsloth")

citation = textwrap.dedent("""\
@article{prabhudesai2024aligning,
title = {{Aligning Text-to-Image Diffusion Models with Reward Backpropagation}},
