Fix RLOO checkpointing #2114
Conversation
Nice, thanks @bartoszzuk. Without your fix, does it cause any error when running RLOO?
Make sure to run `pre-commit` as well.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@qgallouedec Yes, when using `transformers>=4.45.0` saving a checkpoint fails with `KeyError: 'TrainerControl'`, because `stateful_callbacks` is not passed to `OnlineTrainerState` and the `TrainerControl` object is missing from the saved state. Sorry, totally forgot about pre-commit!
Thanks. I'm not sure I understand why this failure mode doesn't break our CI.
I am not sure if this is related, but I have observed strange behaviour in RLOO checkpointing. For example, I set it to checkpoint every 500 steps and it follows that for a while, but at some point it starts generating checkpoints every 2 steps. Is this intended behavior?
Hi @bartoszzuk, thanks for the fix! Would you mind writing a regression test in `test_rloo_trainer.py` that fails on `main` but passes on your branch? That would help ensure future code changes don't accidentally reintroduce the bug.
Hey @lewtun, sorry for the late response. I added a simple regression test for RLOO checkpointing. Hopefully it follows the conventions found in the other tests (let me know if any improvements are required). The test should fail on `main` with the `KeyError` described below and pass on this branch.
I also changed the tokenizer so that it matches the SFT and reward models.
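For readers following along, here is a minimal sketch of what such a regression test might look like. It is not the exact test merged in the PR: the tiny test model, the in-memory dataset, and the config values are illustrative assumptions, and the `tokenizer` argument was being renamed to `processing_class` around this time (see the "processing class" commit below).

```python
import os
import tempfile

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

from trl import RLOOConfig, RLOOTrainer


def test_rloo_checkpointing():
    # Tiny GPT-2 test model used elsewhere in the TRL test suite (assumed here).
    model_id = "trl-internal-testing/dummy-GPT2-correct-vocab"
    tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
    tokenizer.pad_token = tokenizer.eos_token

    # Use the same tokenizer/vocabulary for policy, reference policy and reward
    # model, mirroring the "tokenizer matches SFT and Reward model" change above.
    policy = AutoModelForCausalLM.from_pretrained(model_id)
    ref_policy = AutoModelForCausalLM.from_pretrained(model_id)
    reward_model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)

    # Tiny pre-tokenized prompt dataset; RLOOTrainer expects an "input_ids" column.
    prompts = ["Hello there", "The quick brown fox", "TRL is", "Checkpoints should"]
    dataset = Dataset.from_dict({"input_ids": tokenizer(prompts).input_ids})

    with tempfile.TemporaryDirectory() as tmp_dir:
        config = RLOOConfig(
            output_dir=tmp_dir,
            per_device_train_batch_size=2,
            total_episodes=4,  # just enough for a couple of training steps
            save_steps=1,      # force a checkpoint on the very first step
            report_to="none",
        )
        trainer = RLOOTrainer(
            config=config,
            processing_class=tokenizer,
            policy=policy,
            ref_policy=ref_policy,
            reward_model=reward_model,
            train_dataset=dataset,
            eval_dataset=dataset,
        )
        # Before the fix, transformers>=4.45.0 raised KeyError: 'TrainerControl'
        # inside _save_checkpoint; after it, checkpoints are written normally.
        trainer.train()
        assert any(d.startswith("checkpoint-") for d in os.listdir(tmp_dir))
```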
LGTM, thanks a lot @bartoszzuk, I'll merge as soon as the CI passes
Thanks a lot for the fix and regression test @bartoszzuk! I think the CI is failing for some unrelated reason, so I've rerun it to be sure
Yep, not related.
* Fix RLOO checkpointing for transformers>=4.45.0
* Add missing import
* Fix pre-commit issues
* Added test for RLOO checkpointing
* Ensure that tokenizer matches SFT and Reward model
* Pre-commit formatting
* processing class

---------

Co-authored-by: Kashif Rasul <[email protected]>
Co-authored-by: Quentin Gallouédec <[email protected]>
Co-authored-by: Quentin Gallouédec <[email protected]>
This PR fixes RLOO checkpointing (in the same way as the recent fix for PPOv2 in PR #2080). It is needed after changes to the `_save_checkpoint` method introduced in `transformers v4.45.0.dev`. Specifically, saving the trainer state fails with `KeyError: 'TrainerControl'` (here is the exact line causing the issue). By passing `stateful_callbacks` to `OnlineTrainerState` explicitly, the `TrainerControl` object is stored in the state and can be properly accessed in `_save_checkpoint`.
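For reference, a minimal sketch of the change described above, assuming the trainer builds its state in `__init__` the way the other TRL online trainers do (the exact merged code may differ):

```python
from transformers.trainer_callback import ExportableState

# Inside RLOOTrainer.__init__ (sketch): pass the stateful callbacks, including
# self.control (a TrainerControl), explicitly when building the trainer state.
# transformers>=4.45.0 expects to find 'TrainerControl' among the
# stateful_callbacks when _save_checkpoint serializes the trainer state;
# without it, saving raises KeyError: 'TrainerControl'.
self.state = OnlineTrainerState(
    is_local_process_zero=self.is_local_process_zero(),
    is_world_process_zero=self.is_world_process_zero(),
    stateful_callbacks=[
        cb
        for cb in self.callback_handler.callbacks + [self.control]
        if isinstance(cb, ExportableState)
    ],
)
```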