[WIP] MLM Training Objective #680
base: main
Conversation
src/levanter/data/text.py (Outdated)

```diff
@@ -64,6 +65,65 @@
+DEFAULT_IGNORE_INDEX = -100  # Mirrors pytorch's default ignore index
+
+class MaskedLmDataset(ShardableDataset[LmExample]):
```
fyi we're gonna do a big refactor on datasets soon, but I'll either handle the refactor or guide you through it.
src/levanter/data/text.py (Outdated)

```diff
+def _create_mlm_example(tokens, key):
+    tokens_array = tokens.array
+
+    example = LmExample.causal(tokens=tokens, ignore_id=self.ignore_id)
```
you need a non-causal attention mask for RoBERTa, and you need to set a loss_mask that covers only the masked tokens.
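To illustrate the reviewer's point: unlike the causal setup, an MLM example uses a full (bidirectional) attention mask, and the loss is computed only at masked positions. The sketch below is a minimal numpy illustration, not Levanter's API; the helper name and shapes are hypothetical.

```python
import numpy as np

def mlm_masks(seq_len, masked_positions):
    """Hypothetical helper: build a non-causal attention mask and a
    loss mask that is True only at the positions that were masked."""
    # Non-causal: every position may attend to every other position
    # (contrast with a causal mask, which would be lower-triangular).
    attn_mask = np.ones((seq_len, seq_len), dtype=bool)
    # Loss only where tokens were masked out.
    loss_mask = np.zeros(seq_len, dtype=bool)
    loss_mask[list(masked_positions)] = True
    return attn_mask, loss_mask
```

A causal `LmExample` would instead carry `np.tril(np.ones((seq_len, seq_len)))` and a loss mask over (almost) all positions, which is why the causal constructor is the wrong fit here.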
you also can't use the current LmExample, actually, because you need a separate targets field (holding the non-masked tokens). With more work you could avoid the need for targets (using just the masked tokens), but it's probably better to add a `targets: Optional[NamedArray]` field to the class (or make your own class).
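A minimal sketch of what such a class could look like, assuming a plain numpy stand-in rather than Levanter's `NamedArray` (the class and method names here are hypothetical, not part of the PR):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MlmExample:
    """Hypothetical MLM example: masked inputs plus original targets."""
    tokens: np.ndarray     # input ids after masking ([MASK]/random/kept)
    targets: np.ndarray    # original ids; ignore_index where no loss applies
    loss_mask: np.ndarray  # True only at masked positions

    @staticmethod
    def from_masking(original, masked, mask, ignore_index=-100):
        # Keep the original token as the target at masked positions;
        # everywhere else, write the ignore index so the loss skips it.
        targets = np.where(mask, original, ignore_index)
        return MlmExample(tokens=masked, targets=targets, loss_mask=mask)
```

The key design point is that `tokens` and `targets` diverge only at masked positions, so the loss can compare predictions against the pre-masking ids.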
Introduces train_mlm.py, a new file adapted from train_lm.py, to support masked language modeling with dynamic masking as used in RoBERTa. A new class, MaskedLMDataset, has been implemented in text.py to handle the dynamic masking; it is instantiated and used within train_mlm.py, which preserves the structural and sharding-related comments from the original train_lm.py for clarity and continuity. The integration of MaskedLMDataset with the training script has been verified with parameters consistent with existing training workflows.
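For context, "dynamic masking" in RoBERTa means the mask is re-sampled every time an example is served, rather than fixed once at preprocessing time, and masked positions follow BERT's 80/10/10 rule. The sketch below is a numpy illustration of that scheme, not the PR's implementation; the `MASK_ID` and `special_ids` values are assumptions that would be tokenizer-specific.

```python
import numpy as np

MASK_ID = 103    # assumed [MASK] token id; tokenizer-specific
MLM_PROB = 0.15  # fraction of tokens selected for masking

def dynamic_mask(tokens, rng, vocab_size, special_ids=frozenset({0, 1, 2})):
    """Re-sample the mask each call (dynamic masking, as in RoBERTa)."""
    tokens = tokens.copy()
    # Never mask special tokens (e.g. BOS/EOS/PAD; ids assumed here).
    candidates = np.array([t not in special_ids for t in tokens])
    mask = (rng.random(len(tokens)) < MLM_PROB) & candidates
    roll = rng.random(len(tokens))
    # Of the selected tokens: 80% -> [MASK], 10% -> random id, 10% -> kept.
    tokens[mask & (roll < 0.8)] = MASK_ID
    rand_pos = mask & (roll >= 0.8) & (roll < 0.9)
    tokens[rand_pos] = rng.integers(0, vocab_size, rand_pos.sum())
    return tokens, mask
```

Because the rng is threaded in per call, two epochs over the same example produce different masks, which is the property the dataset class needs to preserve.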