Commit
Setting torch_dtype to bfloat16
glerzing committed Jul 22, 2023
1 parent a21a47a commit c40cdc4
Showing 1 changed file with 1 addition and 0 deletions: examples/ppo_sentiments_8bit.py
@@ -39,6 +39,7 @@ def main(hparams={}):
     # Set the model loading in 8 bits
     config.model.from_pretrained_kwargs = {
         "load_in_8bit": True,
+        "torch_dtype": torch.bfloat16,
         "device_map": "auto",
     }
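As a sketch of what this change does: trlx forwards `from_pretrained_kwargs` to the transformers `from_pretrained()` call, so the model is loaded 8-bit quantized while the non-quantized modules are kept in bfloat16. The snippet below builds the same kwargs dict standalone (dict name taken from the diff; the string dtype form is shown here so the snippet runs without importing torch, whereas the commit itself passes `torch.bfloat16`):

```python
# Sketch of the kwargs this commit produces (assumption: trlx passes this
# dict verbatim to transformers' from_pretrained()).
from_pretrained_kwargs = {
    "load_in_8bit": True,       # quantize weights via bitsandbytes
    "torch_dtype": "bfloat16",  # dtype for non-quantized modules; the commit uses torch.bfloat16
    "device_map": "auto",       # let accelerate place layers across available devices
}

print(sorted(from_pretrained_kwargs))
```

Setting `torch_dtype` explicitly avoids the transformers default of float32 for the non-quantized parts, which would otherwise cost extra memory alongside the 8-bit weights.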

