Hi, according to the `preprocess.py` file, you choose the special tokens as follows. In the Hugging Face tokenizer implementation, they use `'<|endoftext|>'` for all of these special tokens. Is there any reason to use other tokens from the vocab as special tokens? And what happens if those tokens appear in the dataset after BPE encoding?

Thanks
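For reference, the Hugging Face behavior can be checked directly with the `transformers` library. This is a minimal sketch, assuming the GPT-2 tokenizer is the implementation being compared against:

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")

# GPT-2 reuses a single string for every special-token role.
print(tok.bos_token, tok.eos_token, tok.unk_token)
# -> <|endoftext|> <|endoftext|> <|endoftext|>

# If the special-token string appears verbatim in the raw data, the
# tokenizer matches it as an added token and emits the single special
# id (50256 for GPT-2) instead of BPE-splitting the characters --
# exactly the kind of collision the question is asking about.
print(tok.encode("hello <|endoftext|> world"))
```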