Hi!

What if I want to use SciBERT embeddings in my model? Is it enough to just replace this code:

```python
from allennlp.data.token_indexers import PretrainedBertIndexer

token_indexer = PretrainedBertIndexer(
    pretrained_model="bert-base-uncased",
    max_pieces=config.max_seq_len,
    do_lowercase=True,
)

def tokenizer(s: str):
    return token_indexer.wordpiece_tokenizer(s)[:config.max_seq_len - 2]
```

with this code?

```python
from allennlp.data.token_indexers import PretrainedBertIndexer

token_indexer = PretrainedBertIndexer(
    pretrained_model="scibert-scivocab-uncased",
    max_pieces=config.max_seq_len,
    do_lowercase=True,
)

def tokenizer(s: str):
    return token_indexer.wordpiece_tokenizer(s)[:config.max_seq_len - 2]
```
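For context, the `[:config.max_seq_len - 2]` slice in both snippets reserves two positions for the `[CLS]` and `[SEP]` special tokens that the BERT indexer adds around the sequence. A minimal sketch of that truncation, using a hypothetical whitespace tokenizer in place of the real wordpiece tokenizer:

```python
# Stand-in for the truncation logic above: the "- 2" leaves room for
# BERT's [CLS] and [SEP] special tokens within max_seq_len positions.
MAX_SEQ_LEN = 5  # hypothetical value for config.max_seq_len

def tokenize(s: str):
    pieces = s.split()  # placeholder for wordpiece tokenization
    return pieces[:MAX_SEQ_LEN - 2]  # keep at most MAX_SEQ_LEN - 2 pieces

tokens = tokenize("scibert embeddings for scientific text classification")
print(tokens)  # ['scibert', 'embeddings', 'for']
```

The same slicing applies regardless of which pretrained model is loaded, so swapping the model name does not change this part.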