Masked BERT model from keras_nlp produces weird results #1963

Open
vctrs opened this issue Oct 30, 2024 · 1 comment
Assignees
Labels
type:Bug Something isn't working

Comments

@vctrs

vctrs commented Oct 30, 2024

Description

I am trying to use keras-nlp with a pretrained masked BERT model to predict some tokens in a sequence. However, the model produces inconsistent results. Am I doing something wrong?

To Reproduce

https://colab.research.google.com/drive/1kL5oXveRen9Zub_7kJCI28Gg34GfDt_x

Expected behavior

Tokens in the predictions should match the tokens in the labels. The preprocessor masks two tokens, so the first two predicted tokens should presumably match the labels.
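
Since the reproduction lives in the external colab, here is a minimal sketch of the kind of setup described above. It assumes the `bert_base_en_uncased` preset with `BertMaskedLMPreprocessor` and `BertMaskedLM` from `keras_nlp`; the sample sentence and variable names are illustrative and not taken from the notebook.

```python
import numpy as np
import keras_nlp

# Preprocessor that tokenizes raw text and randomly masks tokens.
# Calling it returns (x, y, sample_weight), where y holds the original
# ids of the masked positions.
preprocessor = keras_nlp.models.BertMaskedLMPreprocessor.from_preset(
    "bert_base_en_uncased",
    sequence_length=128,
)

# Load the masked-LM task without its built-in preprocessing so the
# already-preprocessed features can be fed in directly.
masked_lm = keras_nlp.models.BertMaskedLM.from_preset(
    "bert_base_en_uncased",
    preprocessor=None,
)

# Illustrative input sentence (not the one from the colab).
x, y, sample_weight = preprocessor(
    ["The quick brown fox jumped over the lazy dog."]
)

# Logits over the vocabulary for each masked position:
# shape (batch_size, num_mask_positions, vocab_size).
logits = masked_lm.predict(x)
predicted_ids = np.argmax(logits, axis=-1)

# Compare predicted tokens against label tokens at the real mask
# positions (sample_weight is 0 for padded mask slots).
tokenizer = preprocessor.tokenizer
labels = np.asarray(y)
weights = np.asarray(sample_weight)
for pred_id, label_id, w in zip(predicted_ids[0], labels[0], weights[0]):
    if w:
        print(tokenizer.id_to_token(int(pred_id)),
              "vs", tokenizer.id_to_token(int(label_id)))
```

If this mirrors the notebook's setup, the predicted tokens at the masked positions would be expected to mostly agree with the labels for a pretrained checkpoint; a consistent mismatch would suggest a problem in weight loading or in how the masked positions are consumed.
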

@mehtamansi29 mehtamansi29 added the type:Bug Something isn't working label Nov 7, 2024
@mehtamansi29
Collaborator

Hi @vctrs -

Thanks for reporting the issue. I have tested the code snippet and can reproduce the reported behaviour.
I have attached a gist for reference.

We will look into the issue and keep you updated here.
