pdb: loss reference before assignment #15

Open
Nuveyla opened this issue May 3, 2022 · 8 comments

Comments

@Nuveyla

Nuveyla commented May 3, 2022

Hey,

When running python train.py --config config/wnut17_doc_cl_kl.yaml with the original code (the only changes are to paths), I run into an error saying that the loss is referenced before assignment. See the following screenshot:

[screenshot: error traceback, loss referenced before assignment]

The underlying TypeError causes this issue. I have tried adding is_split_into_words=True to line 3171 of embeddings.py. This gave a new error:

[screenshot: new error traceback]

with the same result again (loss is never assigned). What could be the cause of this?
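
For context, a "referenced before assignment" error on loss usually just means that an earlier exception (here, the TypeError) prevented loss from ever being assigned before it is used. A minimal, self-contained sketch of the pattern, with hypothetical names rather than the actual code of this repository:

```python
# Hypothetical sketch of how an earlier TypeError turns into
# "local variable 'loss' referenced before assignment".
class DummyModel:
    def forward_loss(self, batch):
        # Stand-in for the tokenizer call that fails.
        raise TypeError("simulated tokenizer failure")

def training_step(model, batch):
    try:
        loss = model.forward_loss(batch)  # raises, so `loss` is never assigned
    except TypeError as exc:
        print(f"forward pass failed: {exc}")
    return loss.item()  # -> UnboundLocalError

training_step(DummyModel(), batch=["a sentence"])
```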

@wangxinyu0922
Member

What version of transformers are you using? Our code requires transformers==3.0.0. It seems that your installed transformers is a newer version.

(Not recommended) If you want to run the code with a newer version of transformers, you need to change the tokenizer settings in this line of flair/embeddings.py to:

self.tokenizer = AutoTokenizer.from_pretrained(model, use_fast=False, **kwargs)
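
If you are not sure which version is installed in the environment that runs train.py, a quick check:

```python
# Print the transformers version that train.py will actually import.
import transformers
print(transformers.__version__)  # should be 3.0.0 for this repository
```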

@manzoorali29

Hi, my transformers version is 3.0.0 and I am still facing the above-mentioned issue. Is there any other solution available? Or Nuveyla, have you found a solution to the problem? Thanks.

@wangxinyu0922
Member

> Hi, my transformers version is 3.0.0 and I am still facing the above-mentioned issue. Is there any other solution available? Or Nuveyla, have you found a solution to the problem? Thanks.

Hi, have you tried modifying the tokenizer settings in this line of flair/embeddings.py to:

self.tokenizer = AutoTokenizer.from_pretrained(model, use_fast=False, **kwargs)

@manzoorali29

> Hi, have you tried modifying the tokenizer settings in this line of flair/embeddings.py to:
>
> self.tokenizer = AutoTokenizer.from_pretrained(model, use_fast=False, **kwargs)

Yes, I did that, but I still get the same error.

@wangxinyu0922
Member

> Yes, I did that, but I still get the same error.

Can you post a screenshot of the error?

@manzoorali29

[screenshot: error traceback]

@wangxinyu0922
Member

> [screenshot: error traceback]

I'm not sure of the reason. I installed a new environment based on requirements.txt and cannot reproduce the error. It seems that something is wrong with the input batch. You could use pdb to find out what is happening in the code.
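
For example (a minimal sketch; the exact breakpoint location is an assumption and depends on where the loss is computed in your copy of the code):

```python
# Option 1: run the whole script under pdb; after `c`, an uncaught exception
# drops you into post-mortem debugging, where `where` and `p` work:
#   python -m pdb train.py --config config/wnut17_doc_cl_kl.yaml
#   (Pdb) c
#
# Option 2: temporarily set a breakpoint just before the failing call
# (e.g. near the tokenizer call in flair/embeddings.py) and inspect the batch:
import pdb; pdb.set_trace()
# (Pdb) p batch
```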

Moreover, I see Device: cpu in your log. Currently, our code does not support running without a GPU; this may be the cause of the error.

@Chenfeng1271

[screenshot: error traceback]

Hi, I am running into this issue too. Three possible causes may contribute to it:

1. The transformers version needs to be 3.0.0.

2. torch must be a GPU build.

3. An incompatible GPU/CUDA combination. In this case torch.cuda.is_available() can still return True, but you get CUDA error: no kernel image is available for execution on the device when running. This is caused by a torch build that is too new for the GPU: for example, with a Tesla K40, torch 1.7.0 with CUDA 10.1 raises this error, while downgrading torch to 1.3.0 resolves it. (A quick sanity check is sketched below.)
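
A quick sanity check for points 2 and 3, using only standard torch calls:

```python
# Confirm that torch sees the GPU and that a kernel actually runs on it.
# Executing a real operation on the GPU is what surfaces
# "no kernel image is available" even when is_available() returns True.
import torch
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
x = torch.randn(8, 8, device="cuda")
print((x @ x).sum().item())
```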
