
Add support for max_length in run_generation #472

Open
ankurneog opened this issue Oct 18, 2023 · 3 comments
Labels: bug (Something isn't working), good first issue (Good for newcomers)

Comments

@ankurneog (Contributor)

System Info

This was caught while executing the transformers unit tests for optimum-habana:
https://github.com/huggingface/optimum-habana/blob/main/tests/transformers/tests/models/gpt2/test_modeling_gpt2.py

Several test cases fail because the config in these tests sets max_length for text generation rather than max_new_tokens.

Text generation therefore fails for decoder-only models at this check:
            if not self.config.is_encoder_decoder:
                # only pad if bucket_size < -1. If we are bucketing (bucket_size > 0), then that is taken care in greedy_search()
                if not is_greedy_and_bucket:
                    # token_idx is the current index in the generation process, it is incremented each time a new token is generated
                    model_kwargs["token_idx"] = torch.tensor(inputs_tensor.shape[-1], device=inputs_tensor.device)
>                   inputs_tensor = torch.nn.functional.pad(
                        inputs_tensor, (0, generation_config.max_new_tokens), value=generation_config.pad_token_id
                    )
E                   TypeError: pad(): argument 'pad' must be tuple of ints, but found element of type NoneType at pos 2

max_new_tokens is None here, which produces the invalid pad argument.
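As a minimal illustration of how the None reaches the pad call (Cfg below is a simplified, hypothetical stand-in for transformers' GenerationConfig, not the real class):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cfg:
    # hypothetical stand-in for transformers.GenerationConfig
    max_length: int = 20
    max_new_tokens: Optional[int] = None  # stays None when only max_length is set

cfg = Cfg(max_length=20)  # the failing tests configure max_length only
pad_arg = (0, cfg.max_new_tokens)  # (0, None)
# torch.nn.functional.pad requires a tuple of ints, so passing pad_arg
# raises: TypeError: pad(): argument 'pad' must be tuple of ints ...
print(pad_arg)
```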


FAILED test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate - TypeError: pad(): argument 'pad' must be tuple of ints, but found element of type NoneType at pos 2

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

python -m pytest -vs test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate

Expected behavior

The test should pass.

@ankurneog ankurneog added the bug Something isn't working label Oct 18, 2023
@ankurneog (Contributor, Author)

FYI: @regisss @ssarkar2

@ssarkar2 (Collaborator)

A preliminary look shows that max_new_tokens is None. run_generation.py was tested with max_new_tokens but not max_length, which are two mutually exclusive ways of specifying the generation length, as mentioned here. The test is called from here, which uses max_length instead of the more thoroughly tested max_new_tokens.
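One possible direction for a fix, sketched here under the assumption that the two options are mutually exclusive as described above (resolve_max_new_tokens is a hypothetical helper, not existing optimum-habana code):

```python
def resolve_max_new_tokens(max_length, max_new_tokens, prompt_len):
    """Hypothetical helper: map the two mutually exclusive knobs to one value.

    max_new_tokens counts only generated tokens; max_length also counts
    the prompt, so their difference gives the number of new tokens.
    """
    if max_new_tokens is not None:
        return max_new_tokens
    if max_length is not None:
        return max(max_length - prompt_len, 0)
    raise ValueError("set either max_new_tokens or max_length")

# a 5-token prompt with max_length=20 leaves room for 15 new tokens
print(resolve_max_new_tokens(max_length=20, max_new_tokens=None, prompt_len=5))
```

Deriving an int before the torch.nn.functional.pad call would avoid the NoneType TypeError for tests that only set max_length.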

@ssarkar2 (Collaborator)

#476
