
Fix Mistakes in the Dataset #23

Open

wants to merge 7 commits into base: master

Conversation

@marcusm117 commented on May 11, 2023

Dear HumanEval Maintainers,

Thank you so much for sharing this awesome Test Set!

I fully understand that, due to the nature of a test set, we want to keep it as unchanged as possible. However, during our usage we found a few mistakes in some prompts, canonical solutions, and test cases (some were also raised in previous issues: https://github.com/openai/human-eval/issues).

These mistakes affect HumanEval's ability to accurately reflect the performance of a code generation model. Therefore, I'd like to propose an enhanced version of HumanEval that fixes these known mistakes.

The changes made to the original repo:

  1. Add file human-eval-enhanced-202307.jsonl.gz to the data folder. This file is the compressed, fixed dataset and includes the following 14 changes. Details about the mistakes and changes are documented in tests.py in the same folder (a minimal loading sketch follows this change list).
  • fix HumanEval_32, fix typo in prompt
  • fix HumanEval_38, fix no example in prompt
  • fix HumanEval_41, fix no example in prompt
  • fix HumanEval_47, fix wrong example in prompt
  • fix HumanEval_50, fix no example & ambiguous prompt
  • fix HumanEval_57, fix ambiguous prompt & typo
  • fix HumanEval_64, fix unnecessary statement in prompt
  • fix HumanEval_67, fix typo in prompt
  • fix HumanEval_75, fix wrong prompt
  • fix HumanEval_83, fix no example in prompt
  • fix HumanEval_95, fix wrong canonical solution & incomplete test cases
  • fix HumanEval_116, fix wrong prompt and wrong examples in prompt
  • fix HumanEval_163, fix wrong canonical solution & wrong test cases
  • remove unnecessary leading spaces in prompts
  2. Add file tests.py to the data folder. This file includes tests for the changes in human-eval-enhanced-202307.jsonl, as well as details about the mistakes in the original dataset human-eval-v2-20210705.jsonl. The tests can be run as a script with the command python tests.py, or with pytest, following the detailed instructions at the top of tests.py.

  3. Add file .gitignore to the root directory. This file lists common files to ignore when building a Python project, especially .pytest_cache and __pycache__, since tests.py can be run with pytest. This .gitignore file is not essential and can optionally be removed from this PR.
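For reference, here is a minimal loading sketch (not part of the PR itself): it reads the compressed dataset with only the Python standard library, assuming the file path given above and the usual HumanEval record schema (task_id, prompt, canonical_solution, test, entry_point).

```python
import gzip
import json

# Assumed path: the file lands in the data folder as described above.
path = "data/human-eval-enhanced-202307.jsonl.gz"

problems = {}
with gzip.open(path, "rt", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        # Each record is expected to follow the HumanEval schema:
        # task_id, prompt, canonical_solution, test, entry_point.
        problems[record["task_id"]] = record

print(f"Loaded {len(problems)} problems; first task: {next(iter(problems))}")
```

The repo's own loading helpers should also be able to read a gzip-compressed path, but the plain gzip + json approach above avoids relying on that.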

Thanks for taking the time to review this PR. Any feedback would be much appreciated :)

[UPDATE] Apologies for not using compressed files from the start to avoid data leakage; it was an honest mistake. It is fixed now in this PR, and there will be no leakage once it is squash-and-merged. However, uncompressed files still exist in the history of another, accidentally opened (and since closed) PR. I can reach out to GitHub support to have them deleted if necessary.
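For completeness, a small sketch of how the compressed copy can be produced from a plain .jsonl using only the standard library; the filenames are the ones used in this PR, and the local source path is an assumption.

```python
import gzip
import shutil

# Compress the plain-text dataset so the raw problems are not committed
# uncompressed, which makes accidental scraping (data leakage) less likely.
src = "data/human-eval-enhanced-202307.jsonl"     # assumed local, uncompressed copy
dst = "data/human-eval-enhanced-202307.jsonl.gz"  # compressed file shipped in the PR

with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)
```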

Sincerely,

marcusm117

* fix HumanEval_32, fix typo in prompt

* fix HumanEval_38, fix no example in prompt

* fix HumanEval_41, fix no example in prompt

* fix HumanEval_47, fix wrong example in prompt

* fix HumanEval_50, fix no example & ambiguous prompt

* fix HumanEval_57, fix ambiguous prompt & typo

* fix HumanEval_67, fix typo in prompt

* fix HumanEval_83, fix no example in prompt

* fix HumanEval_95, fix wrong canonical solution & incomplete test cases

* fix HumanEval_163, fix wrong canonical solution & wrong test cases

@marcusm117 changed the title from "Fix Mistakes in the Data Set (#4)" to "Fix Mistakes in the Data Set" on May 11, 2023

* fix HumanEval_75, fix wrong prompt

* fix HumanEval_116, fix wrong prompt and wrong examples in prompt

* fix HumanEval_64, fix unnecessary statement in prompt

* remove unnecessary leading spaces in prompt

@kolergy commented on Jan 24, 2024

These fixes make a lot of sense; I'm surprised this hasn't been merged.

@marcusm117 changed the title from "Fix Mistakes in the Data Set" to "Fix Mistakes in the Dataset" on Jun 9, 2024