Intermittent IndexError in pytest-cov during Apache Beam Unit Tests: INTERNALERROR> IndexError: bytearray index out of range
#607
Comments
INTERNALERROR> IndexError: bytearray index out of range
Do you have a link to code we can run to reproduce the error?
Unfortunately it's a private repo for work and I haven't been able to reproduce the error with toy tests. If I'm able to, I'll post the code here. Do you have any thoughts on what I can look into or where the error might originate, if you've seen something like this before?
This error will happen if the recorded line numbers include values less than -8.
I am a contributor on the same project - the nums passed in seem to include a lot of large negative values, which are also inconsistent between separate test runs (which is why this problem only occurs sometimes, I presume - by chance, negative numbers may not occur). Here are examples from two different test runs:
@Al-Minkin do the positive numbers correspond to executable line numbers in your source file? Also, if you negate the negative numbers, do they correspond to lines in your source file? BTW: I was wrong, this isn't because you have values less than -8; it's because you have negative numbers larger (in absolute value) than your largest positive number. BUT: you shouldn't have any negative numbers in the first place. Can you add --debug=sys to your coverage run command?
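Concretely, this is roughly how the failing encoding works: coverage.py packs each file's executed line numbers into a bitmap, one bit per line. The following is a paraphrased sketch of that packing (the real nums_to_numbits in coverage/numbits.py also wraps its result for SQLite storage):

```python
# Paraphrased sketch of coverage.py's numbits packing, not the exact source.
def nums_to_numbits(nums):
    """Pack a set of line numbers into a bitmap, one bit per line."""
    try:
        nbytes = max(nums) // 8 + 1   # buffer sized by the largest line number
    except ValueError:
        return b""                    # nums was empty
    b = bytearray(nbytes)
    for num in nums:
        b[num // 8] |= 1 << (num % 8)  # a negative num indexes from the end
    return bytes(b)

nums_to_numbits([1, 5, 12])        # fine: all line numbers positive
nums_to_numbits([1, 5, 12, -3])    # silently sets a bit in the wrong byte
nums_to_numbits([1, 5, 12, -200])  # IndexError: bytearray index out of range
```

A mildly negative number wraps around and silently corrupts the bitmap; a negative number larger in absolute value than the buffer length raises exactly the IndexError in this report.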
I don't think we have a separate coverage run line - we run coverage as a module as part of pytest. I tried adding it to pytest itself, but the output wasn't meaningfully different to my eyes. We have a coveragerc config that we supply to it - what would I add to it to achieve the same result? As for line numbers - it's hard for me to say what they correspond to because I am not sure what that line count represents. Is that cumulative line counts per test, total line counts in the file, individual line numbers that have been executed so far, or something else?
You can add this to your coveragerc file:

```ini
[run]
debug = sys
```

The numbers that you are seeing are not line counts. They are line numbers, which is why they should not be negative. The error is happening where coverage is trying to record the set of line numbers that have been executed. It would be useful to know how the line numbers you are seeing correspond to the contents of the file.
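As an aside, the same diagnostics can be enabled without editing the config file: coverage.py also reads debug options from the COVERAGE_DEBUG environment variable, so running the suite as e.g. `COVERAGE_DEBUG=sys pytest` should produce the same output even when coverage is driven indirectly by pytest-cov.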
I've looked into this issue further and I think I solved it on our end, but an upstream solution may be preferable on your side, since I still do not really know the true cause of the error. Here are the results of my investigation:

First of all, I re-ran the test suite about 50 times and logged all linenos passed to the nums_to_numbits function, then compared them from one test run to the next. For all files except a file we'll call test_utils.py, the set of line numbers is exactly the same from one test suite run to the other.

For test_utils.py, the situation is a bit more complex. The set of linenos can be split into two subsets - let's call them "true lines" and "ghost lines". True lines correspond to the code lines in test_utils.py, are consistent from test run to test run, and include almost every code line that is expected to be covered except two (see below). Ghost lines seem to be random numbers from about -300 to 300 (the range is not exact; test_utils.py is only about 100 lines long), correspond to nothing, and if by chance a ghost line happens to be negative enough, the coverage collection fails entirely.

The key part of the file (as well as the change I made to resolve the problem) looks like this:
After this change was made, all ghost lines vanished from the logs and the test suite coverage stopped crashing. Why this change worked is not entirely clear to me, but I suspect it has something to do with the way Beam pickles functions or executes matchers, which may interfere with line coverage somehow. Hence, I think an upstream solution may be necessary.
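For anyone wanting to reproduce the lineno logging described above, here is a hypothetical sketch (the wrapper and log file name are illustrative, and coverage.numbits is an internal module whose layout may change between versions):

```python
# conftest.py -- hypothetical sketch: log every batch of line numbers that
# reaches coverage.py's nums_to_numbits, to spot negative "ghost" linenos.
import coverage.numbits

_original_nums_to_numbits = coverage.numbits.nums_to_numbits

def _logging_nums_to_numbits(nums):
    nums = sorted(nums)  # materialize and order for readable logs
    with open("numbits_log.txt", "a") as log:
        log.write(f"{nums}\n")
    return _original_nums_to_numbits(nums)

coverage.numbits.nums_to_numbits = _logging_nums_to_numbits
# Modules that imported the function directly (e.g. coverage.sqldata)
# may need the same patch, depending on import order.
```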
I don't see why those changes would have affected this behavior, but I also can't understand how the negative line numbers happen in the first place. Is there any way you can give me code I can run? We can talk privately in email if needed: [email protected]. Can you create a different Apache Beam project that also demonstrates the problem?
Sadly, I can't justify committing more company time to this problem, especially since it no longer breaks our builds. I just wanted to document my findings in case someone else has a similar problem in the future.
Summary
I get this traceback when running pytest unit tests that use the Apache Beam testing framework.
This is an intermittent issue. It often resolves if I rerun the test suite 1-3 times.
Based on the traceback, the problem involves pytest_cov/plugin.py and related modules: the error originates in the pytest-cov plugin, which drives the coverage library to collect and report coverage data during the test run. The error occurs while the coverage data collected during test execution is being flushed, i.e. processed and saved.
All tests in the pytest output are marked as passed; the issue occurs regardless of whether tests pass or fail.
I only get this issue on my test suite with Apache Beam. It does not happen on a different test suite with the same pytest/coverage versions installed but without the Beam package or any Beam tests. I am using a DirectRunner for Beam, and tests run sequentially (i.e. no parallelism).
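For reference, the Beam tests in question follow this general shape (an illustrative sketch using Beam's public testing utilities, not code from the private repo):

```python
# Illustrative only: a typical Beam unit test that runs on the DirectRunner.
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to

def test_double():
    with TestPipeline() as p:  # TestPipeline defaults to the DirectRunner
        result = p | beam.Create([1, 2, 3]) | beam.Map(lambda x: x * 2)
        assert_that(result, equal_to([2, 4, 6]))  # matcher runs at pipeline exit
```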
Expected vs actual result
The unit tests should run successfully without encountering the IndexError mentioned above. The coverage reporting process should handle data flushing reliably, regardless of the code paths being executed during the tests.
Reproducer
Versions
Python: 3.10
pytest-cov: 4.1.0
coverage: 7.3.0
apache_beam: 2.46.0
Test environment: Docker container running on top of Ubuntu (GitHub Actions CI machines)
Test command:
Config
.coveragerc
Code