
Added task_id to Tests and Test Reports #65

Merged · 13 commits into exercism:main · May 26, 2021

Conversation

BethanyG (Member)

No description provided.

BethanyG marked this pull request as draft May 26, 2021 00:19
BethanyG (Member, Author)

WIP-ing this until I get the golden tests updated.

BethanyG marked this pull request as ready for review May 26, 2021 03:21
BethanyG (Member, Author) commented May 26, 2021

@cmccandless - So now all my added tests are passing. But I am worried. I had to generate the golden files for my added tests using the run-in-docker.sh script, which tells me that we do indeed have different results when using run.sh vs run-in-docker.sh vs plain pytest. Exactly the scenario we're trying to avoid.

We also definitely have a bug (or at least some weirdness) when capturing stdout in conjunction with a subtest. It appears that stdout is not flushed when using the subtest context -- or at least that flush is not happening when you invoke run.sh from the command line.

To see this in action, edit example_partial_failure_with_subtest.py to include a print("Captured stdout from user!!") statement, and then run run.sh for it. The results.json ends up with output from all subtests written into the main test result.
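For reference, a minimal sketch of the kind of repro I mean. The actual contents of example_partial_failure_with_subtest.py are not reproduced here, so the class name and test body below are illustrative assumptions only -- the point is the print call inside the subTest context.

import unittest


class PartialFailureWithSubtestTest(unittest.TestCase):
    def test_variants(self):
        for variant, expected in [(1, 1), (2, 3), (3, 3)]:
            with self.subTest(variant=variant):
                # stdout emitted inside the subtest context -- with run.sh this
                # currently ends up attached to the parent test's result.
                print("Captured stdout from user!!")
                self.assertEqual(variant, expected)  # the (2, 3) variant fails on purpose


if __name__ == "__main__":
    unittest.main()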

I feel like it's not fatal - just really ugly (and should be fixed), so I'll log a bug in the morning, and see if I can figure out how to fix it. Any thoughts/insights/pointers you have to that end would be wonderful. 😄

Logged as Issue 67

cmccandless (Contributor)

That does sound messy... I love the maintainability of subtests/parameterization, but it's proving to be a real headache with tooling. We might want to discuss the possibility of breaking out the subtests into individual tests.

BethanyG (Member, Author)

Yup. I don't want to, but if we keep running into issues like this, we'll have to. I am hoping this is a small adjustment. But if not, we will have to either break things out into individual tests, or maybe use pytest to parameterize, as opposed to unittest's subTest. However, replacing subTest carries a larger refactor, since we filter the AST using unittest class definitions.
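To make the trade-off concrete, here is a hedged sketch of the two styles being weighed -- neither snippet is taken from the runner's actual test suite. The unittest version keeps the TestCase class our AST filter relies on; the pytest version collects each variant as its own test item, but drops that class.

import unittest

import pytest


class UnittestSubtestStyle(unittest.TestCase):
    def test_variants(self):
        # One test method, several variants; AST filtering can key off the
        # unittest.TestCase class definition.
        for value in (1, 2, 3):
            with self.subTest(value=value):
                self.assertGreater(value, 0)


# Each variant is collected and reported as its own test, but there is no
# unittest class definition left for the AST filter to anchor on.
@pytest.mark.parametrize("value", [1, 2, 3])
def test_variants_parametrized(value):
    assert value > 0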

BethanyG (Member, Author) commented May 26, 2021

Huh. It appears the bug is not about subtests. It looks like stdout is not being captured when a test fails -- only when it succeeds. Or, in the subtest scenario, the parent is collecting the stdout from the children, which is actually as expected, when you think about it. On a success, you'd only have the stdout for each subtest, because there isn't a report generated for a successful test.

But now we have to figure out how we get the stdout from failing tests, because that's not coming through. But I don't think it ever did. 🤔 And that's a huge problem, since that doesn't conform to the runner spec.
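One possible angle of attack, as a sketch only (not the plugin's current code): pytest attaches captured output to each test report, so a logreport hook should be able to read it even for failing tests -- assuming the report's capstdout is populated the way I expect.

# conftest.py (sketch)
def pytest_runtest_logreport(report):
    # Runs for setup/call/teardown reports; we only care about failed calls.
    if report.when == "call" and report.failed:
        captured = report.capstdout  # "" if the test printed nothing
        if captured:
            # In the real plugin this would be written into results.json
            # for the failing test, not printed.
            print(f"[stdout for {report.nodeid}]\n{captured}")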

Logged as Issue 66

@BethanyG BethanyG marked this pull request as draft May 26, 2021 04:22
BethanyG (Member, Author)

TL;DR: I think this can be reviewed/merged, and then we work on the bugs.

BethanyG marked this pull request as ready for review May 26, 2021 05:36
Comment on lines +100 to +105
# Changes status of parent to fail if any of the subtests fail.
if state.fail:
    self.tests[parent_test_name].fail(
        message="One or more subtests for this test failed. Details can be found under each variant."
    )
self.tests[parent_test_name].test_code = state.test_code
cmccandless (Contributor)

Does the subtests module not already do this?

BethanyG (Member, Author)

🤣 No. The subtests module makes it possible to use subTest without pytest freaking out. There is actually an ongoing discussion on the pytest-subtests repo right now about how to handle failures and test counting. And - as that discussion outlines - this is behavior inherited from unittest. So for our plugin, I decided to make it "cleaner". But it is still weird to have a count mismatch between a run where all tests pass and one where some tests with subtests fail (this happens with all parameterization in unittest/pytest, if I am remembering correctly).
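For anyone following along, a small hedged illustration of that count mismatch (not from our test suite): unittest reports the following as a single test, yet a failing run lists two subtest failures, so the totals shift depending on whether the variants pass.

import unittest


class CountMismatchTest(unittest.TestCase):
    def test_counting(self):
        # Reported as "Ran 1 test", but a failing run shows failures=2
        # because two of the four variants fail.
        for value in (1, 2, 3, 4):
            with self.subTest(value=value):
                self.assertEqual(value % 2, 1)


if __name__ == "__main__":
    unittest.main()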

BethanyG (Member, Author)

...I'd actually love to do a refactor of our runner that treats parameterization more like a matrix -- so that there is a clean count of which tests are parameterized, then a process that "explodes" the matrix into individual cases, then makes a summary. But I don't think we want to do that right now.
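Very roughly, and purely hypothetically (none of these names exist in the runner today), the shape I have in mind is something like:

from dataclasses import dataclass, field
from typing import Any


@dataclass
class ParameterizedTest:
    name: str
    cases: list[tuple[Any, ...]] = field(default_factory=list)

    def explode(self) -> list[str]:
        # Each row of the matrix becomes an individually countable case.
        return [f"{self.name}[{'-'.join(map(str, case))}]" for case in self.cases]


matrix = ParameterizedTest("test_hello", cases=[(1, 1), (2, 3)])
print(matrix.explode())                                    # ['test_hello[1-1]', 'test_hello[2-3]']
print({"parameterized_tests": 1, "cases": len(matrix.explode())})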

cmccandless (Contributor)

Probably not right now, but that sounds like a good idea.

BethanyG merged commit dd7dc03 into exercism:main May 26, 2021