Added task_id to Tests and Test Reports #65
Conversation
WIP-ing this until I get the golden tests updated.
@cmccandless - So now all my added tests are passing. But I am worried. I had to generate the golden files for my added tests using the We also definitely have To see this in action, edit I feel like it's not fatal - just really ugly (and should be fixed), so I'll log a bug in the morning, and see if I can figure out how to fix it. Any thoughts/insights/pointers you have to that end would be wonderful. 😄

Logged as Issue 67
That does sound messy... I love the maintainability of subtests/parameterization, but it's proving to be a real headache with tooling. We might want to discuss the possibility of breaking out the subtests into individual tests.
Yup. I don't want to, but if we keep running into issues like this we'll have to. I am hoping this is a small adjustment. But if not, we will have to either do individual tests, or maybe use PyTest to parameterize, as opposed to
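For illustration only, here is a rough pytest-style sketch of that alternative; the `two_fer` exercise, parameter names, and expected strings are made-up stand-ins, not code from this repo:

```python
import pytest


def two_fer(name=None):
    # Stand-in implementation, for illustration only.
    return f"One for {name or 'you'}, one for me."


# Each parameter set is collected as its own test item, so pytest's
# counts and reports map one-to-one to cases.
@pytest.mark.parametrize(
    "name,expected",
    [
        (None, "One for you, one for me."),
        ("Alice", "One for Alice, one for me."),
    ],
)
def test_two_fer(name, expected):
    assert two_fer(name) == expected
```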
Huh. It appears the bug is not about subtests. It looks like But now we have to figure out how we get the

Logged as Issue 66
TL;DR: I think this can be reviewed/merged, and then we work on the bugs.
```python
# Changes status of parent to fail if any of the subtests fail.
if state.fail:
    self.tests[parent_test_name].fail(
        message="One or more subtests for this test failed. Details can be found under each variant."
    )
self.tests[parent_test_name].test_code = state.test_code
```
Does the subtests module not already do this?
🤣 No. The subtests module makes it possible to use subTest without PyTest freaking out. There is actually an ongoing discussion on the pytest-subtests repo right now about how to handle failures and test counting. And - as that discussion outlines - this is actually behavior inherited from unittest. So for our plugin, I decided to make it "cleaner". But it is still weird to have a count mismatch when all tests pass vs. when some that have sub-tests fail (this happens with all parameterization in unittest/pytest, if I am remembering correctly).
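For reference, a minimal unittest sketch of that inherited behavior (the exercise and values are made up): one test method with several subTest variants still counts as a single test, even though each variant can fail on its own.

```python
import unittest


def two_fer(name=None):
    # Stand-in implementation, for illustration only.
    return f"One for {name or 'you'}, one for me."


class TwoFerTest(unittest.TestCase):
    def test_variants(self):
        for name, expected in [
            (None, "One for you, one for me."),
            ("Alice", "One for Alice, one for me."),
        ]:
            # Each failing variant is reported separately, but the runner
            # still counts test_variants as one test - hence the mismatch.
            with self.subTest(name=name):
                self.assertEqual(two_fer(name), expected)


if __name__ == "__main__":
    unittest.main()
```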
...I'd actually love to do a refactor of our runner that treats parameterization more like a matrix -- so that there is a clean count of which tests are parameterized, then a process that "explodes" the matrix into individual cases, then makes a summary. But I don't think we want to do that right now.
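A very rough sketch of what that could look like; the names (`explode_cases`, `summarize`) and data shapes are purely hypothetical, not an existing design:

```python
from itertools import product

# Hypothetical matrix: each parameterized test maps parameter names to values.
MATRIX = {
    "test_two_fer": {"name": [None, "Alice", "Bob"]},
}


def explode_cases(matrix):
    """Expand each parameterized test into individual (test, params) cases."""
    for test_name, params in matrix.items():
        keys = sorted(params)
        for values in product(*(params[k] for k in keys)):
            yield test_name, dict(zip(keys, values))


def summarize(matrix):
    """Report a clean count of parameterized tests and exploded variants."""
    cases = list(explode_cases(matrix))
    return {"parameterized_tests": len(matrix), "variants": len(cases)}


print(summarize(MATRIX))  # {'parameterized_tests': 1, 'variants': 3}
```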
Probably not right now, but that sounds like a good idea.