"check" convenience shortcut for subtests fixture #28

Closed
wants to merge 1 commit

Conversation

@maxnikulin (Contributor)


- Support nested `subtests.test()` contexts, similar to
  `unittest.TestCase.subTest()` chaining of argument sets.
- Make the `subtests` instance a re-entrant context manager,
  similar to `subtests.test()`.
- Make the `subtests` instance a callable that sets the context (scope)
  for further subtests, or only for nested scopes if the return
  value is used as a context manager.
- Add `check` as a concise alias for the `subtests` fixture.

Essentially, the goal is to adapt `pytest-subtests` for checking
multiple aspects of compound objects, e.g. a class's fields, within a
single unit test, and to get a report with all failures at once instead of
only the first error.

    from datetime import datetime

    def test_fields(check):
        dt = datetime.utcfromtimestamp(1234567890)
        with check: assert dt.year == 2009, "Year is OK"
        with check: assert dt.month == 1
        with check: assert dt.day == 13, "Day is OK"
        with check: assert dt.hour == 27

Use with the `--tb=short` option.

The bright idea of the context manager shortcut belongs to @ionelmc:
okken/pytest-check#9
@maxnikulin (Contributor, Author)

Let's add some sugar to make writing tests more convenient. The bright idea is borrowed from okken/pytest-check#9 (@ionelmc), adapted for pytest-dev/pytest-subtests, and extended a bit.

A new fixture, check, is added; its name is short, and that matters. It is similar to subtests.test(), but it is a context manager and it is callable. Wisely combined with traditional asserts at the top level, it allows writing concise tests for compound types, with failure reports containing several errors at once. The intention of the catch-all approach is to help figure out what happened in the code with less debugging.

from datetime import datetime

def test_fields(check):
    dt = datetime.utcfromtimestamp(1234567890)
    with check: assert dt.year == 2009, "OK"
    with check: assert dt.month == 1
    with check: assert dt.day == 13, "OK"
    with check: assert dt.hour == 27

Output with the --tb=short option:

=================================== FAILURES ===================================
___________________________ test_fields (<subtest>) ____________________________
test_fields.py:7: in test_fields
    with check: assert dt.month == 1
E   assert 2 == 1
E    +  where 2 = datetime.datetime(2009, 2, 13, 23, 31, 30).month
___________________________ test_fields (<subtest>) ____________________________
test_fields.py:9: in test_fields
    with check: assert dt.hour == 27
E   assert 23 == 27
E    +  where 23 = datetime.datetime(2009, 2, 13, 23, 31, 30).hour

check is an alias for subtests; the original variant subtests.test() can be used as before. An additional feature is support for nested contexts with details, similar to chained unittest.TestCase.subTest() parameters.

The fixtures are callable, so the purpose of the checks can be set for a nested scope or for subsequent usages.

from datetime import datetime

def test_details_example(check):
    ts = 1234567890
    dt = datetime.utcfromtimestamp(ts)
    with check(timestamp=ts):
        check("date")
        with check: assert dt.year == 2009, "OK"
        with check: assert dt.month == 1
        with check: assert dt.day == 13, "OK"
        check("time")
        with check("hour"): assert dt.hour == 27
        with check: assert dt.minute == 13
=================================== FAILURES ===================================
______________ test_details_example [date] (timestamp=1234567890) ______________
test_details_example.py:10: in test_details_example
    with check: assert dt.month == 1
E   assert 2 == 1
E    +  where 2 = datetime.datetime(2009, 2, 13, 23, 31, 30).month
______________ test_details_example [hour] (timestamp=1234567890) ______________
test_details_example.py:13: in test_details_example
    with check("hour"): assert dt.hour == 27
E   assert 23 == 27
E    +  where 23 = datetime.datetime(2009, 2, 13, 23, 31, 30).hour
______________ test_details_example [time] (timestamp=1234567890) ______________
test_details_example.py:14: in test_details_example
    with check: assert dt.minute == 13
E   assert 31 == 13
E    +  where 31 = datetime.datetime(2009, 2, 13, 23, 31, 30).minute

The timestamp keyword argument is shared by all scopes; the "time" msg argument is restored after the msg="hour" scope completes.

When a subtest failure leads to a test failure (see #27), the summary report becomes less confusing.

@maxnikulin (Contributor, Author)

Please consider this pull request a proposal to be discussed.

I suppose that the brief form of the context manager feature of pytest-check is the most valuable one. It makes plain assert convenient enough that dedicated assertion methods become unnecessary. The feature is suitable as an extension of the pytest-subtests package. In my opinion both packages are at the proof-of-concept stage and support complementary sets of features expected from pytest core or unittest.

I was looking for a Pythonic way to allow non-fatal (soft) assertions, similar to EXPECT_EQ from "Googletest Primer. Assertions", to get thorough failure reports that catch all problems with compound types at once. Stack Overflow suggests unittest.subTest(), but it is too long for typing and hurts test code readability. Rewriting the AST to make some function, e.g. expect(), special requires considerable work. The idea of with check: is very close to what I was originally expecting.
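For comparison, here is a minimal sketch of the standard-library approach mentioned above, using plain unittest.TestCase.subTest() (not part of this PR; the class name and subtest messages are made up for illustration):

    import unittest
    from datetime import datetime

    class TestFields(unittest.TestCase):
        def test_fields(self):
            dt = datetime.utcfromtimestamp(1234567890)
            # each subTest() block is reported separately,
            # so all failing fields show up in a single run
            with self.subTest("year"):
                self.assertEqual(dt.year, 2009)
            with self.subTest("month"):
                self.assertEqual(dt.month, 1)
            with self.subTest("hour"):
                self.assertEqual(dt.hour, 27)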

Currently I am not happy with the pytest report summary, but that is only loosely related to these changes. The final list of failures is not informative enough to identify a particular failed check. The assert message is not the best identifier. I have tried to include the subtest description in the test nodeid, but it negatively affects the progress report. Unfortunately subtests are not first-class citizens in pytest.

The goal is failure reports that shed as much light as possible on the problem with the code, without subsequent debugging and even without excessive scrolling of the report. So I would like to hear ideas for further improvements.

@ionelmc (Member) commented Jun 18, 2020

I believe that only the message should be customizable in the output (e.g. being able to replace the "(<subtest>)" with "(some message that gives context)"). I just don't see a use case for arbitrary kwargs, but I do see one for markers ;)

If you want to take kwargs, go for that, e.g.:

with check(skipif=...): ...
with check(xfail=...): ...

or

with check(mark=skipif(...)): ...

Using those kwargs for markers (behavior changing) is way more useful than more output customization (which you can do with the msg anyway).

@maxnikulin (Contributor, Author)

Keyword arguments are mostly a shortcut for check("time {}".format(dict(timestamp=ts))). But since they are accumulated across nested scopes, they might be useful when a subtest calls a helper that has its own subtests.

As to skipif, it could be expressed through something like

if pytest.checkskip(...):
    with check: assert a == b

The greater problem with subtest expected failures and skips is accounting in the parent test. In my opinion a subtest result is less important than the test result. Though the former affects the latter, the counters should be different. nodes.Item should have some features of a container, but not of Collector, since it is impossible to determine the number of subtests in advance. There is #11 for accounting issues. The code converting a failure to xfail should not be duplicated.

As a possible alternative, if an "xfail" mark is desired for subtests,

with check.xfail: assert 1 == 2

could be considered.

@ionelmc (Member) commented Jun 18, 2020

Well, I guess I don't see myself using nesting. As a matter of fact, I'm wondering what the actual use cases for nesting/scoping are. I guess a fixture using the check cm to do some asserts before returning whatever, but would that actually work? It should be part of the test suite, I think.

@nicoddemus (Member)

Thanks @maxnikulin, but I think this is out of scope for this plugin: my intent is to have the same characteristics as TestCase.subTest, so I would prefer not to add more features to it other than the necessary ones.

FWIW I use pytest-regressions for your use case:

from datetime import datetime

def test_fields(data_regression):
    dt = datetime.utcfromtimestamp(1234567890)
    # collect the actual field values; data_regression compares them
    # against the previously recorded regression file
    data = dict(
        year=dt.year,
        month=dt.month,
        day=dt.day,
        hour=dt.hour,
    )
    data_regression.check(data)

@maxnikulin (Contributor, Author)

@ionelmc, sorry, I do not have real-life examples of scopes. As to nesting, I was trying to mimic unittest.TestCase.subTest() behavior. I remember a painful waste of time with a badly implemented helper for CppUnit that was used for polyline point comparison. Independently of the point index, the error message always contained a line number inside the helper. Proper comparison macros, or something like an explicit invocation of SCOPED_TRACE from gtest, would allow quickly identifying the source of the problem just by looking at the test report. Python is much less susceptible to such problems since tracebacks (and optionally locals) are included in the failure description. Combining a point index as a keyword argument with some message inside a helper for point comparison could still be convenient.
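A hypothetical sketch of such a helper, assuming the check fixture proposed in this PR; the compare_point helper, the Point type, and the data are made up for illustration:

    from collections import namedtuple

    Point = namedtuple("Point", "x y")

    def compare_point(check, index, actual, expected):
        # the point index is attached as a keyword argument and is
        # combined with keyword arguments of any enclosing scope
        with check(point=index):
            with check: assert actual.x == expected.x
            with check: assert actual.y == expected.y

    def test_polyline(check):
        actual = [Point(0, 0), Point(1, 2), Point(2, 5)]
        expected = [Point(0, 0), Point(1, 1), Point(2, 4)]
        with check(polyline="example"):
            for i, (a, e) in enumerate(zip(actual, expected)):
                compare_point(check, i, a, e)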

@maxnikulin (Contributor, Author)

FWIW I use pytest-regressions for your use case:

pytest-regressions is useful, but it addresses a somewhat different problem. My example is not the best one. The description of pytest-regressions correctly addresses the issue that all inconsistencies should be reported, not only the first one.

Sometimes I prefer to have the data in the test code, to grasp everything at a glance without browsing through files scattered over various directories.

Sometimes I feel more confident when some invariants are checked through function calls in addition to plain data fields. Certainly the reference data must obey the invariants, but it is tempting to copy a function's result into the test. pytest-regressions could catch a change of behavior but not the original bug.

@maxnikulin (Contributor, Author)

I think this is out of scope for this plugin: my intent is to have the same characteristics of TestCase.subTest

I would not mind if unittest.TestCase had callable and context manager features for subtests as an equivalent to self.subTest(),

    with self: self.assertEqual(2009, tt.tm_year)

however, a plain assert instead of self.assertEqual is more convenient.

In my opinion, support for unittest.TestCase.subTest() must be in the pytest core, not in some plugin, simply because subTest is part of the standard library. The current way of extending TestCaseFunction is rather a hack.

Notice that unittest.TestCase supports chaining of subTest() scopes even a bit better than the current change set (https://bugs.python.org/issue30664). Do you think the subtests fixture should act in a similar way?
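For reference, a minimal sketch of how nested subTest() scopes chain their parameters in the standard library; the failure header combines the inner message with parameters from all enclosing scopes (the class name is made up for illustration):

    import unittest
    from datetime import datetime

    class TestChained(unittest.TestCase):
        def test_details(self):
            dt = datetime.utcfromtimestamp(1234567890)
            with self.subTest(timestamp=1234567890):
                with self.subTest("date"):
                    # a failure here is reported roughly as
                    # test_details (...) [date] (timestamp=1234567890)
                    self.assertEqual(dt.month, 1)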

I believe the weak point is error reporting, both the stack trace and the progress/summary. I have not yet figured out where I would like to see extension points in the pytest core. Currently I would prefer a short stack trace for subtests by default. Maybe it is worth allowing a test to have multiple exceptions and postponing output to the makereport hook. It would be nice to see msg+kwargs in the progress and summary reports to quickly identify the failed parts.

Actually, the check shortcut is a thin extension of the subtests fixture (provided the required extension points are not blocked); that is why I do not like the idea of providing a third variant of error reporting and an independent implementation of xdist, xfail, fail fast, etc. in my own plugin.

Feel free to just close this pull request, or to adapt the part related to nested scopes if you wish.

@nicoddemus (Member)

In my opinion support of unittest.TestCase.subTest() must be in the pytest core, not in some plugin just because subTest is a part of standard library. The way of extension of TestCaseFunction is rather a hack.

Indeed that's the original plan: to put the functionality into a plugin and eventually merge it into the core. Things have been moving very slowly on that front though, I admit.

But for this reason I don't think we should deviate from what's available in TestCase.subTest, at least for now: the slimmer the plugin ends up, while remaining generally useful, the better the chances of it making it into the pytest core.

So for now I don't think we should add this convenience to it, but I deeply appreciate the interest and your work in putting out this PR!

@nicoddemus closed this Jun 24, 2020