There is an older issue that goes in a similar direction: #75, with the final word there being that this does not fall within the scope of this plugin.
I am not a frequent pytest user, but it seems to me that the following is both a valid and common use case that should be supported by this plugin (rather than by yet another randomization mechanism):
Let's say I have a reference implementation of a function `f` and another implementation `g` (maybe using a more optimized algorithm or similar), and I want to assert that its behavior is the same as `f`'s. So I could write a test like:
```python
@pytest.mark.parametrize("a", [1, 2, 3])
@pytest.mark.parametrize("b", [1, 2, 3])
def test_g_implements_f(a, b):
    assert g(a, b) == f(a, b)
```
All well and good, but what if the space of valid parameter combinations is very large and `f` is blackbox-y enough that I can't enumerate all of its corner cases? Then I would like to randomly sample the entire parameter space. That probably goes beyond the scope of pytest-randomly, so I would write my own decorator along the lines of:
```python
@parametrize_random(
    [
        ("a", list(range(1000))),
        ("b", list(range(1000))),
    ],
    samples=100,
)
def test_g_implements_f(a, b):
    assert g(a, b) == f(a, b)
```
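For concreteness, a minimal sketch of how such a decorator could be implemented (`parametrize_random` is my own invention, not an existing API, and the explicit `seed` argument is a stand-in for the pytest-randomly integration discussed below):

```python
import random

import pytest


def parametrize_random(params, samples, seed=0):
    """Parametrize a test with `samples` random draws from the
    cartesian product of the given parameter spaces.

    `params` is a list of (name, values) pairs. The explicit `seed`
    keeps collection deterministic across reruns.
    """
    rng = random.Random(seed)
    names = ",".join(name for name, _ in params)
    combos = [
        tuple(rng.choice(values) for _, values in params)
        for _ in range(samples)
    ]
    return pytest.mark.parametrize(names, combos)
```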
That is better than, e.g., generating random parameters inside a `for` loop within the test, because then all parametrizations can run independently. Now I would most likely want to use, inside `parametrize_random`, the same random seeding mechanism that pytest-randomly uses to seed at the start of every test, and there seems to be no easy way to do that. Should there be, or is there a better solution to this?
There may be a solution. Try seeing what happens if your `parametrize_random` decorator uses the plain `random.choice` function and you extend pytest-randomly with a hook that runs early, perhaps before test collection starts, to call its reseed logic. I'd be happy to review a PR - please include a test or two, a changelog note, and a docs update.
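A rough, untested sketch of that idea as a `conftest.py` workaround; the `randomly_seed` option name is inferred from pytest-randomly's `--randomly-seed` flag, and the value's type at this point may vary by plugin version, so verify against the plugin source:

```python
# conftest.py
import random


def pytest_configure(config):
    # Assumption: pytest-randomly registers --randomly-seed as the
    # "randomly_seed" option. Its raw value may be "default", "last",
    # or an integer, depending on when the plugin resolves it.
    seed = config.getoption("randomly_seed", default=None)
    if isinstance(seed, int):
        # Seed the module-level RNG before collection so that
        # random.choice calls inside parametrize_random are reproducible.
        random.seed(seed)
    elif isinstance(seed, str) and seed.isdigit():
        random.seed(int(seed))
```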
By the way, for sampling a parameter space, it's much better to use a tool like Hypothesis. See the many PyCon talks for more info.
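For instance, the test above could look something like this with Hypothesis, which draws its own examples and shrinks failures, so no custom decorator or seed plumbing is needed (the ranges mirror the `range(1000)` spaces from the issue):

```python
from hypothesis import given, strategies as st


@given(
    a=st.integers(min_value=0, max_value=999),
    b=st.integers(min_value=0, max_value=999),
)
def test_g_implements_f(a, b):
    assert g(a, b) == f(a, b)
```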