
Add mechanism for randomizing test parametrizations #649

Open
Time0o opened this issue Oct 24, 2024 · 1 comment

Comments


Time0o commented Oct 24, 2024

Description

There is an older issue that goes in a similar direction, #75, with the final word being that this does not fall within the scope of this plugin.

I am not a frequent pytest user, but it seems to me that the following is both a valid and common use case that should be supported by this plugin (rather than by yet another randomization mechanism):

Let's say I have a reference implementation of a function f and another implementation g (maybe using a more optimized algorithm or similar), and I want to assert that g's behavior is the same as f's. So I could write a test like:

import pytest

@pytest.mark.parametrize("a", [1, 2, 3])
@pytest.mark.parametrize("b", [1, 2, 3])
def test_g_implements_f(a, b):
    assert g(a, b) == f(a, b)

All well and good, but what if the space of valid parameter combinations is very large and f is blackbox-y enough that I can't say exactly what all of its corner cases are? Then I would like to randomly sample the entire parameter space. That probably goes beyond the scope of pytest-randomly. So I would write my own decorator along the lines of:

@parametrize_random(
    [
        ("a", list(range(1000))),
        ("b", list(range(1000))),
    ],
    samples=100,
)
def test_g_implements_f(a, b):
    assert g(a, b) == f(a, b)

That is better than, e.g., generating random parameters inside a for loop in the test body, because this way each parametrization runs as an independent test. Now I would most likely want parametrize_random to use the same random seeding mechanism that pytest-randomly uses to seed at the start of every test, and there seems to be no easy way to do that. Should there be, or is there a better solution to this?
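For concreteness, here is a minimal sketch of what such a parametrize_random decorator could look like. This is a hypothetical helper, not an existing pytest or pytest-randomly API, and f and g below are toy stand-ins; the idea is to sample the Cartesian product of the parameter lists and hand the sample to pytest.mark.parametrize, so each sampled combination still runs as an independent test:

```python
# Hypothetical sketch of parametrize_random (not part of pytest or
# pytest-randomly): sample the full parameter space, then delegate to
# pytest.mark.parametrize so each sample is an independent test case.
import itertools
import random

import pytest


def parametrize_random(params, samples, seed=0):
    # params is a list of (name, values) pairs, as in the example above.
    names = ",".join(name for name, _ in params)
    space = list(itertools.product(*(values for _, values in params)))
    rng = random.Random(seed)  # explicit seed keeps the sample reproducible
    chosen = rng.sample(space, min(samples, len(space)))
    return pytest.mark.parametrize(names, chosen)


def f(a, b):  # toy reference implementation
    return (a + b) * 2


def g(a, b):  # toy "optimized" implementation under test
    return 2 * a + 2 * b


@parametrize_random(
    [
        ("a", list(range(1000))),
        ("b", list(range(1000))),
    ],
    samples=100,
)
def test_g_implements_f(a, b):
    assert g(a, b) == f(a, b)
```

The open question in this issue is how the `seed` argument here could be tied to pytest-randomly's per-test seeding instead of being fixed by hand.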

@adamchainz
Member

There may be a solution. Try seeing what happens if your parametrize_random decorator uses the plain random functions, like random.choice, and you extend pytest-randomly with a hook that runs early, perhaps before test collection starts, to call its "reseed" method. I'd be happy to review a PR; please include a test or two, a changelog note, and a docs update.

By the way, for sampling a parameter space, it's much better to use a tool like Hypothesis. See the many PyCon talks for more info.
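To make the Hypothesis suggestion concrete, the same property can be written with its @given decorator, which generates inputs, shrinks failures, and handles seeding on its own. This assumes the hypothesis package is installed; f and g are again toy stand-ins for the reference and optimized implementations:

```python
# Property-based version of the same check using Hypothesis.
from hypothesis import given, strategies as st


def f(a, b):  # toy reference implementation
    return a * b + a


def g(a, b):  # toy "optimized" implementation under test
    return a * (b + 1)


@given(
    st.integers(min_value=0, max_value=999),
    st.integers(min_value=0, max_value=999),
)
def test_g_implements_f(a, b):
    assert g(a, b) == f(a, b)
```

Unlike fixed-size random sampling, Hypothesis will also shrink any counterexample it finds down to a minimal failing input.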
