Using pytest-pyvista to improve tests of geovista #429
📰 Custom Issue

pytest-pyvista is a plugin to test PyVista plot outputs.
We can add tests of geovista's plotting using this.
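As a rough illustration of what such a test could look like, here is a minimal sketch. The geovista names (`GeoPlotter`, `add_coastlines`) and the assumption that the pytest-pyvista `verify_image_cache` fixture hooks into `show()` for a `GeoPlotter` the same way it does for a plain pyvista `Plotter` are mine, not something stated in this thread:

```python
import geovista as gv

def test_coastlines(verify_image_cache):
    # verify_image_cache (from pytest-pyvista) compares the rendered frame
    # against a cached baseline image when the plot is shown.
    plotter = gv.GeoPlotter(off_screen=True)
    plotter.add_coastlines()
    plotter.show()
```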
@tkoyama010 Cool. Ultimately I'm aiming to use matplotlib/pytest-mpl#150. This may complement the testing framework you're suggesting?
Nice. It might be a good idea to encourage PyVista to move from
@tkoyama010 We've been using the perceptual image hash approach on SciTools/iris for a few years now, so I'm pretty satisfied that it gives the stability and control to detect image differences and gauge levels of image similarity. I'm keen to get my PR merged on pytest-mpl. I'll take a peek at pytest-pyvista.
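For readers unfamiliar with that approach, below is a minimal sketch of a perceptual-hash comparison using the `imagehash` package; the hash size, file names, and distance tolerance are illustrative, not the values iris actually pins:

```python
from PIL import Image
import imagehash

# Hash the baseline and the freshly rendered image (paths are placeholders).
expected = imagehash.phash(Image.open("baseline.png"), hash_size=16)
actual = imagehash.phash(Image.open("result.png"), hash_size=16)

# Subtracting two hashes gives a Hamming distance: small distances mean the
# images are perceptually similar even if individual pixels differ slightly.
assert expected - actual <= 2  # tolerance is illustrative
```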
Great. Could you point me to where the talk thread is? I am really interested in it.
But I've talked to other
I would expect
I'm not super familiar with pytest-mpl, but these packages look very similar. One problem with using pytest-mpl, even if it becomes more generic, is that it will require a lot of boilerplate and tinkering to get testing to work with pyvista. pytest-pyvista hooks natively into PyVista, so you can do:

```python
def test_my_plot(verify_image_cache):
    pv.Sphere().plot()
```

and it will do the image comparison. In an ideal world, the scientific software field would have a meta-level comparison tool that is agnostic to how the images are generated; each tool would then provide a wrapper for generating those images. But practically and selfishly, I think it benefits the pyvista community more if we make pytest-pyvista more robust. If we can offload parts of that pipeline to pytest-mpl, that would be great; it gets slightly closer to my ideal state above.

pytest-pyvista, in my opinion, has a more flexible interface. It uses a fixture, so it allows you to do more custom things, for example doing multiple image comparisons in one test, or turning image capturing off and back on inside a test.
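For contrast, the decorator-based pytest-mpl workflow looks roughly like the sketch below; the tolerance value is illustrative, and the test is run with `pytest --mpl` after baselines are generated with `--mpl-generate-path`:

```python
import matplotlib.pyplot as plt
import pytest

@pytest.mark.mpl_image_compare(tolerance=2)  # tolerance value is illustrative
def test_my_mpl_plot():
    # pytest-mpl compares the returned figure against a stored baseline image.
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [1, 4, 9])
    return fig
```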
After glancing at your PR in pytest-mpl, I get the sense that it adds the ability to do different kinds of comparison. But it doesn't much touch the meta-level problem I highlighted above? I need to learn more though, so please educate me. I think there is a strong opportunity to replace the homegrown image comparison in pytest-pyvista with a better-supported comparison tool. This will not be a tiny effort in PyVista, as there are a lot of tests in pyvista that generate images with a lot of variance based on architecture, etc.
Can you point to the discussion on making pytest-mpl hook better into other image generation tools? Edit: Sorry for the noise; somehow I missed that it is linked above.
Thank you for your valuable input! @bjlittle, we would appreciate any insight you may have on this.
@MatthewFlamm Seems like you found it and chipped into the conversation on matplotlib/pytest-mpl#201 👍
@MatthewFlamm I'm also going to make time to understand pytest-pyvista better. Also, would you guys be open to extending its support to optionally use perceptual image hashing? I'd happily contribute that functionality if you wanted it.
I am 100% in favor of opening it up for more kinds of image hashing if it doesn't make things too complicated. I slightly worry that we will have to think about maintaining those in pytest-pyvista.

One thing that we'd need to think about is that the success criteria in pytest-pyvista are geared heavily towards the home-grown tool. So it might require a bit of an overhaul of how this is specified, for example if other image comparison tools have different kinds of criteria or multiple criteria. But making it more usable for more packages is the reason we split it out of pyvista in the first place, so I would support this sort of re-work.

To your question about what technique it uses: true to form, we use vtk itself to do the comparison.
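As a rough illustration of that VTK-backed comparison, pyvista exposes a `compare_images` helper; whether pytest-pyvista uses exactly this code path internally is my assumption, and the threshold below is illustrative:

```python
import pyvista as pv

# Two off-screen renderings standing in for "baseline" and "current".
pl1 = pv.Plotter(off_screen=True)
pl1.add_mesh(pv.Sphere())

pl2 = pv.Plotter(off_screen=True)
pl2.add_mesh(pv.Sphere(), show_edges=True)

# compare_images returns an error metric computed via VTK's image differencing;
# the acceptable value below is illustrative, not pytest-pyvista's default.
error = pv.compare_images(pl1, pl2)
assert error < 500
```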
@MatthewFlamm Yes, it's interesting. Also, take a look at the issue matplotlib/pytest-mpl#146 where I argued the case for it. With regards to metadata, if that's what you mean, I believe that perceptual image hash solves this problem. But maybe I'm misunderstanding you, sorry.
It is simple to apply the fixture to all the tests in a module, test suite, test class, etc. I'm not sure that it is so easy with a decorator. One interesting thing we can do with a fixture is that it can be interacted with inside the test itself in a flexible way. For example, we do something like this in our tests:

```python
def test_something(verify_image_cache):
    # Allow the code to run, but skip the image compare check for newer vtk versions
    verify_image_cache.skip = pv.vtk_version_info > (9, 1)
```

This is a really compelling type of usage. You might be able to do the same in a decorator for this exact case, but imagine that you have two parts to this test and only the first part depends on the vtk version. Or what if you want to use one image regression threshold for the first image in a test, but a different threshold for the second image in the same test? All of these things are straightforward to do with the fixture approach.
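A hedged sketch of that second scenario follows; the `error_value` attribute name is an assumption about the fixture's API rather than something confirmed in this thread:

```python
import pyvista as pv

def test_two_plots(verify_image_cache):
    # Assumed attribute: error_value sets the allowed comparison error.
    verify_image_cache.error_value = 500
    pv.Sphere().plot()

    # Loosen the threshold for a second, noisier image in the same test.
    verify_image_cache.error_value = 1000
    pv.Cone().plot()
```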
On the question of architecture, we do not save multiple images for different architectures. I think this is due to the overhead of doing so, but the current structure of pytest-pyvista would allow for it (see below). We have select tests where we entirely skip the image check on a different architecture. In other cases we specify different success criteria to account for differences in architecture or vtk version.

This is another reason that the fixture approach is very nice. You can create a wrapper fixture that overrides the baseline image path depending on the architecture and is applied automatically with `autouse=True`.

Edit: Pseudo-code to illustrate this better:

```python
@pytest.fixture(autouse=True)
def verify_image_cache_arch(verify_image_cache):
    # sys_arch stands in for however the platform is detected, e.g. platform.system()
    if sys_arch == "windows":
        verify_image_cache.cache_dir = "path/to/windows_baseline_images/"
    elif sys_arch == "linux":
        verify_image_cache.cache_dir = "path/to/linux_baseline_images/"
    yield verify_image_cache
```

Edit2: You can also appreciate how you could implement this sort of split in baseline images only for certain classes of tests in a downstream package too. It is really flexible this way.
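For example, a downstream package could scope such an override to a single test class; a rough sketch with hypothetical paths and names:

```python
import pytest

class TestGlobePlots:
    # An autouse fixture defined on the class applies only to tests in this class;
    # the baseline directory below is a hypothetical path.
    @pytest.fixture(autouse=True)
    def _globe_baselines(self, verify_image_cache):
        verify_image_cache.cache_dir = "tests/baseline/globe/"
        yield verify_image_cache

    def test_globe(self, verify_image_cache):
        ...
```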
Awesome, thanks @MatthewFlamm for taking the time to clarify and explain, much appreciated 🍻