Results regression in system tests #594
Without changing anything from our side (I think), the Nutils and SU2-FEniCS cases have been fixed. The elastic-tube-1d python-python remains broken: https://github.com/precice/precice/actions/runs/11869063712/job/33078975390?pr=2052. Maybe a Python dependency?
Could we compare which NumPy version was used for the reference data and which one is used now? Is this in our data?
No, unfortunately this is not yet in our data, but we could implement a way to record it.
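As a rough illustration of what such recording could look like, here is a minimal sketch (not existing tooling; the file name `reference-metadata.json` and the package list are assumptions):

```python
import json
from importlib.metadata import PackageNotFoundError, version

def write_dependency_metadata(path="reference-metadata.json"):
    # Packages assumed relevant to the python-python cases.
    packages = ("numpy", "pyprecice", "fieldcompare")
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = version(pkg)
        except PackageNotFoundError:
            versions[pkg] = None  # not installed in this environment
    # Store the versions next to the reference results.
    with open(path, "w") as f:
        json.dump(versions, f, indent=2)
```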
I am now investigating the results themselves. fieldcompare also writes diff files, but these are the absolute differences. We currently use a relative tolerance of 3e-7 (see #393).
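For reference, relative errors can be recovered from the absolute diff files when the reference fields are at hand. A minimal NumPy sketch (the exact criterion that fieldcompare applies internally may differ):

```python
import numpy as np

def relative_error(reference: np.ndarray, abs_diff: np.ndarray) -> np.ndarray:
    # Guard against division by zero where the reference value vanishes.
    denominator = np.maximum(np.abs(reference), np.finfo(float).tiny)
    return np.abs(abs_diff) / denominator

# Example check against the current threshold of 3e-7:
# relative_error(reference_field, diff_field).max() < 3e-7
```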
Comparing runs on the tests VM and on my laptop
On both systems, I have run a …
This already hints that our thresholds may be too tight (even though I cannot definitively explain why).
Elastic tube 1d
Opening the file groups related to the …
At least for the CrossSectionLength, one could argue that the diff is small, but the values themselves are also small, making all this more prone to floating-point errors.
Trying to reproduce the reference results
Running the following test locally (again after a …):
(using the versions from …)
The same tests are failing with regressions, so it is either something in the python-bindings/tutorials (which I think I have previously excluded) or something outside our control.
How to move on
Do we now accept this as the new baseline, or do we keep digging? Do we already relax the tolerances? Should we maybe introduce a different tolerance per test case?
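If we went for per-test-case tolerances, it could be as simple as a lookup table with a global fallback. A hypothetical sketch; the case name and override value are made up:

```python
# Global default, matching the current threshold (see #393).
DEFAULT_RTOL = 3e-7

# Hypothetical per-case overrides; the entry below is illustrative, not an agreed value.
CASE_RTOL = {
    "elastic-tube-1d/python-python": 1e-6,
}

def tolerance_for(case: str) -> float:
    return CASE_RTOL.get(case, DEFAULT_RTOL)
```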
This statement is wrong. Floating-point errors have nothing to do with the size of the values as long as we do not underflow or overflow. Let's always look at relative errors. Both are around 1e-7.
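A quick way to see why scale does not matter: the spacing between representable doubles grows with the magnitude of the value, so the achievable relative accuracy is roughly the same at any scale.

```python
import numpy as np

for x in (1e-6, 1.0, 1e6):
    gap = np.spacing(x)  # distance from x to the next representable double
    # The relative spacing is ~2.2e-16 in all three cases.
    print(f"x={x:.0e}  spacing={gap:.3e}  relative spacing={gap / x:.3e}")
```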
Concerning tolerances, I think we have two choices: …
I find the first option better. When running into a regression, we should always try to find out why, but this will not always be possible. It would be good to freeze as much as possible outside our control (OpenFOAM, NumPy, ...) and to test specifically when updating such dependencies. Could we somehow try to run the python-python elastic tube 1D with different NumPy versions?
All Python-related versions are now defined in the … Besides the specific versions, we currently have no mechanism to propagate Python versions into the venv of each test. Even the pyprecice version is essentially ignored, see #584. We get everything else from APT, so versions should more or less be frozen as long as we stick to the same Ubuntu version.
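One way this could look, as a rough sketch (no such helper exists yet; the NumPy versions and the solver script are assumptions, and both participants would still need to run in parallel):

```python
import pathlib
import subprocess
import venv

for np_version in ("1.24.4", "1.26.4", "2.0.2"):  # assumed versions to probe
    env_dir = pathlib.Path(f".venv-numpy-{np_version}")
    venv.create(env_dir, with_pip=True)
    pip = env_dir / "bin" / "pip"
    python = env_dir / "bin" / "python"
    # Pin NumPy per environment; pyprecice assumes preCICE itself is installed.
    subprocess.run([str(pip), "install", f"numpy=={np_version}", "pyprecice"], check=True)
    # Launch one participant with this environment; the script path is an
    # assumption, and the matching second participant must run alongside it.
    subprocess.run([str(python), "FluidSolver.py"], check=True)
```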
I compared different NumPy versions and different versions of the Python bindings. I don't see any other dependencies of the elastic tube 1D Python case that could have changed. The tutorial itself did not change recently, either. LAPACK is fixed through the OS. I am puzzled as well.
I would suggest defining your reproduction above as the new baseline.
I think we should fix them. We could also add a …
As discussed in the coding days (Nov 2024), we have a strange regression in some results compared to the reference results. See original discussion in precice/precice#2131.
I have excluded changes in the preCICE repository, as I get the same issue with preCICE v3.1.2 (results were actually produced with v3.1.1, but this should not matter).
Based on where we have regressions, I suspect something related either to the Python bindings or to the Python dependencies.
Overview
Results regression ❌: …
Works ✔️: …