Test fixtures are not cleaned up after a test run and may be stale on subsequent runs #391
Thanks for sharing your thoughts!
The problem I had specifically is that when doing A/B testing I'd toggle between two different versions of a fixture script (let's say CRC=1 and CRC=2). Because the output is always preserved, and because the CRC value never changed beyond 1 or 2, the output got reused for the next test run (creating either a false success or a false failure, depending).
My solution was to manually remove the
My proposal is to have
For the archival portion I think it's pretty well agreed that archiving the fixture output should be gated on an environment variable (e.g.
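For illustration, here is a minimal Rust sketch of that failure mode. The helper names and the use of std's `DefaultHasher` in place of the real CRC are assumptions, not the project's actual code: output is keyed by a digest of the script, and an existing output directory short-circuits the script, even if that directory was left behind by a failed run.

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::path::{Path, PathBuf};

/// Digest of the script contents; a stand-in for the CRC discussed above.
fn script_digest(script: &Path) -> std::io::Result<u64> {
    let bytes = fs::read(script)?;
    let mut hasher = DefaultHasher::new();
    bytes.hash(&mut hasher);
    Ok(hasher.finish())
}

/// Output is keyed by the digest: if the directory already exists, the script
/// is skipped -- which is exactly how a directory left behind by a failed or
/// partial run gets reused on the next test run.
fn fixture_output_dir(script: &Path, root: &Path) -> std::io::Result<PathBuf> {
    let dir = root.join(format!("fixture-{:016x}", script_digest(script)?));
    if !dir.exists() {
        fs::create_dir_all(&dir)?;
        // ... run the fixture script here, writing its output into `dir` ...
    }
    Ok(dir)
}
```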
I agree, and don't understand how a script can change without changing its CRC. In other words, CRC=1 refers to one state and CRC=2 refers to another; there should be no collisions, and the state should always be the desired one. Apparently I am still missing something. The only reason to remove the
This can't happen; it's paramount that tests run fast, and no work should be done unless needed. The state produced by scripts is immutable. When state needs to be changed by a fixture, there is
Right now an archive is created each time a fixture is created as the script is run, and all archives are currently checked in using
I hope you can bear with me while I try to understand; I see potential improvements are possible but fail to see the issue the system has, unfortunately.
Ah, I was toggling between two different versions of a fixture script: version A had a CRC of 1 and version B had a CRC of 2. There was no CRC collision, but because the output is not removed (on failure), when I went from version B back to version A the script was not re-run. IMO this is more likely to be an issue when you're actively developing a fixture script. I suggested including the fixture script's mtime as part of the CRC input because that would give an easy way to force a fixture script to be re-run. But more broadly, cleaning up after the fixture script seems like a good idea to me.
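A hedged sketch of the mtime idea, again using std's `DefaultHasher` as a stand-in for the real CRC and a hypothetical helper name:

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::path::Path;
use std::time::UNIX_EPOCH;

/// Hypothetical digest that folds the script's mtime into the hash input, so
/// merely touching the script invalidates previously generated output.
fn script_digest_with_mtime(script: &Path) -> std::io::Result<u64> {
    let bytes = fs::read(script)?;
    let mtime_secs = fs::metadata(script)?
        .modified()?
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_secs();
    let mut hasher = DefaultHasher::new();
    bytes.hash(&mut hasher);
    mtime_secs.hash(&mut hasher);
    Ok(hasher.finish())
}
```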
Yeah, the issue I had is that I was going back and forth between versions of the script I'd used before. What I'd like to see is the ability to re-run the script with no intervention. In this case, leaving the output in place on failure creates the problem I've run into. Using the mtime of the script would alleviate that (i.e. run the script if it's not been run before, or if the script's mtime is newer than that of the output directory).
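As a sketch (a hypothetical helper, not existing code), that decision could look like this:

```rust
use std::fs;
use std::path::Path;

/// Hypothetical staleness check: run the fixture script if its output has
/// never been produced, or if the script was modified more recently than the
/// output directory.
fn needs_rerun(script: &Path, output_dir: &Path) -> std::io::Result<bool> {
    if !output_dir.exists() {
        return Ok(true);
    }
    let script_mtime = fs::metadata(script)?.modified()?;
    let output_mtime = fs::metadata(output_dir)?.modified()?;
    Ok(script_mtime > output_mtime)
}
```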
Per #382 the default behavior with test fixtures is:
Potentially more desirable behavior is:
I'm thinking the default behavior should be to leverage `TempDir` with a check at the end to see if an environment variable is set. If the variable is set, archive the fixture directory before the `TempDir` object goes out of scope. Alternatively, the existing logic (hashing the contents of the fixture script and placing the output in a directory within the repo) could be retained.
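A rough sketch of that default, assuming the `tempfile` crate's `TempDir`; the `ARCHIVE_FIXTURES` variable name and the `archive_fixture` helper are placeholders, not the project's actual API:

```rust
use std::path::Path;

use tempfile::TempDir; // assumes the `tempfile` crate is a dev-dependency

/// Placeholder archive step; the real archiving mechanism is not shown here.
fn archive_fixture(dir: &Path) -> std::io::Result<()> {
    println!("would archive fixture output at {}", dir.display());
    Ok(())
}

fn run_fixture_test() -> std::io::Result<()> {
    // The fixture output lives in a TempDir, so it is removed automatically
    // when the value is dropped at the end of the test, pass or fail.
    let fixture = TempDir::new()?;

    // ... run the fixture script into `fixture.path()` and exercise the test ...

    // Optionally keep an archive, gated on an environment variable, before the
    // TempDir goes out of scope and its contents are deleted.
    if std::env::var_os("ARCHIVE_FIXTURES").is_some() {
        archive_fixture(fixture.path())?;
    }
    Ok(())
}
```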
With either implementation the question is whether it's more desirable to move towards putting the test logic into closures (for more automatic cleanup) or to add an explicit check at the end of each test.
Thoughts?