Our integration tests include a mechanism to check an actual execution trace (method calls, params, return values, events, etc.) against an expected one. The trace is often not the primary goal of a test, but it provides a way to check internal steps of a multi-actor execution that are difficult to verify by other means.
These expectations are hand-crafted. Writing them takes significant time and adds a lot of noise to the test files. As a result, some integration tests have no trace checks at all, and optional fields in the expectations are frequently left unset.
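For context, a hand-crafted expectation looks roughly like the self-contained sketch below. The type and field names here are hypothetical stand-ins, not this repository's actual test helpers; the point is the pattern of optional fields that are easy to leave unset.

```rust
// Purely illustrative sketch of a hand-crafted trace expectation; the names
// and shapes are hypothetical, not this repo's actual test-vm API.

#[derive(Debug, Default, PartialEq)]
struct Invocation {
    to: u64,                 // callee actor ID
    method: u64,             // method number
    params: Option<Vec<u8>>, // serialized params
    ret: Option<Vec<u8>>,    // serialized return value
}

#[derive(Default)]
struct ExpectInvocation {
    to: u64,
    method: u64,
    // Optional fields: leaving them as None is the path of least resistance,
    // which is the coverage gap described above.
    params: Option<Vec<u8>>,
    ret: Option<Vec<u8>>,
}

impl ExpectInvocation {
    fn matches(&self, actual: &Invocation) {
        assert_eq!(self.to, actual.to);
        assert_eq!(self.method, actual.method);
        if let Some(p) = &self.params {
            assert_eq!(Some(p), actual.params.as_ref());
        }
        if let Some(r) = &self.ret {
            assert_eq!(Some(r), actual.ret.as_ref());
        }
    }
}

fn main() {
    // An invocation captured from the test VM during the run.
    let actual = Invocation { to: 100, method: 2, params: Some(vec![1, 2]), ret: Some(vec![3]) };
    // The hand-written expectation: params/ret omitted, so they are never checked.
    ExpectInvocation { to: 100, method: 2, ..Default::default() }.matches(&actual);
}
```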
A better way of doing this would be to generate the expected traces by running the test code and check those expectations in as files. This would mean:
- all tests have expectations that are checked
- all fields in all expectations can be populated
- authors & reviewers gain complete transparency into the internal execution
- whenever anything changes, a diff in expectations is created and subject to code review
- no additional effort is required when writing new tests
After the initial work to build whatever generates the trace files, this would provide better coverage while requiring less ongoing maintenance effort.
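As a rough sketch of what the generation step could look like (the `tests/traces` directory, the `TRACE_OVERWRITE` variable, and JSON serialization are assumptions rather than a settled design), each test would serialize its captured trace and compare it against a checked-in golden file, with an opt-in mode that rewrites the file instead of failing:

```rust
// Rough sketch of a golden-file trace check; the directory layout, the
// TRACE_OVERWRITE env var, and the JSON format are assumptions, not a
// settled design for this repository.
use std::fs;
use std::path::PathBuf;

/// Compare a serialized trace against a checked-in expectation file.
/// Run with TRACE_OVERWRITE=1 to (re)generate the file instead of failing,
/// so authors never hand-craft expectations and reviewers see a plain diff.
fn assert_trace_matches(test_name: &str, actual_trace_json: &str) {
    let path = PathBuf::from("tests/traces").join(format!("{test_name}.json"));

    if std::env::var("TRACE_OVERWRITE").is_ok() {
        fs::create_dir_all(path.parent().unwrap()).expect("create trace dir");
        fs::write(&path, actual_trace_json).expect("write trace file");
        return;
    }

    let expected = fs::read_to_string(&path)
        .unwrap_or_else(|_| panic!("missing trace file {path:?}; run with TRACE_OVERWRITE=1"));
    assert_eq!(expected, actual_trace_json, "trace changed; review the diff or regenerate it");
}
```

In a test this would reduce to a single call after driving the scenario, passing the serialized trace captured from the test VM; regenerating with the overwrite flag turns any behavioural change into an ordinary file diff for code review.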
I'd really like this to be a thing. Currently it's far too easy to set these fields to None, so the expedient path is to just opt out of anything that doesn't feel relevant, but that decreases coverage. I'm noticing that pattern in #1540 for events, which would be great to have if they were easy enough to generate, sanity-check, and backfill.