# Replays

Replays are `.zplay` text files that ZC can record and play back to reproduce a game session. All button inputs and RNG seeds are recorded so that ZC can recreate the same playthrough frame-by-frame. Here is a demo.
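The key idea is deterministic playback: applying the same inputs and the same RNG seeds on the same frames produces the same game state. Here is a minimal sketch of that idea in Python; the `FrameRecord` type and the toy `simulate` function are illustrative only, not the actual ZC engine or the real `.zplay` format:

```python
import random
from dataclasses import dataclass

@dataclass
class FrameRecord:
    frame: int
    buttons: set      # which buttons were held on this frame
    rng_seed: int     # seed applied before the frame is simulated

def simulate(recording: list[FrameRecord]) -> int:
    """Replay recorded frames; identical inputs + seeds give identical state."""
    state = 0
    for rec in recording:
        random.seed(rec.rng_seed)              # reseed exactly as during recording
        state += len(rec.buttons) + random.randint(0, 9)
    return state

recording = [FrameRecord(0, {"A"}, 1234), FrameRecord(1, {"A", "Right"}, 99)]
assert simulate(recording) == simulate(recording)  # frame-by-frame deterministic
```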

This system exists primarily to support automated testing. For more on that, see `tests/run_replay_tests.py`.

To record a replay, select the ZC > Replay > Enable recording new saves menu option; every new save file created afterwards will also be recorded. An alert box tells you where the `.zplay` file will be saved. You can save and continue whenever you like without harming the recording process.

At any time, you can play back a replay from the ZC > Replay menu to watch it again, and you can even take manual control whenever you want. This will not modify any existing saved games. Note that you currently aren't able to persist a save file when replaying a `.zplay` from this menu.

Later work on this system may introduce more user-facing features, such as savestates, rewinding, or creating a new game file from a replay.

In the meantime, a great way to contribute to ZC development is to enable the recording feature for your new games and provide us your `.zplay` files. You don't have to finish the game; any amount of playthrough is helpful. The more we have, the better coverage our testing system will have, and the fewer regressions/compatibility bugs there will be!

## Technical details

Replays are implemented in replay.cpp.

The player emits `.zplay.result.txt` files when the replay system is active. This allows external programs (our Python test scripts) to poll the results while the program runs.
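An external script can simply watch for that result file and read it once it appears. The sketch below assumes the file shows up next to the replay and is plain text; the real file name, location, and contents may differ:

```python
import time
from pathlib import Path

def poll_result(path: Path, interval: float = 1.0, timeout: float = 600.0) -> str:
    """Wait for a .zplay.result.txt file to appear, then return its contents."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if path.exists():
            return path.read_text()
        time.sleep(interval)          # player is still running; check again later
    raise TimeoutError(f"no result written to {path}")

# Hypothetical usage; the actual path depends on where the replay is run from.
print(poll_result(Path("classic_1st.zplay.result.txt")))
```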

The replay tests run in CI live in `tests/replays`, and can be run like this: `python3 tests/run_replay_tests.py --filter tests/replays/classic_1st.zplay`. The results are summarized in `.tmp/default/test_results.json`.
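As a rough sketch, a script could read that summary to list which replays failed on the last run. The JSON structure shown here is an assumption; check the file produced by `run_replay_tests.py` for the real schema:

```python
import json
from pathlib import Path

results = json.loads(Path(".tmp/default/test_results.json").read_text())

# Hypothetical structure: a list of per-replay entries with a name and a success flag.
failing = [r["name"] for r in results.get("replays", []) if not r.get("success")]
print("failing replays:", failing or "none")
```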

The `run_test_workflow.py` script can be used to run replays in CI. It can take the results of `run_replay_tests.py` to run just the set of replays that are currently failing.

The `compare_replays.py` script creates an HTML report comparing a baseline replay test run against failing replay test runs.

In CI, when replays fail, the replays are re-run on a working baseline commit to collect snapshots of the frames in question. An HTML report is generated and uploaded to surge.sh, and a link is then posted to Discord.
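The notification step could look something like the following sketch, which posts the report URL to a Discord webhook. The environment variable name, message text, and URL are illustrative, not the project's actual CI code:

```python
import os
import requests

def notify_discord(report_url: str) -> None:
    """Post a link to the generated replay report to a Discord channel webhook."""
    webhook_url = os.environ["DISCORD_WEBHOOK_URL"]   # assumed to be provided by CI secrets
    resp = requests.post(webhook_url, json={"content": f"Replay failures: {report_url}"})
    resp.raise_for_status()

notify_discord("https://example-replay-report.surge.sh")
```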

TODO: there's a lot more to say about how it actually works. replay versions, zplay format, assert mode, snapshots, etc...

## Test coverage

To generate a local coverage report:

- `pip install gcovr`
- `cmake -S . -B build -G 'Ninja Multi-Config'` (skip if you've already configured a Ninja Multi-Config build)
- `cmake --build build --config Coverage -t zplayer`
- `python tests/run_replay_tests.py --build_folder build/Coverage --replay --ci`
- `bash tests/generate_coverage_report.sh`
- `open tests/.coverage/report/index.html`

Coverage reports are hosted at https://armageddongames.github.io/ZQuestClassic/coverage and are updated automatically.