Create Savanna unittests modeled after the fast testnet wave tests. #380
We will use the savanna_cluster class to implement the following test scenarios derived from this document.

Test setup

Common prerequisites:
- tests run on a cluster of four nodes (A, B, C, D) provided by the savanna_cluster functionality
- lib advances on all nodes with every produce_block(s) call

Definitions for unit tests
- shutdown A: means that close() is called for A's tester, which does control.reset(); chain_transactions.clear();
- restart A: means that open() is called for A's tester, which restarts the node using the existing state (see the sketch after this list)
- fsi: finalizer safety information (finalizers/safety.dat)
- head: the chain head for a node, queried from the controller and retrieved within a test using tester::head()
- lib: the last irreversible block id for a node, as reported by the irreversible_block signal, and retrieved within a test using tester::lib_id
- state: memory-mapped file holding the chainbase state (state/shared_memory.bin, in the state directory)
- blocks log: files holding irreversible blocks (files blocks/blocks.log and blocks/blocks.index)
- reversible data: files holding reversible blocks data (located in blocks/reversible)
- finality violation: defined as the existence of 2 final blocks where neither is an ancestor of the other.
- confirm a finality violation: finality violations can be confirmed by showing that the libs of two nodes are in conflict.
- confirm no finality violation: the absence of a finality violation can be established if, on a reconnected network, heads can be propagated without unlinkable blocks.
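To make the shutdown/restart and lib/head mechanics above concrete, here is a minimal sketch of the kind of check these tests can perform. The fixture name, the node(i) accessor, and the assumption that a reopened node syncs back up through the cluster are illustrative guesses, not the confirmed savanna_cluster API.

```cpp
// Minimal sketch (hypothetical fixture and accessor names) of the shutdown/restart
// mechanics defined above.
#include <boost/test/unit_test.hpp>
#include "savanna_cluster.hpp"   // assumed location of the savanna_cluster fixture

BOOST_FIXTURE_TEST_CASE(shutdown_restart_sketch, savanna_cluster) {
   auto& A = node(0);                      // hypothetical accessor for node A's tester
   auto& C = node(2);                      // hypothetical accessor for node C's tester

   auto lib_before = C.lib_id;             // lib, as reported by the irreversible_block signal
   C.close();                              // "shutdown C": control.reset(); chain_transactions.clear();

   A.produce_blocks(4);                    // with 3 of 4 finalizers still voting, lib keeps advancing
   C.open();                               // "restart C": reopen the node from its existing state

   A.produce_blocks(2);                    // assumes the cluster propagates blocks to the reopened node
   BOOST_REQUIRE(C.lib_id != lib_before);           // lib moved past the shutdown point
   BOOST_REQUIRE(C.head().id() == A.head().id());   // assumes head() returns a handle exposing id()
}
```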
unit tests: Disaster recovery
Single finalizer goes down
[sd0] recovery when nodes go down
[sd1] Recover a killed node with old finalizer safety info
[sd2] Recover a killed node with deleted finalizer safety info
[sd3] Recover a killed node while retaining up to date finalizer safety info
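A rough sketch of the fsi manipulation implied by [sd1] and [sd2]: while the node is shut down, restore an old copy of finalizers/safety.dat (or delete it), then restart. The node_data_dir parameter is a stand-in for however the test obtains the node's data directory; it is not a claim about the real tester API.

```cpp
// Sketch of restarting a node with stale or deleted finalizer safety info (fsi).
// node_data_dir is a hypothetical parameter: the directory containing finalizers/safety.dat.
#include <filesystem>
#include <eosio/testing/tester.hpp>

void restart_with_modified_fsi(eosio::testing::tester& node,
                               const std::filesystem::path& node_data_dir,
                               const std::filesystem::path& old_fsi_copy,
                               bool delete_fsi) {
   namespace fs = std::filesystem;
   node.close();                                             // shutdown: controller is reset
   const fs::path fsi = node_data_dir / "finalizers" / "safety.dat";
   if (delete_fsi)
      fs::remove(fsi);                                       // [sd2]: deleted finalizer safety info
   else
      fs::copy_file(old_fsi_copy, fsi,
                    fs::copy_options::overwrite_existing);   // [sd1]: old finalizer safety info
   node.open();                                              // restart from the existing state
}
```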
All but one of the finalizer nodes go down
Tests are similar to those above, except that C is replaced by the set { B, C, D }, and lib stops advancing when { B, C, D } are shutdown (see the sketch after this list).
[md0] recovery when nodes go down
[md1] Recover a killed node with old finalizer safety info
[md2] Recover a killed node with deleted finalizer safety info
[md3] Recover a killed node while retaining up to date finalizer safety info
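The "lib stops advancing" condition could be asserted roughly as follows; node(i) is again a hypothetical accessor into the savanna_cluster fixture.

```cpp
// Sketch: with B, C and D shut down, only one of the four finalizers can vote,
// so quorum cannot be reached and lib must not advance while A keeps producing.
BOOST_FIXTURE_TEST_CASE(lib_halts_without_quorum_sketch, savanna_cluster) {
   auto& A = node(0);
   auto& B = node(1);
   auto& C = node(2);
   auto& D = node(3);

   B.close(); C.close(); D.close();          // shutdown { B, C, D }

   auto lib_before = A.lib_id;
   A.produce_blocks(10);                     // blocks are produced but cannot gather enough votes
   BOOST_REQUIRE(A.lib_id == lib_before);    // lib stops advancing
}
```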
All nodes are shutdown with reversible blocks lost
[rv0] nodes shutdown with reversible blocks lost
- note the lib block ID (lib_id) and the head block ID (h_id)
- the snapshot block is lib_id
- lib_id's child, which was lost
- lib does not advance past lib_id (because validators are locked on a reversible block which has been lost, so they cannot vote anymore, since the claim on the lib block is just copied forward and will always be on a block with a timestamp earlier than that of the lock block in the fsi)

Finality violation
The goal is to identify a finality violation, defined as the existence of 2 final blocks where neither is an ancestor of the other.
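In code, "confirm a finality violation" could be checked roughly as below: take the libs of two nodes and verify that neither is an ancestor of the other. The controller calls used here (fetch_block_by_number, num_from_id, calculate_id) are believed to exist, but the helper as a whole is only a sketch.

```cpp
// Sketch: a finality violation exists when neither node's lib block is an ancestor
// of the other node's lib block, i.e. the two final blocks are on divergent forks.
#include <eosio/testing/tester.hpp>

bool libs_conflict(eosio::testing::tester& a, eosio::testing::tester& b) {
   using namespace eosio::chain;
   block_id_type lib_a = a.lib_id;
   block_id_type lib_b = b.lib_id;

   // Look up each lib's block number in the other node's chain; if the block at that
   // height is missing or has a different id, that lib is not an ancestor of the other.
   signed_block_ptr a_in_b = b.control->fetch_block_by_number(block_header::num_from_id(lib_a));
   signed_block_ptr b_in_a = a.control->fetch_block_by_number(block_header::num_from_id(lib_b));
   const bool a_is_ancestor_of_b = a_in_b && a_in_b->calculate_id() == lib_a;
   const bool b_is_ancestor_of_a = b_in_a && b_in_a->calculate_id() == lib_b;
   return !a_is_ancestor_of_b && !b_is_ancestor_of_a;   // conflicting libs => finality violation
}
```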
[fv1] Validate network can tolerate 1/4 fault
[fv2] split network when one node holds two finalizer keys
- produce blocks (for longer than 5 * _block_interval_us), verify lib advances (because quorum is met, thanks to D voting with B's key in addition to its own)
[fv3] restore split network when one node holds two finalizer keys
- produce blocks (for longer than 5 * _block_interval_us)
[fv4] Validate network cannot tolerate 2/4 fault
- produce blocks (for longer than 5 * _block_interval_us), verify lib advances (because together C and D hold three keys)
- produce blocks (for longer than 5 * _block_interval_us), verify that lib advances on A and B, and that unlinkable blocks are received on C and D

unit tests: Savanna transition testing
For these tests, the cluster will be started in a pre-savanna configuration, but with finalizer keys initialized and set to one per node as before.
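Each test below starts the transition by pushing the setfinalizer action on node A with a one-vote-per-node policy; a hedged sketch of that step is shown here. The set_finalizers helper, the account names, and the node accessor are assumptions for illustration, not the confirmed interface.

```cpp
// Sketch of the transition trigger: from node A, set a finalizer policy in which each
// of the four nodes has one vote. set_finalizers(...) and the account names are assumed.
BOOST_FIXTURE_TEST_CASE(transition_start_sketch, savanna_cluster) {
   using namespace eosio::chain::literals;
   auto& A = node(0);                                    // hypothetical accessor for node A

   std::vector<eosio::chain::account_name> finalizers =
      { "nodea"_n, "nodeb"_n, "nodec"_n, "noded"_n };    // one finalizer (one vote) per node
   A.set_finalizers(finalizers);                         // pushes the setfinalizer action from A

   // Drive the chain through the transition; once it completes,
   // lib advances under Savanna finality.
   A.produce_blocks(10);
}
```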
[st0] straightforward transition
- push the setfinalizer action on node A, with a policy where each node has one vote
[st1] transition with split network before critical block
- push the setfinalizer action on node A, with a policy where each node has one vote
- create the snapshot_00 and snapshot_01 files, and preserve them
[st2] restart from Snapshot at beginning of transition while preserving fsi
- push the setfinalizer action on node A, with a policy where each node has one vote
[st3] restart from Snapshot at end of transition while preserving fsi
- push the setfinalizer action on node A, with a policy where each node has one vote
[st4] restart from Snapshot at beginning of transition without preserving fsi
Very similar to [st2]; the only difference is that the fsi are removed before restarting the nodes.
- push the setfinalizer action on node A, with a policy where each node has one vote

unit tests: Finalizer policy testing
[fp0] policy change
[fp1] policy change including weight and threshold
[fp2] policy change: reduce threshold, replace all keys
[fp3] policy change: restart from snapshot
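For the policy-change tests, here is a hedged sketch of what [fp1] (changing weights and threshold) might exercise. The finalizer_policy_input structure, its fields, and the set_finalizers(input) overload are assumed shapes, shown only to make the intent of the test concrete.

```cpp
// Sketch for [fp1]: propose a new finalizer policy with adjusted weights and threshold
// from node A, then produce blocks until the new policy becomes pending and then active.
// The policy-input structure and helper used here are assumptions, not the confirmed API.
BOOST_FIXTURE_TEST_CASE(policy_change_sketch, savanna_cluster) {
   using namespace eosio::chain::literals;
   auto& A = node(0);                                   // hypothetical node accessor

   eosio::testing::base_tester::finalizer_policy_input input;
   input.finalizers = { { "nodea"_n, 2 }, { "nodeb"_n, 1 }, { "nodec"_n, 1 }, { "noded"_n, 1 } };
   input.threshold  = 4;                                // strictly more than 2/3 of the total weight (5)
   A.set_finalizers(input);                             // push the new policy from node A

   A.produce_blocks(12);                                // let the new policy become pending, then active
   // ... then verify that lib keeps advancing on all nodes under the new policy.
}
```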