IF: Implement message propagation #1676
Conversation
- propagate proposals if they are added by the node to its proposal store
- non-leaders propagate votes of known proposals that they haven't propagated yet
- added a structure to track seen votes for each proposal, GC'd together with proposals by height (see the sketch below)
- propagate new view messages that carry a High QC newer than what we had locally
- enabled the basic propagation test
- added a star topology propagation test
- added a ring topology propagation test
- documented that new block messages are not propagated
- removed the topology restriction for new block messages in test_pacemaker
- removed the hop count from hotstuff_test_handler::dispatch(); it now dispatches until the network goes quiet

Merging both multi-indexes declared in the qc_chain is possible (it makes proposal-height GC 2x faster), but I'd leave that for later to minimize merge conflicts.
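For reference, a rough sketch of what the seen-votes tracking structure mentioned above could look like. The index tag `by_seen_votes_proposal_height` matches the diff commented on further down; the record fields, the second index, and the voter identity type are assumptions for illustration (the PR notes it still uses eosio-name finalizers), not the actual declaration in qc_chain.

```cpp
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/multi_index/tag.hpp>
#include <fc/crypto/sha256.hpp>
#include <set>
#include <string>

namespace eosio::hotstuff {

   // One record per proposal: which finalizers' votes we have already relayed.
   // Indexed by proposal height so it can be GC'd with the same cutoff as proposals.
   struct seen_votes {
      fc::sha256            proposal_id;  // proposal the votes refer to
      uint64_t              height;       // proposal height, used for the GC cutoff
      std::set<std::string> voters;       // finalizers whose vote we already relayed (type is illustrative)
   };

   struct by_seen_votes_proposal_id;
   struct by_seen_votes_proposal_height;

   using seen_votes_store_type = boost::multi_index_container<
      seen_votes,
      boost::multi_index::indexed_by<
         boost::multi_index::ordered_unique<
            boost::multi_index::tag<by_seen_votes_proposal_id>,
            boost::multi_index::member<seen_votes, fc::sha256, &seen_votes::proposal_id>
         >,
         boost::multi_index::ordered_non_unique<
            boost::multi_index::tag<by_seen_votes_proposal_height>,
            boost::multi_index::member<seen_votes, uint64_t, &seen_votes::height>
         >
      >
   >;

} // namespace eosio::hotstuff
```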
I've run the NP tests here, and before my machine crashed, I got four failing NP tests:
I think my machine crashed while running […]. Inspecting the […]:
In those test directories I can see enormous […].
They are full of entries like this one: […]
With message propagation, I don't think that, in general, we want tests to log each hotstuff message. There are a whole lot of them, depending on what the test topology looks like. This looks like it is exposing a weakness in some of the tests, rather than the tests exposing a vulnerability in the hotstuff implementation and/or this PR. (EDIT:) It could also be a consequence of the hotstuff integration being midway through.
It is a lot of logging, but for now I think it is ok. We probably want to move these messages to their own logger so we can easily toggle them on/off. For now, I think they actually show the issue. Take, for example, the attached snippet of the log of a failing test: it seems to be caught in an infinite loop.
It's the new-view message propagation code. If I comment it out, the four failing NP tests pass. My blind assumption about how […]. Well, if any one of the propagated messages was going to loop, it was this one.
Not sure what's with the LR
I ran the test locally and it passes, actually.
libraries/hotstuff/qc_chain.cpp
```cpp
@@ -952,6 +988,12 @@ namespace eosio::hotstuff {
   void qc_chain::gc_proposals(uint64_t cutoff){
      //fc_tlog(_logger, " === garbage collection on old data");

      auto sv_end_itr = _seen_votes_store.get<by_seen_votes_proposal_height>().upper_bound(cutoff);
      while (_seen_votes_store.get<by_seen_votes_proposal_height>().begin() != sv_end_itr){
```
You can use a range erase instead of the while loop, both for simplicity and to avoid potentially invalidated iterators during the loop, something like:

```cpp
_seen_votes_store.get<by_seen_votes_proposal_height>().erase(sv_begin_itr, sv_end_itr);
```
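For context, a minimal sketch of how that suggestion could look inside `gc_proposals`, assuming `_seen_votes_store` is a Boost.MultiIndex container with an ordered `by_seen_votes_proposal_height` index as the diff above suggests; the rest of the GC logic is elided.

```cpp
void qc_chain::gc_proposals(uint64_t cutoff) {
   // Range-erase every seen-votes record whose proposal height is <= cutoff in
   // a single call, instead of erasing one element per while-loop iteration.
   auto& seen_votes_by_height = _seen_votes_store.get<by_seen_votes_proposal_height>();
   seen_votes_by_height.erase(seen_votes_by_height.begin(),
                              seen_votes_by_height.upper_bound(cutoff));

   // ... existing proposal GC logic continues below ...
}
```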
```cpp
@@ -662,7 +693,12 @@ namespace eosio::hotstuff {
         _b_leaf = _high_qc.get_proposal_id();

         fc_tlog(_logger, " === ${id} _b_leaf updated (update_high_qc) : ${proposal_id}", ("proposal_id", _high_qc.get_proposal_id())("id", _id));
         return true;
```
I feel the return type or the name of `bool qc_chain::update_high_qc` should be changed. Even though you updated `_high_qc`, it returns `false` if `proposal_id()` is empty. It is hard to follow in the caller, `process_new_view`, why `false` indicates the view should be propagated. Or use another dedicated method to decide whether to propagate. This might have prevented the initial infinite-loop bug.
I think the best approach is to return whether it was updated, and to only strictly update when we are going to propagate. For that I need some help formalizing the edge cases where e.g. `proposal_id()` is empty.
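A hedged sketch of the kind of refactor being discussed here; the enum and its names are hypothetical, and the comments only outline the logic as described in this thread, not the actual implementation.

```cpp
// Hypothetical: report explicitly what happened instead of overloading one bool
// with two meanings ("was _high_qc updated?" vs. "should the new view be propagated?").
enum class high_qc_update_result {
   no_change,   // the incoming high QC was not an improvement over _high_qc
   updated      // _high_qc (and _b_leaf) were replaced; worth propagating
};

// The caller then reads naturally, with no inverted boolean to decode:
//
//   if (update_high_qc(msg.high_qc) == high_qc_update_result::updated)
//      propagate_new_view(msg);
//
// Alternatively, keep update_high_qc() as-is and add a dedicated
// should_propagate_new_view(msg) helper, so the propagation decision is not
// entangled with the update's return value.
```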
Remaining issues (as argued at the tail end of #1548) should be addressed in new issues and PRs, such as dealing with connection grief.

NOTE: this is still using eosio-name finalizers, not finalizer keys, but it should not add too much workload for the incoming code to change the added eosio-name code to finalizer-key code. (EDIT: this is solved in the latest merge from `hotstuff_integration` into this branch (Oct. 2nd, 2023).)

Closes #1548.
===============================================
What follows is the argument for why this should close #1548 (multi-hop message propagation):
First, it has to be established that a finalizer key in the finalizer set can, in theory, produce an arbitrary number of messages with varied content, which can bloat any structure created to track unique messages being propagated in the network. This (in theory) produces network spam, but, crucially, it can also crash nodes by draining memory, or cause nodes to drop legitimate messages because they can no longer track duplicates (a bad way of constraining duplicate-message-tracking memory), or cause nodes to circulate massive spam until the finalizer is removed. Any message propagation solution has to address this.
Second, there's the issue of whether to support "any" node with "any" state (i.e. a blank node) as a message relayer or not.
There are only two valid solutions to message propagation.
Solution 1 (previously rejected) involves allocating a fixed amount of memory on each node to track message propagation by hash of raw byte content. This is basically valid (i.e. it works) because, although there are scenarios where a finalizer in the finalizer set could spam the network with authentic messages (i.e. messages that ARE in fact signed by that finalizer) that are otherwise garbage, (EDIT:) the memory consumed by tracking that garbage is bounded, and propagation of honest messages still happens, even though the network could in theory start to suffer from re-propagation of the same messages.
Solution 1 would be more geared towards a scenario where you'd want to support any node as a relayer, irrespective of its state. A Solution 1 approach to message propagation would be one where you'd find a unit test with a star topology in which the node at the center of the star is killed and rebooted midway through the test with an empty qc_chain, and propagation still works. We will argue below that this is NOT a scenario we even want to support, i.e. we want to actively ensure that it does NOT happen.
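For illustration only, a minimal self-contained sketch of the Solution 1 idea: deduplicate relayed messages by a content hash kept in a fixed-size window, independent of protocol state. The class and names are hypothetical. Note how a spammer who floods the window evicts the hashes of honest messages, which is exactly the re-propagation risk described above.

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <string>
#include <unordered_set>

// Fixed-memory "have I relayed this exact payload before?" tracker (Solution 1 style).
class bounded_seen_hashes {
public:
   explicit bounded_seen_hashes(std::size_t capacity) : _capacity(capacity) {}

   // Returns true the first time a payload is seen; once the fixed budget is
   // full, the oldest remembered hash is evicted to make room.
   bool first_time_seen(const std::string& raw_message_bytes) {
      const std::size_t h = std::hash<std::string>{}(raw_message_bytes);
      if (!_seen.insert(h).second)
         return false;                   // duplicate: do not relay again
      _order.push_back(h);
      if (_order.size() > _capacity) {   // bounded memory: evict the oldest entry
         _seen.erase(_order.front());
         _order.pop_front();
      }
      return true;                       // new payload: relay it
   }

private:
   std::size_t                     _capacity;
   std::unordered_set<std::size_t> _seen;
   std::deque<std::size_t>         _order;
};
```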
Solution 2 (in this PR) is the opposite: message propagation logic is tightly bound to the protocol itself, and we only propagate messages that can be proven relevant to advancing the local qc_chain's state. In this scenario, a proposal is only propagated if it is inserted into our proposal store; a vote is only propagated for proposals that we know about AND if we haven't propagated that exact vote before; and a new view is only propagated if the High QC it contained was actually a relevant update to us. These are conservative criteria, and we would only make them more conservative than this if we knew how.
Solution 2 provides the approach that can (eventually) guarantee that the amount of memory spent to track duplicate messages is bounded. The amount of extra memory required to track duplicate proposals and duplicate new view messages is zero. We can easily prove that new view messages propagated only when relevant will eventually converge, and that all nodes will receive the high QC given reliable message delivery at every hop (and even if that fails, it doesn't affect safety). We can also prove that proposal propagation from the rightful leader will reach every node; if the algorithm is correct, then inserting the proposal into one's store works as the criterion for proposal propagation. And finally, votes use as much memory as proposals and are garbage-collected at the same time and under the same criteria, so if the algorithm already bounds memory consumption by the proposal store itself, then the memory consumed by the seen-votes tracking structure is also bounded.
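A self-contained toy sketch of the three Solution 2 relay criteria described above (the real qc_chain uses its proposal store and multi-index containers; the types and member names here are simplified stand-ins, not the PR's code):

```cpp
#include <cstdint>
#include <map>
#include <set>
#include <string>

struct relay_state {
   std::map<std::string, uint64_t>              proposals;   // proposal_id -> height (stand-in for the proposal store)
   std::map<std::string, std::set<std::string>> seen_votes;  // proposal_id -> finalizers whose vote we already relayed
   uint64_t                                     high_qc_height = 0;

   // Relay a proposal only if it was new to us and we actually stored it.
   bool should_relay_proposal(const std::string& id, uint64_t height) {
      return proposals.emplace(id, height).second;
   }

   // Relay a vote only for a proposal we know about, and only once per finalizer.
   bool should_relay_vote(const std::string& proposal_id, const std::string& finalizer) {
      if (proposals.find(proposal_id) == proposals.end())
         return false;                                        // unknown proposal: drop, don't relay
      return seen_votes[proposal_id].insert(finalizer).second;
   }

   // Relay a new view only if its high QC is actually newer than what we had.
   bool should_relay_new_view(uint64_t msg_high_qc_height) {
      if (msg_high_qc_height <= high_qc_height)
         return false;
      high_qc_height = msg_high_qc_height;
      return true;
   }
};
```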
We can also make the case that we do not ever want "dumb relayers." The typical use case for Antelope is a structured overlay, `finalizer(A) <-> relay(A) <-> relay(B) <-> finalizer(B)`, which means relays are part of BP/finalizer infrastructure. Just as a finalizer needs to be in sync to be able to finalize, its associated relay(s) also need(s) to be in sync. A finalizer that is not yet in sync is valid; what that means is that there's one less finalizer voting for a few minutes, but as long as there are not `f` finalizers syncing in that manner (or `g+h` failing finalizers, where `g+h >= f`, `g` are finalizers syncing and thus temporarily "failing", and `h` are actually perma-failed finalizers), then liveness is not affected. So, the finalizer machine and the relay machine are both conceptually part of the finalizer node, and it is a deployment requirement that both be in sync for the "finalizer" entity as a whole to work and not be actively counting towards `f`.

Finally, this patch does not address connection grief, such as receiving a stream of dummy messages or invalid signatures from a peer. It is possible that we do not even want to address these cases, but this is to be determined later. It is possible that we don't even want to track simply discarded (irrelevant) messages because (a) they are a natural outcome of the protocol and (b) detecting "a million irrelevant messages per second" is an attack you can detect even at the network monitoring / firewall level, and perhaps it shouldn't even be the responsibility of `nodeos`. As for invalid signatures, a case can be made for logging those, and if there are too many of them (e.g. 100 over the lifetime), adding some code to disconnect the peer in those cases might make sense and pay off, even if the attack vector is pretty esoteric. In any case, message propagation would not happen, and a node would never be leveraged to relay spam (especially under Solution 2).

EDIT: Non-leaders propagating votes only for known proposals can be justified by the protocol being a complete flood-fill of the network. This is a gossip protocol with a fanout probability of 100%, which works given an arbitrary topology, and is particularly fine in the typical (expected) topology where there's a ring of BP relayers at the middle that are fully connected to each other, with one connection between each relayer and its backing finalizer (maybe a few more connections inside the BP infrastructure, but no crazy fanouts). Given that a 100% gossip protocol minimizes latency by definition, it can be assumed that when a proposal is sent by the leader-finalizer, it will reach every voter-finalizer with minimal latency. Since a vote has to exit the finalizer through its relay to reach the other finalizers' relays, it can be argued that, at least in the expected (regular) topology, it is extremely likely that a vote by a finalizer will only reach the relays of other finalizers, and the other finalizers themselves, after they have seen the proposal. This is given by the expected high connectivity between all relays at the core infrastructure, which minimizes the latency of a proposal reaching every relay irrespective of the latency between the relay of the finalizer sending the proposal and any other relay, as the proposal will find its way via the broadcast done by every other relay. And dropping votes (which should not usually happen) because you haven't seen the proposal yet is equivalent to dropping messages in transit, which should not affect the safety of any correct SMR protocol. Finally, it should not affect liveness either, as you still need to fail `f` finalizers to begin to cause any trouble.
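To make the `g+h` arithmetic concrete (a hedged example; `f` here is whatever fault threshold the finality protocol tolerates, as in the paragraph above): with `f = 3`, one finalizer re-syncing (`g = 1`) and one permanently failed (`h = 1`) give `g + h = 2 < f`, so liveness is unaffected; only when a third finalizer goes quiet at the same time (`g + h = 3 >= f`) does the network begin to risk stalling, per the condition stated above.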