# [5.0] Replaced cpu-effort-percent with produce-block-offset-ms (#1800)
@@ -4,87 +4,61 @@ content_title: Block Production Explained
For simplicity of the explanation, let's use the following notation:
* `r` = `producer_repetitions = 12` (hard-coded value)
* `m` = `max_block_cpu_usage` (on-chain consensus value)
* `u` = `max_block_net_usage` (on-chain consensus value)
* `t` = `block-time`
* `e` = `produce-block-offset-ms` (nodeos configuration)
* `w` = `block-time-interval = 500ms` (hard-coded value)
* `a` = `produce-block-early-amount = e / r` ms (how much earlier each block of a round is released)
* `l` = `produce-block-time = t - a`
* `p` = `produce block time window = w - a` (amount of wall-clock time available to produce a block)
* `c` = `billed_cpu_in_block = minimum(m, w - a)`
* `n` = `network tcp/ip latency`
* `h` = `block header validation time` in ms
Peer validation for similar hardware/version/config will take `<= m`.
**Let's consider the example of the following two BPs and their network topology, as depicted in the diagram below:**
```
   +------+     +------+       +------+     +------+
-->| BP-A |---->| BP-A |------>| BP-B |---->| BP-B |
   +------+     | Peer |       | Peer |     +------+
                +------+       +------+
```
`BP-A` will send the block at `l`, and `BP-B` needs the block by time `t`, otherwise it will drop it.
If `BP-A` is producing 12 blocks, as follows: `b(lock) at t(ime) 1`, `bt 1.5`, `bt 2`, `bt 2.5`, `bt 3`, `bt 3.5`, `bt 4`, `bt 4.5`, `bt 5`, `bt 5.5`, `bt 6`, `bt 6.5`, then `BP-B` needs `bt 6.5` by time `6.5` so that it has `.5` to produce `bt 7`.
Notice that the time of `bt 7` minus `.5` equals the time of `bt 6.5`; therefore, time `t` is the last block time of `BP-A` and the time when `BP-B` needs to start its first block.
A block is produced and sent when it reaches the CPU limit (`m`), the net usage limit, or the end of its production window (`p`), whichever comes first.
Starting in Leap 4.0, blocks are propagated after block header validation. Instead of `BP-A Peer` and `BP-B Peer` each taking up to `m` time to validate and forward a block, it takes only a few milliseconds to verify the block header before forwarding the block.
Starting in Leap 5.0, blocks in a round are started immediately after completion of the previous block. Before 5.0, blocks always started on `w` intervals, and a node would "sleep" between blocks if needed. In 5.0, those "sleeps" are all moved to the end of the block production round.
## Example 1: block arrives 110ms early (zero network latency between BP and immediate peer)
* `BP-A` has `e = 120`, `n = 0ms`, `h = 5ms`, `a = 10ms`
* `BP-A` sends b1 at `t1-10ms` => `BP-A-Peer` processes `h=5ms`, sends at `t1-5ms` => `BP-B-Peer` processes `h=5ms`, sends at `t1` => arrives at `BP-B` at `t1`.
* `BP-A` starts b2 at `t1-10ms`, sends b2 at `t2-20ms` => `BP-A-Peer` processes `h=5ms`, sends at `t2-15ms` => `BP-B-Peer` processes `h=5ms`, sends at `t2-10ms` => arrives at `BP-B` at `t2-10ms`.
* `BP-A` starts b3 at `t2-20ms`, ...
* `BP-A` starts b12 at `t11-110ms`, sends b12 at `t12-120ms` => `BP-A-Peer` processes `h=5ms`, sends at `t12-115ms` => `BP-B-Peer` processes `h=5ms`, sends at `t12-110ms` => arrives at `BP-B` at `t12-110ms`.
## Example 2: block arrives 80ms early (zero network latency between BP and immediate peer)
* `BP-A` has `e = 240`, `n = 150ms`, `h = 5ms`, `a = 20ms`
* `BP-A` sends b1 at `t1-20ms` => `BP-A-Peer` processes `h=5ms`, sends at `t1-15ms` =(150ms)> `BP-B-Peer` processes `h=5ms`, sends at `t1+140ms` => arrives at `BP-B` at `t1+140ms`.
* `BP-A` starts b2 at `t1-20ms`, sends b2 at `t2-40ms` => `BP-A-Peer` processes `h=5ms`, sends at `t2-35ms` =(150ms)> `BP-B-Peer` processes `h=5ms`, sends at `t2+120ms` => arrives at `BP-B` at `t2+120ms`.
* `BP-A` starts b3 at `t2-40ms`, ...
* `BP-A` starts b12 at `t11-220ms`, sends b12 at `t12-240ms` => `BP-A-Peer` processes `h=5ms`, sends at `t12-235ms` =(150ms)> `BP-B-Peer` processes `h=5ms`, sends at `t12-80ms` => arrives at `BP-B` at `t12-80ms`.
## Example 3: block arrives 16ms late and is dropped (zero network latency between BP and immediate peer)
* `BP-A` has `e = 204`, `n = 200ms`, `h = 10ms`, `a = 17ms`
* `BP-A` sends b1 at `t1-17ms` => `BP-A-Peer` processes `h=10ms`, sends at `t1-7ms` =(200ms)> `BP-B-Peer` processes `h=10ms`, sends at `t1+203ms` => arrives at `BP-B` at `t1+203ms`.
* `BP-A` starts b2 at `t1-17ms`, sends b2 at `t2-34ms` => `BP-A-Peer` processes `h=10ms`, sends at `t2-24ms` =(200ms)> `BP-B-Peer` processes `h=10ms`, sends at `t2+186ms` => arrives at `BP-B` at `t2+186ms`.
* `BP-A` starts b3 at `t2-34ms`, ...
* `BP-A` starts b12 at `t11-187ms`, sends b12 at `t12-204ms` => `BP-A-Peer` processes `h=10ms`, sends at `t12-194ms` =(200ms)> `BP-B-Peer` processes `h=10ms`, sends at `t12+16ms` => arrives at `BP-B` at `t12+16ms`, 16ms too late, so it is dropped.
Running `wasm-runtime=eos-vm-jit` with `eos-vm-oc-enable` on a relay node will reduce the validation time.
```diff
@@ -17,7 +17,7 @@ class producer_plugin : public appbase::plugin<producer_plugin> {
    struct runtime_options {
       std::optional<int32_t>  max_transaction_time;
       std::optional<int32_t>  max_irreversible_block_age;
-      std::optional<int32_t>  cpu_effort_us;
+      std::optional<int32_t>  produce_block_offset_ms;
       std::optional<int32_t>  subjective_cpu_leeway_us;
       std::optional<uint32_t> greylist_limit;
    };

@@ -196,7 +196,7 @@ class producer_plugin : public appbase::plugin<producer_plugin> {
 } //eosio

-FC_REFLECT(eosio::producer_plugin::runtime_options, (max_transaction_time)(max_irreversible_block_age)(cpu_effort_us)(subjective_cpu_leeway_us)(greylist_limit));
+FC_REFLECT(eosio::producer_plugin::runtime_options, (max_transaction_time)(max_irreversible_block_age)(produce_block_offset_ms)(subjective_cpu_leeway_us)(greylist_limit));
 FC_REFLECT(eosio::producer_plugin::greylist_params, (accounts));
 FC_REFLECT(eosio::producer_plugin::whitelist_blacklist, (actor_whitelist)(actor_blacklist)(contract_whitelist)(contract_blacklist)(action_blacklist)(key_blacklist) )
 FC_REFLECT(eosio::producer_plugin::integrity_hash_information, (head_block_id)(integrity_hash))
```