This document is the specification for the sub-protocol that supports on-demand availability of Ethereum execution chain history data.
The chain history network is a Kademlia DHT that uses the Portal Wire Protocol to establish an overlay network on top of the Discovery v5 protocol.
Execution chain history data consists of historical block headers, block bodies (transactions, ommers and withdrawals) and block receipts.
In addition, the chain history network provides block number to historical block header lookups.
The network stores:

- Block headers
- Block bodies
    - Transactions
    - Ommers
    - Withdrawals
- Receipts
The network supports the following mechanisms for data retrieval:
- Block header by block header hash
- Block header by block number
- Block body by block header hash
- Block receipts by block header hash
This sub-protocol does not support retrieval of transactions by hash, only the full set of transactions for a given block. See the "Canonical Transaction Index" sub-protocol for more information on how the Portal Network implements lookup of transactions by their individual hashes.
The history network uses the stock XOR distance metric defined in the portal wire protocol specification.
The history network uses the SHA256 Content ID derivation function from the portal wire protocol specification.
The Portal wire protocol is used as the wire protocol for the history network. As specified in the Protocol identifiers section of the Portal wire protocol, the `protocol` field in the `TALKREQ` message MUST contain the value of `0x500B`.
The history network supports the following protocol messages:
- `Ping` - `Pong`
- `Find Nodes` - `Nodes`
- `Find Content` - `Found Content`
- `Offer` - `Accept`
In the history network, the `custom_payload` field of the `Ping` and `Pong` messages is the serialization of an SSZ Container specified as `custom_data`:
custom_data = Container(data_radius: uint256)
custom_payload = SSZ.serialize(custom_data)
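For illustration, a minimal sketch of this payload in Python, relying on SSZ's little-endian `uint256` encoding (a Container with a single fixed-size field serializes to just that field):

```python
def encode_custom_payload(data_radius: int) -> bytes:
    # SSZ serializes a uint256 as 32 bytes, little-endian; the single-field
    # Container adds no framing, so the payload is exactly those 32 bytes.
    return data_radius.to_bytes(32, "little")
```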
The history network uses the standard routing table structure from the Portal Wire Protocol.
The history network includes one additional piece of node state that should be tracked. Nodes must track the `data_radius` from the `Ping` and `Pong` messages for other nodes in the network. This value is a 256-bit integer and represents the data that a node is "interested" in. We define the following function to determine whether a node in the network should be interested in a piece of content.
interested(node, content) = distance(node.id, content.id) <= node.radius
A node is expected to maintain `radius` information for each node in its local node table. A node's `radius` value may fluctuate as the contents of its local key-value store change. A node should track its own radius value and provide it in all `Ping` and `Pong` messages it sends to other nodes.
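A minimal sketch of this bookkeeping, assuming node ids and content ids are 256-bit integers and the XOR distance metric referenced above:

```python
def distance(a: int, b: int) -> int:
    # XOR distance metric from the Portal wire protocol
    return a ^ b

def interested(node_id: int, node_radius: int, content_id: int) -> bool:
    # A node is interested in content whose id falls within its radius.
    return distance(node_id, content_id) <= node_radius
```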
We define the following constants which are used in the various data type definitions.
MAX_TRANSACTION_LENGTH = 2**24 # ~= 16 million
# Maximum transaction body length is achieved by filling calldata with 0's
# until the block limit of (currently) 30M gas is reached.
# At a gas cost of 4 per 0-byte, that produces a 7.5MB transaction. We roughly
# double that size to a maximum of >16 million for some headroom. Note that
# EIP-4488 would put a roughly 1MB limit on transaction length, effectively. So
# increases are not planned (instead, the opposite).
MAX_TRANSACTION_COUNT = 2**14 # ~= 16k
# 2**14 simple transactions would use up >340 million gas at 21k gas each.
# Current gas limit tops out at 30 million gas.
MAX_RECEIPT_LENGTH = 2**27 # ~= 134 million
# Maximum receipt length is logging a bunch of data out, currently at a cost of
# 8 gas per byte. Since that is double the cost of 0 calldata bytes, the
# maximum size is roughly half that of the transaction: 3.75 million bytes.
# But there is more reason for protocol devs to constrain the transaction length,
# and it's not clear what the practical limits for receipts are, so we should add more buffer room.
# Imagine the cost drops by 2x and the block gas limit goes up by 8x. So we add 2**4 = 16x buffer.
MAX_HEADER_LENGTH = 2**11 # = 2048
# Maximum header length is fairly stable at about 500 bytes. It might change
# at the merge, and beyond. Since the length is relatively small and the
# future of the format is unclear, we leave more room for expansion and set
# the max at about 2 kilobytes.
MAX_ENCODED_UNCLES_LENGTH = MAX_HEADER_LENGTH * 2**4 # = 2**15 ~= 32k
# Maximum number of uncles is currently 2. Using 16 leaves some room for the
# protocol to increase the number of uncles.
MAX_WITHDRAWAL_COUNT = 16
# Number sourced from consensus specs
# https://github.com/ethereum/consensus-specs/blob/f7352d18cfb91c58b1addb4ea509aedd6e32165c/presets/mainnet/capella.yaml#L12
# MAX_WITHDRAWAL_COUNT = MAX_WITHDRAWALS_PER_PAYLOAD
WITHDRAWAL_LENGTH = 64
# Withdrawal: index (u64), validator_index (u64), address, amount (u64)
# - 8 + 8 + 20 + 8 = 44 bytes
# - allow extra space for rlp encoding overhead
SHANGHAI_TIMESTAMP = 1681338455
# Number sourced from EIP-4895
The encoding choices generally favor easy verification of the data, minimizing decoding. For example:
- `keccak(encoded_uncles) == header.uncles_hash`
- Each `encoded_transaction` can be inserted into a trie to compare to the `header.transactions_root`
- Each `encoded_receipt` can be inserted into a trie to compare to the `header.receipts_root`
Combining the whole block body into a single RLP blob, in contrast, would require a validator to loop through each receipt/transaction and re-RLP-encode it (but only if it is a legacy transaction).
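As a sketch of the verification this encoding enables (the `eth_utils` keccak helper and py-trie's `HexaryTrie` are assumptions here; any keccak-256 and Merkle-Patricia trie implementation works):

```python
import rlp
from eth_utils import keccak
from trie import HexaryTrie

def verify_body_against_header(header, encoded_uncles: bytes, encoded_transactions: list) -> bool:
    # Uncles: one keccak over the RLP blob, no decoding needed.
    if keccak(encoded_uncles) != header.uncles_hash:
        return False
    # Transactions: insert each encoded transaction into a trie keyed by its
    # RLP-encoded index and compare the resulting root.
    transaction_trie = HexaryTrie(db={})
    for index, encoded_transaction in enumerate(encoded_transactions):
        transaction_trie[rlp.encode(index)] = encoded_transaction
    return transaction_trie.root_hash == header.transactions_root
```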
# Content types
HistoricalHashesAccumulatorProof = Vector[Bytes32, 15]
BlockHeaderProof = Union[None, HistoricalHashesAccumulatorProof]
BlockHeaderWithProof = Container(
    header: ByteList[MAX_HEADER_LENGTH], # RLP encoded header in SSZ ByteList
    proof: BlockHeaderProof
)
Note: The `BlockHeaderProof` allows providing headers without a proof (`None`). For pre-merge headers, clients SHOULD NOT accept headers without a proof, as the `HistoricalHashesAccumulatorProof` solution is available. For post-merge headers, there is currently no proof solution and clients MAY accept headers without a proof.
# Content and content key (block header by hash)
block_header_key = Container(block_hash: Bytes32)
selector = 0x00
block_header_with_proof = BlockHeaderWithProof(header: rlp.encode(header), proof: proof)
content = SSZ.serialize(block_header_with_proof)
content_key = selector + SSZ.serialize(block_header_key)
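A minimal sketch of building this key and the derived content id: the SSZ serialization of a `Container` with a single `Bytes32` field is just those 32 bytes, and the content id is the SHA256 of the content key:

```python
from hashlib import sha256

def header_by_hash_content_key(block_hash: bytes) -> bytes:
    assert len(block_hash) == 32
    return b"\x00" + block_hash  # selector 0x00

def content_id(content_key: bytes) -> bytes:
    # SHA256 Content ID derivation from the Portal wire protocol
    return sha256(content_key).digest()
```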
# Content and content key (block header by number)
block_number_key = Container(block_number: uint64)
selector = 0x03
block_header_with_proof = BlockHeaderWithProof(header: rlp.encode(header), proof: proof)
content = SSZ.serialize(block_header_with_proof)
content_key = selector + SSZ.serialize(block_number_key)
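The same sketch for lookups by number, where an SSZ `uint64` serializes to 8 little-endian bytes:

```python
def header_by_number_content_key(block_number: int) -> bytes:
    return b"\x03" + block_number.to_bytes(8, "little")  # selector 0x03
```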
After the addition of withdrawals to the block body in EIP-4895, clients need to support multiple encodings for the block body content type. For the time being, since a header is required for block body validation, it is recommended that clients implement the following sequence to decode & validate block bodies (a code sketch follows the list).
- Receive raw block body content value.
- Fetch the respective header from the network.
- Compare the header timestamp against `SHANGHAI_TIMESTAMP` to determine which encoding scheme the block body uses.
- Decode the block body using either the pre-Shanghai or post-Shanghai encoding.
- Validate the decoded block body against the roots in the header.
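A sketch of this sequence, where `fetch_header`, `decode_pre_shanghai_body`, `decode_post_shanghai_body` and `validate_against_header` are hypothetical client-side helpers for the steps the spec leaves to implementations:

```python
def decode_and_validate_block_body(raw_body: bytes, block_hash: bytes):
    header = fetch_header(block_hash)  # hypothetical network lookup (step 2)
    # Step 3: the header timestamp selects the encoding scheme.
    if header.timestamp >= SHANGHAI_TIMESTAMP:
        body = decode_post_shanghai_body(raw_body)  # step 4, post-Shanghai container
    else:
        body = decode_pre_shanghai_body(raw_body)   # step 4, pre-Shanghai container
    validate_against_header(body, header)           # step 5: check roots in the header
    return body
```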
block_body_key = Container(block_hash: Bytes32)
selector = 0x01
# Transactions
transactions = List(ssz_transaction, limit=MAX_TRANSACTION_COUNT)
ssz_transaction = ByteList[MAX_TRANSACTION_LENGTH](encoded_transaction)
encoded_transaction =
    if transaction.is_typed:
        return transaction.type_byte + rlp.encode(transaction)
    else:
        return rlp.encode(transaction)
# Uncles
uncles = ByteList[MAX_ENCODED_UNCLES_LENGTH](encoded_uncles)
encoded_uncles = rlp.encode(list_of_uncle_headers)
# Withdrawals
withdrawals = List(ssz_withdrawal, limit=MAX_WITHDRAWAL_COUNT)
ssz_withdrawal = ByteList[WITHDRAWAL_LENGTH](encoded_withdrawal)
encoded_withdrawal = rlp.encode(withdrawal)
# Block body
pre_shanghai_body = Container(
    transactions: transactions,
    uncles: uncles
)
post_shanghai_body = Container(
    transactions: transactions,
    uncles: uncles,
    withdrawals: withdrawals
)
# Encoded content
content = SSZ.serialize(pre_shanghai_body | post_shanghai_body)
content_key = selector + SSZ.serialize(block_body_key)
Note 1: The type-specific transactions encoding might be different for future transaction types, but this content encoding is agnostic to the underlying transaction encodings.
Note 2: The `list_of_uncle_headers` refers to the array of uncle headers defined in the devp2p spec.
receipt_key = Container(block_hash: Bytes32)
selector = 0x02
receipts = List(ssz_receipt, limit=MAX_TRANSACTION_COUNT)
ssz_receipt = ByteList[MAX_RECEIPT_LENGTH](encoded_receipt)
encoded_receipt =
    if receipt.is_typed:
        return receipt.type_byte + rlp.encode(receipt)
    else:
        return rlp.encode(receipt)
content = SSZ.serialize(receipts)
content_key = selector + SSZ.serialize(receipt_key)
Note: The type-specific receipts encoding might be different for future receipt types, but this content encoding is agnostic to the underlying receipt encodings.
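Both the block body and receipts keys follow the same pattern as the header-by-hash key, differing only in the selector byte; a one-line sketch of each:

```python
def body_content_key(block_hash: bytes) -> bytes:
    return b"\x01" + block_hash  # selector 0x01

def receipts_content_key(block_hash: bytes) -> bytes:
    return b"\x02" + block_hash  # selector 0x02
```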
The "Historical Hashes Accumulator" is based on the double-batched merkle log accumulator that is currently used in the beacon chain. This data structure is designed to allow nodes in the network to "forget" the deeper history of the chain, while still being able to reliably receive historical headers with a proof that the received header is indeed from the canonical chain (as opposed to an uncle mined at the same block height). This data structure is only used for pre-merge blocks.
The accumulator is defined as an SSZ data structure with the following schema:
EPOCH_SIZE = 8192 # blocks
MAX_HISTORICAL_EPOCHS = 2048
# An individual record for a historical header.
HeaderRecord = Container[block_hash: Bytes32, total_difficulty: uint256]
# The records of the headers from within a single epoch
EpochRecord = List[HeaderRecord, limit=EPOCH_SIZE]
HistoricalHashesAccumulator = Container[
    historical_epochs: List[Bytes32, limit=MAX_HISTORICAL_EPOCHS],
    current_epoch: EpochRecord,
]
The algorithm for building the accumulator is as follows.
def update_accumulator(accumulator: HistoricalHashesAccumulator, new_block_header: BlockHeader) -> None:
    # get the previous total difficulty
    if len(accumulator.current_epoch) == 0:
        # genesis
        last_total_difficulty = 0
    else:
        last_total_difficulty = accumulator.current_epoch[-1].total_difficulty

    # check if the epoch accumulator is full.
    if len(accumulator.current_epoch) == EPOCH_SIZE:
        # compute the final hash for this epoch
        epoch_hash = hash_tree_root(accumulator.current_epoch)
        # append the hash for this epoch to the list of historical epochs
        accumulator.historical_epochs.append(epoch_hash)
        # initialize a new empty epoch
        accumulator.current_epoch = []

    # construct the concise record for the new header and add it to the current epoch.
    header_record = HeaderRecord(new_block_header.hash, last_total_difficulty + new_block_header.difficulty)
    accumulator.current_epoch.append(header_record)
The `HistoricalHashesAccumulator` is fully built and frozen when the last block before the Merge (Paris fork) is added and the `hash_tree_root` of the last, incomplete `EpochRecord` is added to the `historical_epochs`.
The network provides no mechanism for acquiring the fully built `HistoricalHashesAccumulator`. Clients are encouraged to solve this however they choose, with the suggestion that they include a frozen copy of the accumulator at the point of the Merge within their client code, and provide a mechanism for users to override this value if they so choose. The `hash_tree_root` of the `HistoricalHashesAccumulator` is defined in EIP-7643.
The `HistoricalHashesAccumulatorProof` is a Merkle proof as specified in the SSZ Merkle proofs specification. It is a Merkle proof for the `BlockHeader`'s block hash on the relevant `EpochRecord` object. The selected `EpochRecord` must be the one which contains the `BlockHeader`'s block hash. The selected `GeneralizedIndex` must match the leaf of the `EpochRecord` Merkle tree which holds the `BlockHeader`'s block hash.
A `HistoricalHashesAccumulatorProof` for a specific `BlockHeader` can be used to verify that this `BlockHeader` is part of the canonical chain. This is done by verifying the Merkle proof with the `BlockHeader`'s block hash as leaf and the `EpochRecord` digest as root. This digest is available in the `HistoricalHashesAccumulator`.
As the `HistoricalHashesAccumulator` only accounts for pre-merge blocks, this proof can only be used to verify pre-merge blocks.
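A minimal verification sketch, assuming the standard SSZ merkleization layout of `EpochRecord` (13 levels of `HeaderRecord` roots, one level from a `HeaderRecord` down to its `block_hash` leaf, and the list-length mix-in at the top, which together account for the 15 proof elements):

```python
from hashlib import sha256

EPOCH_SIZE = 8192

def verify_header_proof(block_hash: bytes, proof: list, epoch_record_root: bytes, block_number: int) -> bool:
    """Check a HistoricalHashesAccumulatorProof against an EpochRecord digest."""
    assert len(proof) == 15
    # Leaf index of block_hash in the depth-15 tree under the EpochRecord root:
    # header i occupies leaves 2i (block_hash) and 2i+1 (total_difficulty).
    index = 2 * (block_number % EPOCH_SIZE)
    node = block_hash
    for sibling in proof:
        # Hash up one level, ordering by the current index bit.
        if index & 1:
            node = sha256(sibling + node).digest()
        else:
            node = sha256(node + sibling).digest()
        index >>= 1
    return node == epoch_record_root
```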