
perf(trie): parallel rlp node updates in sparse trie #13251

Draft
shekhirin wants to merge 7 commits into main from alexey/sparse-trie-parallel
Conversation

@shekhirin (Collaborator) commented Dec 9, 2024

Makes RevealedSparseTrie::rlp_node take a non-mutable reference to self and return the list of updates that need to be applied to the trie nodes. This allows us to run multiple rlp_node calls in parallel in RevealedSparseTrie::update_rlp_node_level, because we gather the nodes at the provided level and calculate their hashes independently of each other.
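
A minimal sketch of this "compute updates in parallel, apply them sequentially" pattern, assuming simplified stand-in types (`SparseTrie`, `Path`, `RlpNodeUpdate`) and `rayon` for the parallel map; this is not reth's actual API or hashing logic:

```rust
// Sketch only: `SparseTrie`, `Path`, and `RlpNodeUpdate` are simplified
// stand-ins, and the "hash" below is a placeholder rather than a real
// RLP encoding + keccak.
use std::collections::HashMap;

use rayon::prelude::*;

/// Stand-in for a nibble path identifying a trie node.
type Path = Vec<u8>;

/// A buffered change produced by hashing one node, applied to the trie later.
struct RlpNodeUpdate {
    path: Path,
    hash: u64,
}

#[derive(Default)]
struct SparseTrie {
    /// Node payloads keyed by path (stand-in for the real node storage).
    nodes: HashMap<Path, Vec<u8>>,
    /// Cached per-node hashes, filled in when updates are applied.
    hashes: HashMap<Path, u64>,
}

impl SparseTrie {
    /// Hashes a single node. Because this only needs `&self`, many calls can
    /// run concurrently; the result is returned as an update instead of being
    /// written into the trie directly.
    fn rlp_node(&self, path: &Path) -> RlpNodeUpdate {
        let hash = self.nodes[path].iter().map(|&b| b as u64).sum();
        RlpNodeUpdate { path: path.clone(), hash }
    }

    /// Gathers all nodes at `level`, hashes them in parallel, then applies the
    /// buffered updates in a single mutable pass.
    fn update_rlp_node_level(&mut self, level: usize) {
        let paths: Vec<Path> = self
            .nodes
            .keys()
            .filter(|p| p.len() == level)
            .cloned()
            .collect();

        // Nodes at the same level are independent of each other, so the
        // shared-reference `rlp_node` calls can be parallelized safely.
        let this = &*self;
        let updates: Vec<RlpNodeUpdate> =
            paths.par_iter().map(|path| this.rlp_node(path)).collect();

        // Apply the gathered updates now that `&mut self` is available again.
        for update in updates {
            self.hashes.insert(update.path, update.hash);
        }
    }
}

fn main() {
    let mut trie = SparseTrie::default();
    trie.nodes.insert(vec![0x1, 0x2], vec![1, 2, 3]);
    trie.nodes.insert(vec![0x3, 0x4], vec![4, 5, 6]);
    trie.update_rlp_node_level(2);
    assert_eq!(trie.hashes.len(), 2);
}
```

Buffering the updates costs an extra allocation per level, but it removes the need for `&mut self` during hashing, which is what makes the per-level parallel map possible.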

@shekhirin shekhirin added C-perf A change motivated by improving speed, memory usage or disk footprint A-trie Related to Merkle Patricia Trie implementation labels Dec 13, 2024
@shekhirin shekhirin force-pushed the alexey/sparse-trie-parallel branch from 75d8c87 to a50b561 on December 15, 2024 13:04
@shekhirin shekhirin marked this pull request as ready for review December 15, 2024 13:04
@shekhirin shekhirin marked this pull request as draft December 16, 2024 07:05