Using warp sync on parachain triggers OOM #5053
Comments
Hey, the problem is that right now we keep the entire state in memory while downloading. For chains that have a big state (I'm not sure what the state size of Astar is), this can lead to OOM on machines without enough main memory.
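For illustration, here is a minimal sketch (not the actual polkadot-sdk sync code; the type and method names are made up) of why buffering the whole downloaded state in memory makes peak memory usage grow with total state size:

use std::collections::HashMap;

/// Hypothetical stand-in for the state download logic described above.
struct StateDownload {
    /// Every downloaded key/value pair is kept here until the download completes.
    buffered: HashMap<Vec<u8>, Vec<u8>>,
}

impl StateDownload {
    fn new() -> Self {
        Self { buffered: HashMap::new() }
    }

    /// Each network response is appended to the in-memory buffer instead of
    /// being flushed to the database incrementally.
    fn on_response(&mut self, entries: Vec<(Vec<u8>, Vec<u8>)>) {
        self.buffered.extend(entries);
    }

    /// Only once the whole state has arrived is it handed to the import
    /// pipeline, which then also materialises the full trie in memory.
    fn into_complete_state(self) -> HashMap<Vec<u8>, Vec<u8>> {
        self.buffered
    }
}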
Hmm, yeah. I still think it is related to having everything in memory. Did you try running the node with gdb to get a stack trace when it OOMs?
We haven't, but I'll ask our devops team to do that and I'll post the traces here.
Ty.
Does this help? We used this command to generate it:
@Dinonard but this is not from the point when it OOMs. You need to run the node with gdb attached the entire time.
TBH I checked the logs myself and couldn't see anything wrong, but AFAIK this was run immediately after the node started. Let me get back to you.
I've run it on the same server as before, with gdb now properly wrapping the entire service. Sorry about the missing symbols, but I haven't used GDB with a Rust app before.
Just to share my experience with state sync on a chain that has a huge state: the subcoin node crashed the first time while importing the downloaded state at a certain height. I then reran it with the same command, syncing the state at the same height, with gdb attached. Unfortunately (or fortunately :P), it didn't crash with gdb. I observed that this successful state import consumed almost my entire memory (my machine has 128 GiB) at its peak.
In light of recent developments, it has become evident that fully syncing to the tip of the Bitcoin network and enabling new nodes to fast sync to the latest Bitcoin state is more challenging than initially anticipated, due to the huge state of the UTXO set (over 12 GiB). As a result, I propose adjusting the delivery goal for this milestone. The most significant known blocker is paritytech/polkadot-sdk#4; other underlying issues may also contribute to the difficulty. Recent experiments have shown that fast sync from around block height 580,000 is currently infeasible, succeeding only on machines with 128 GiB of memory (paritytech/polkadot-sdk#5053 (comment)), which is impractical for most users. Nevertheless, we have successfully demonstrated that decentralized fast sync is possible in a prototype implementation. While syncing to the Bitcoin network's tip remains a future target, addressing the existing technical challenges will require substantial R&D effort. We remain committed to exploring potential solutions, including architectural changes and contributing to resolving paritytech/polkadot-sdk#4.
diff --git a/substrate/primitives/trie/src/lib.rs b/substrate/primitives/trie/src/lib.rs
index ef6b6a5743..e0a2cf3b30 100644
--- a/substrate/primitives/trie/src/lib.rs
+++ b/substrate/primitives/trie/src/lib.rs
@@ -296,23 +296,30 @@ where
V: Borrow<[u8]>,
DB: hash_db::HashDB<L::Hash, trie_db::DBValue>,
{
- {
+ // {
let mut trie = TrieDBMutBuilder::<L>::from_existing(db, &mut root)
.with_optional_cache(cache)
.with_optional_recorder(recorder)
.build();
+ tracing::info!("====================== Collecting delta");
let mut delta = delta.into_iter().collect::<Vec<_>>();
+ tracing::info!("====================== Finished Collecting delta: {}", delta.len());
delta.sort_by(|l, r| l.0.borrow().cmp(r.0.borrow()));
+ tracing::info!("====================== Sorted delta");
- for (key, change) in delta {
+ tracing::info!("====================== Starting to write trie, mem usage: {:.2?}GiB", memory_stats::memory_stats().map(|usage| usage.physical_mem as f64 / 1024.0 / 1024.0 / 1024.0));
+ for (index, (key, change)) in delta.into_iter().enumerate() {
match change.borrow() {
Some(val) => trie.insert(key.borrow(), val.borrow())?,
None => trie.remove(key.borrow())?,
};
}
- }
+ tracing::info!("====================== Finished writing delta to trie, mem usage: {:.2?}GiB", memory_stats::memory_stats().map(|usage| usage.physical_mem as f64 / 1024.0 / 1024.0 / 1024.0));
+ drop(trie);
+ // }
+ tracing::info!("====================== End of delta_trie_root, mem usage: {:.2?}GiB", memory_stats::memory_stats().map(|usage| usage.physical_mem as f64 / 1024.0 / 1024.0 / 1024.0));
Ok(root)
}
I added some logging for memory usage in the block import pipeline. It turned out that https://github.com/subcoin-project/polkadot-sdk/blob/13ca1b64692b05b699f49e729d0522ed4be730b9/substrate/primitives/trie/src/lib.rs#L285 is the culprit: memory usage surged from 26 GiB to 76 GiB after the trie was built. Importing the same state does not always succeed; it can still crash due to OOM.
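For reference, here is a standalone version of the memory probe used in the patch above. It relies on the memory_stats crate (assuming something like memory_stats = "1" in Cargo.toml), the same call the added tracing lines use:

/// Current physical memory usage of the process in GiB, if available on this platform.
fn current_physical_mem_gib() -> Option<f64> {
    memory_stats::memory_stats()
        .map(|usage| usage.physical_mem as f64 / 1024.0 / 1024.0 / 1024.0)
}

fn main() {
    match current_physical_mem_gib() {
        Some(gib) => println!("physical memory in use: {gib:.2} GiB"),
        None => println!("memory stats are not available on this platform"),
    }
}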
Constructing the entire trie from the state at a specific height in memory seems to be the primary cause of the OOM issue. This is a critical design flaw, in my opinion, especially since the chain state will continue to grow over time. While Polkadot may be fine for now, it will inevitably face the same problem in the long run if we don't address this. Please prioritize pushing this issue forward. @bkchr
Yeah, sounds reasonable. I need to think about how to improve this, but yeah, this should not happen ;)
Hey @bkchr, I understand this is a non-trivial issue, but I wanted to highlight that it’s a critical blocker for the Subcoin fast sync feature. I'm eager to collaborate closely with the Parity team to help push this forward. Let me know how I can contribute!
Yeah, that would be nice! I looked into it briefly. I still want to solve this together with #4. I have the following rough plan:
@liuchengxu Do you think that you could start looking into this? I think starting with the db part should be doable in parallel and can be its own PR.
@bkchr This makes sense to me; I'll look into the part about updating the state directly using the new keys.
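To make that direction a bit more concrete, here is a very rough sketch of committing downloaded key/value pairs to the database in batches instead of materialising the whole trie in memory at once. All names here (StateBackend, commit_batch, import_state_incrementally) are illustrative placeholders, not existing polkadot-sdk APIs:

/// Hypothetical backend interface; `commit_batch` stands in for whatever
/// database write path the real implementation would use.
trait StateBackend {
    fn commit_batch(&mut self, entries: Vec<(Vec<u8>, Vec<u8>)>);
}

/// Persist each batch of downloaded keys as it arrives, so peak memory is
/// bounded by the batch size rather than by the total state size. The state
/// root would still need to be computed and verified, but without holding
/// every node of the full trie in memory at the same time.
fn import_state_incrementally<B: StateBackend>(
    backend: &mut B,
    batches: impl IntoIterator<Item = Vec<(Vec<u8>, Vec<u8>)>>,
) {
    for batch in batches {
        backend.commit_batch(batch);
    }
}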
Description of bug
As the title suggests, using warp sync on a parachain causes OOM, crashing the client.
We've had this problem on Astar for a few months, and have recently uplifted to polkadot-sdk version v1.9.0, but are still seeing the problem. There are no notable traces in the log; it just explodes at some point. There's an issue opened in our repo, AstarNetwork/Astar#1110, with steps to reproduce as well as images of resource consumption before the crash.
We haven't been able to find similar issues or discussion related to the topic.
Steps to reproduce
Run the latest Astar node as described in the linked issue.