Rare mempool synch warnings.. not so rare for this 1 user #216

Open
cculianu opened this issue Nov 25, 2023 · 2 comments

cculianu (Owner) commented Nov 25, 2023

Update on 1.9.6+. Testnet has been running for 2 days w/o issue - no asserts.

I have not seen the asserts mentioned in #141 now that I updated Mainnet. But I do see these variants of "Tx dropped out of mempool":

  1. Single assert (yellow):
<Controller> Processed 1 new block with 4042 txs (6896 inputs, 8813 outputs, 7873 addresses), verified ok.
<Controller> Block height 818273, up-to-date
<SynchMempool> Tx dropped out of mempool (possibly due to RBF): 46fd88359f8a91441fb46a813d97e89c1c4e0196775e23af1adaf473f8d1e3c5 (error response: No such mempool or blockchain transaction. Use gettransaction for wallet transactions.), ignoring mempool tx ...
<Controller> 103155 mempool txs involving 502600 addresses
<Controller> 103731 mempool txs involving 504035 addresses
<Controller> 104345 mempool txs involving 505852 addresses
  2. Triple assert (yellow/red):
<Controller> 106195 mempool txs involving 511583 addresses
<Controller> 106551 mempool txs involving 512882 addresses
<SynchMempool> Synch mempool expected to drop 3217, but in fact dropped 3327 -- retrying getrawmempool
<SyncMempoolPreCache> SynchMempoolTask::Precache::threadFunc: Unable to find prevout 832ac0b71d5fe00baeec6129298e0e55ee73382f327ca292187d3e8a9fa2e286:12 in DB for tx 5b73b9fac71356db34a89c26be19bd3d906c304f7625300dbb3d3b594aebdf90 (possibly a block arrived while synching mempool, will retry)
<Controller> Failed to synch blocks and/or mempool
<Controller> Block height 818274, downloading new blocks ...
<Controller> Processed 1 new block with 3218 txs (7344 inputs, 9206 outputs, 11379 addresses), verified ok.
<Controller> Block height 818274, up-to-date
  3. Quad assert (yellow/red):
<Controller> Block height 818275, up-to-date
<Controller> 105716 mempool txs involving 507956 addresses
<Controller> 106129 mempool txs involving 509332 addresses
<SynchMempool> Synch mempool expected to drop 3285, but in fact dropped 3320 -- retrying getrawmempool
<SyncMempoolPreCache> SynchMempoolTask::Precache::threadFunc: Unable to find prevout 899b2e34b038746c50b01aaaa49a00152ffb3be4d80898fc01f71b5b47ab7340:0 in DB for tx 3c4768ac8131020b1964f43621cd6d59bc5ecef788efe2dfebf3946bd249d449 (possibly a block arrived while synching mempool, will retry)
<SynchMempool> processResults: precache->thread errored out, aborting SynchMempoolTask
<Controller> Failed to synch blocks and/or mempool
<Controller> Block height 818276, downloading new blocks ...
<Controller> Processed 1 new block with 3287 txs (6655 inputs, 7832 outputs, 9150 addresses), verified ok.
<Controller> Block height 818276, up-to-date

Are these normal and expected given the state of the mempool, that is, benign? Or something else?

Thanks

Originally posted by @Francisco-DAnconia in #214 (comment)

@cculianu cculianu added the Requires Investigation Not clear if bug here or bug outside of Fulcrum label Nov 25, 2023
@cculianu cculianu self-assigned this Nov 25, 2023
cculianu (Owner, Author) commented:

Note to anybody reading this:

The first error (1) above is expected to happen occasionally. It is simply impossible to synch the mempool 100% correctly in an atomic fashion, since the mempool changes quickly on BTC when it is full. Mempool synch is, in other words, "racey" .. as you synch, the mempool can change underneath you in inconsistent ways (mainly due to RBF, but not only).
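To illustrate that race (this is not Fulcrum's actual implementation, which is C++), here is a minimal Python sketch against bitcoind's JSON-RPC interface. The URL, credentials, and error-handling policy are placeholder assumptions; the point is only that txids from a mempool snapshot can vanish before they are fetched:

```python
# Minimal sketch of the mempool-synch race described above (not Fulcrum's code).
# Assumes a local bitcoind and the python-bitcoinrpc package; URL/credentials
# are placeholders.
from bitcoinrpc.authproxy import AuthServiceProxy, JSONRPCException

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")

# 1) Snapshot the mempool txids.
snapshot = set(rpc.getrawmempool())

# 2) Fetch each tx. Between steps 1 and 2 the node's mempool keeps changing
#    (RBF replacements, evictions, a new block confirming txs), so some txids
#    from the snapshot may already be gone -- the
#    "Tx dropped out of mempool (possibly due to RBF)" case in the log above.
fetched, dropped = {}, []
for txid in snapshot:
    try:
        fetched[txid] = rpc.getrawtransaction(txid)
    except JSONRPCException as e:
        # Typically error -5: "No such mempool or blockchain transaction."
        dropped.append((txid, e.error.get("message", "")))

print(f"fetched {len(fetched)} txs, {len(dropped)} dropped mid-synch")
```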

The other 2 errors are rarer .. I have never seen them happen .. the fact that they happened for you successively makes me curious as to WHY they happened in such a common fashion.

In all cases the errors are recoverable and Fulcrum eventually settles. I will have to examine the other 2 errors in more detail though, because something is fishy there.

Francisco-DAnconia commented:

Update on frequency.

Since my comment in #214, I've seen the "3 assert" variant 5 times and the "4 assert" variant twice (yesterday and today).

However, these last 7 asserts are not close together at all (wrt time/block#), whereas the first occurrences were within a few blocks of each other.
