Performance issues #25

Open
roderik opened this issue May 17, 2017 · 8 comments

Comments

@roderik

roderik commented May 17, 2017

I'm running quite an active network (20 tx/block, 10 s block time, 124k blocks) and the explorer has died completely.

The log is full of the following, but no more pages are being served.

block_tx 124391 7697233
block_tx 124391 7697234
block_tx 124391 7697235
block_tx 124391 7697236
block_tx 124391 7697237
block_tx 124391 7697238
block_tx 124391 7697239
commit
block_tx 124392 7697240
block_tx 124392 7697241
block_tx 124392 7697242
block_tx 124392 7697243
block_tx 124392 7697244
block_tx 124392 7697245
block_tx 124392 7697246
block_tx 124392 7697247
block_tx 124392 7697248
block_tx 124392 7697249
block_tx 124392 7697250
block_tx 124392 7697251
block_tx 124392 7697252
block_tx 124392 7697253
block_tx 124392 7697254
block_tx 124392 7697255
block_tx 124392 7697256
block_tx 124392 7697257
block_tx 124392 7697258
block_tx 124392 7697259
commit
block_tx 124393 7697260
block_tx 124393 7697261
block_tx 124393 7697262
block_tx 124393 7697263
block_tx 124393 7697264
block_tx 124393 7697265
block_tx 124393 7697266
block_tx 124393 7697267
block_tx 124393 7697268
block_tx 124393 7697269
block_tx 124393 7697270
block_tx 124393 7697271
block_tx 124393 7697272
block_tx 124393 7697273
block_tx 124393 7697274
block_tx 124393 7697275
block_tx 124393 7697276
block_tx 124393 7697277
block_tx 124393 7697278
block_tx 124393 7697279
block_tx 124393 7697280
block_tx 124393 7697281
block_tx 124393 7697282
block_tx 124393 7697283
block_tx 124393 7697284
block_tx 124393 7697285
block_tx 124393 7697286
block_tx 124393 7697287
block_tx 124393 7697288
block_tx 124393 7697289
block_tx 124393 7697290
block_tx 124393 7697291
block_tx 124393 7697292
block_tx 124393 7697293
block_tx 124393 7697294
block_tx 124393 7697295
block_tx 124393 7697296
block_tx 124393 7697297
block_tx 124393 7697298
block_tx 124393 7697299
block_tx 124393 7697300
block_tx 124393 7697301
block_tx 124393 7697302
block_tx 124393 7697303
block_tx 124393 7697304
block_tx 124393 7697305
block_tx 124393 7697306
block_tx 124393 7697307
block_tx 124393 7697308
block_tx 124393 7697309
block_tx 124393 7697310
block_tx 124393 7697311
block_tx 124393 7697312
block_tx 124393 7697313
block_tx 124393 7697314
block_tx 124393 7697315
block_tx 124393 7697316
block_tx 124393 7697317
commit

curl -i http://localhost:2750 just stalls

The explorer just goes to 100% CPU on one core and stays there indefinitely.

[Screenshot from 2017-05-17 08:27:00]

Any clue on how to revive the explorer?

@gidgreen
Contributor

Do you have some Python tool that lets you trace which piece of code it's stuck on?

@roderik
Author

roderik commented May 17, 2017

I can install and configure anything we need to get to the bottom of this, but I have no clue about the Python ecosystem.

An educated guess: all of these transactions are in one stream. Abe can handle the 90 GB Bitcoin blockchain even on SQLite, so the problem is probably in the stream additions that are triggered for the homepage.

@gidgreen
Contributor

You should be able to use the basic pdb Python debugger – see the documentation here:

https://docs.python.org/2/library/pdb.html

We haven't tried it yet, but you should be able to add import pdb; pdb.set_trace() at the top of the main Mce/abe.py file and then re-run the installation instructions in the Explorer README. Use the continue command in the debugger to let the Explorer start running; when it looks stuck, use the where or list commands to see which piece of code it's stuck on.
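
A minimal sketch of that procedure, for reference (the placement near the top of Mce/abe.py follows the suggestion above; the comments just list standard pdb commands):

import pdb; pdb.set_trace()   # pause here and drop into the interactive debugger

# Common commands at the (Pdb) prompt:
#   c / continue   resume execution
#   w / where      print the current stack trace
#   l / list       show source code around the current line

If the process only appears stuck after continue, one alternative (an assumption, not something tried in this thread) is to start the Explorer under python -m pdb, so that pressing Ctrl-C drops into post-mortem debugging at the point where it was spinning.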

@bitcartel
Contributor

@roderik Are you using dummy/test data? If yes, maybe you could share all the chain and explorer data so we can try and replicate.

@easeev

easeev commented Sep 20, 2019

I have a similar issue.

Debugged a little:

2019-09-20 10:11:31.953006
getting num_txs
2019-09-20 10:11:31.985604
getting num_addresses
2019-09-20 10:16:05.755095 <- took almost 5 mins
getting num_peers
2019-09-20 10:16:05.821592
getting num_assets
2019-09-20 10:16:05.876624
getting num_streams
2019-09-20 10:16:21.742359
got all nums
2019-09-20 10:16:21.743385
getting mempool
2019-09-20 10:16:21.791694
getting recenttx
2019-09-20 10:17:47.790923 <- took more than 1 min
getting sorted_mempool
127.0.0.1 - - [20/Sep/2019 10:17:47] "GET / HTTP/1.1" 200 6175

These queries look suboptimal when you have a significant number of transactions (as any DISTINCT query over a complex table/view without proper indexes would be):

SELECT COUNT(DISTINCT(pubkey_hash)) FROM txout_detail WHERE chain_id = ?

SELECT DISTINCT tx_hash
FROM txout_detail
WHERE chain_id=? AND pubkey_id != ?
ORDER BY block_height DESC, tx_id DESC
LIMIT ?

In my case it's actually not that large but already quite problematic:

sqlite> select count(*) from txout_detail;
1758971
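
As a side note (not from the thread), here is a small sketch for checking how SQLite plans these queries against an existing Explorer database; the database file name and the parameter values are assumptions to adjust for your setup:

import sqlite3

conn = sqlite3.connect("abe.sqlite")  # assumed path to the Explorer's SQLite file

# Second query from the comment above (recent transactions for the homepage).
query = """
SELECT DISTINCT tx_hash
FROM txout_detail
WHERE chain_id = ? AND pubkey_id != ?
ORDER BY block_height DESC, tx_id DESC
LIMIT ?
"""

# EXPLAIN QUERY PLAN shows whether SQLite resolves the txout_detail view with
# index lookups or with full scans plus temporary B-trees for DISTINCT/ORDER BY.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (1, 0, 10)):
    print(row)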

@easeev

easeev commented Oct 11, 2019

Did anyone have time to look at this issue? It looks critical for networks with a significant number of transactions. @gidgreen @bitcartel

@gidgreen
Contributor

This should be fixable by adding the appropriate indexes at the time of table creation. Do you want to try adding these indexes during initialization in DataStore.py, restarting the Explorer afresh, and confirming?
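
For illustration, a hedged sketch of what such indexes could look like. txout_detail is a view, so new indexes would have to go on the base tables it joins; the table and column names below are assumptions inferred from the columns used in the slow queries, this is a standalone script rather than the DataStore.py initialization itself, and it is not a verified fix (per the follow-up below, indexes alone did not resolve the issue):

import sqlite3

conn = sqlite3.connect("abe.sqlite")  # assumed path to the Explorer's SQLite file

# Hypothetical candidate indexes on base tables behind the txout_detail view.
candidate_indexes = [
    "CREATE INDEX IF NOT EXISTS x_txout_pubkey_id ON txout (pubkey_id)",
    "CREATE INDEX IF NOT EXISTS x_block_tx_tx_id ON block_tx (tx_id)",
]

for ddl in candidate_indexes:
    conn.execute(ddl)
conn.commit()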

@easeev

easeev commented Dec 19, 2019

Adding indexes didn't help. I implemented a workaround using more optimal queries: chainstack#4.
