Investigate Max Throughput for Read Only Transactions #1634
Comments
Running |
What was |
@linh2931 -- Is there a value you'd recommend for |
32 threads * 60ms * 2 blocks_per_second = 3840ms. |
Very strange, this almost seems like something is limiting it to Ran:
|
If only using 1 thread then that is 60ms * 2 blocks = 120ms. 120ms / 91 = 1.3ms per read-only trx. Seems about right. |
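The thread-budget arithmetic above can be sketched out explicitly (numbers taken from this thread; the per-transaction cost is derived from the observed 91 TPS, not measured directly, and the 60ms read window and 2 blocks/second are the assumptions stated earlier):

```python
# Rough read-only throughput arithmetic using the numbers from this thread.
# Assumes a 60 ms read-only window per block and 2 blocks produced per second.
read_window_ms = 60
blocks_per_sec = 2
threads = 32

# Total read-only thread time available per wall-clock second across all threads.
budget_ms = threads * read_window_ms * blocks_per_sec
print(budget_ms)  # 3840

# Single-threaded budget, and the per-transaction cost implied by 91 TPS.
single_thread_ms = read_window_ms * blocks_per_sec  # 120
per_trx_ms = single_thread_ms / 91
print(round(per_trx_ms, 1))  # 1.3
```

If 91 TPS fits in a single thread's 120ms budget, the observed numbers are consistent with only one thread doing the work.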
We recommended 165000 for API operators. But as Kevin and Peter noted, something else must have gone wrong. I will take a look at this too. |
Peter showed me an example trace from one of these and it was 27ms elapsed time. Sure seems like something is causing these to only run on one thread or preventing them from running in parallel. |
|
I can only get to 91 TPS on main branch too. I am going to run on selected PR branches to pinpoint which one causes the problem. |
Something odd is happening. If you run
This is with: Also for one sample only 141 out of 9400 read-only trxs ran off the main thread. Clearly it is switching way too often. Likely it thinks there are no read-only trxs to execute. |
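The sample above can be put in perspective with a quick calculation (141 and 9400 are the figures quoted in that comment):

```python
# Fraction of read-only transactions that actually ran off the main thread
# in the sample quoted above.
off_main = 141
total = 9400
pct = 100 * off_main / total
print(round(pct, 1))  # 1.5
```

Roughly 1.5% off-thread is consistent with the main thread picking up nearly all of the work.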
On Peter's machine the average elapsed time for the read-only trx test case was 19.5us. That is a rather small amount of time. It could be that our queue contention and scheduling overhead just doesn't leave time to keep anything queued. Also, the main thread is set up with a higher priority, so maybe it simply picks up all the work. To test this possibility we need a much heavier (slower) read-only trx test. Can we modify the test read-only trx to loop for, say, something like 10ms to see if that produces the expected results? |
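The idea of a deliberately heavy test transaction can be approximated by a busy loop. A minimal sketch in Python (the real test would be a contract action in the harness; `burn_ms` is a hypothetical name used here for illustration only):

```python
import time

def burn_ms(duration_ms: float) -> None:
    """Busy-wait for roughly duration_ms milliseconds, simulating a slow
    read-only transaction that keeps its worker thread occupied."""
    deadline = time.monotonic() + duration_ms / 1000.0
    while time.monotonic() < deadline:
        pass  # spin instead of sleeping, so CPU time is actually consumed

start = time.monotonic()
burn_ms(10)
elapsed_ms = (time.monotonic() - start) * 1000
print(elapsed_ms >= 10)  # True
```

With per-transaction work around 10ms instead of ~20us, queued transactions should accumulate long enough for the extra read-only threads to pick them up, if parallel dispatch is working.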
I hardcoded the number of read-only threads to 0; I could still only get 91 TPS. Something might be wrong with the performance harness reporting? |
Ha! |
With the changes proposed in #1637 I was able to get the following results:
running: |
As a further data point: running a performance run with transfer transactions instead of read-only ones, I saw similar performance on my machine, which suggests something is keeping read-only from truly taking advantage of the extra threads available.
running: |
We have a good perf environment set up with a GitHub runner. Closing. Issue #1662 tracks any improvements in the code base. |
Summary
Running the performance load test, we were only able to get a max read-only TPS of 91. Ran an API node with 32 read-only threads.
The load test tried the following TPS values:
Nodeos Version
Running on ubuntu 22.04 with Nodeos build from tag
API-early-v5.0-b73c28d51
Read Only Transaction
Command
Load
Load was almost always less than one.