VirtIO network driver performance #113

Open

Kswin01 opened this issue May 14, 2024 · 2 comments

Kswin01 (Contributor) commented May 14, 2024

While testing the VirtIO net driver with the UDP echo socket, we ended up stalling the driver once the requested throughput reached approximately 600-700 Mbps with 100,000 samples. The driver is able to match the requested throughput up until this limit. On inspection, we stopped receiving IRQs from QEMU before the ipbench run completed.

When testing with the TCP echo server, the driver does not stall when tested up to 1 Gbps with 200,000 samples.
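
For reference, one classic way a VirtIO driver ends up missing IRQs is the race between draining the used ring and re-enabling notifications: if the device marks a buffer used after the driver's last check but before interrupts are re-enabled, that buffer may never raise an IRQ and the driver waits forever. Below is a minimal sketch of the re-check pattern the VirtIO spec recommends; the structure and field names are hypothetical stand-ins, not this driver's actual code.

#include <stdint.h>
#include <stdbool.h>

#define VIRTQ_AVAIL_F_NO_INTERRUPT 1

/* Hypothetical view of a virtqueue; names are illustrative only. */
struct virtq {
    volatile uint16_t *used_idx;    /* device-written used ring index */
    uint16_t last_seen_used;        /* driver's consumption cursor */
    volatile uint16_t *avail_flags; /* driver-written notification flags */
};

static bool virtq_has_used(struct virtq *vq)
{
    return vq->last_seen_used != *vq->used_idx;
}

/* Drain completions, then re-enable interrupts. The crucial step is the
 * final re-check: a buffer can become used between the drain loop exiting
 * and the flag being cleared, and that buffer may never generate an IRQ. */
static void handle_rx_irq(struct virtq *vq)
{
    do {
        *vq->avail_flags |= VIRTQ_AVAIL_F_NO_INTERRUPT;  /* suppress IRQs */
        while (virtq_has_used(vq)) {
            /* ... consume the next used descriptor, hand the packet up ... */
            vq->last_seen_used++;
        }
        *vq->avail_flags &= ~VIRTQ_AVAIL_F_NO_INTERRUPT; /* re-enable IRQs */
        /* A memory barrier is needed here on real hardware (e.g. dmb). */
    } while (virtq_has_used(vq)); /* re-check to close the race window */
}

Whether this is what is happening here would need checking against the driver's actual IRQ path; it is just the first pattern worth auditing given the symptom.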

@Ivan-Velickovic (Collaborator)

As noticed by @alwin-joshy, there is an overflow in the ARM timer driver here:

return (ncycles * NS_IN_S) / hz;

This may be why we could never finish an echo server run. Once we fix the bug, we should re-run and see whether anything has changed with the echo server.
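
For context on the overflow: ncycles * NS_IN_S is computed in 64 bits, and with NS_IN_S = 10^9 the product wraps once ncycles exceeds UINT64_MAX / 10^9, roughly 1.8 x 10^10 cycles, which at typical generic-timer frequencies of tens of MHz is only a few minutes of uptime. A minimal sketch of an overflow-safe conversion, assuming a 64-bit counter and the same NS_IN_S/hz as the snippet above (the function name is hypothetical):

#include <stdint.h>

#define NS_IN_S 1000000000ULL

/* Split the cycle count into whole seconds plus a remainder so every
 * intermediate product fits in 64 bits: rem < hz, so rem * NS_IN_S stays
 * below hz * 1e9, which is safe for any plausible timer frequency. */
static uint64_t cycles_to_ns(uint64_t ncycles, uint64_t hz)
{
    uint64_t secs = ncycles / hz;   /* whole elapsed seconds */
    uint64_t rem  = ncycles % hz;   /* leftover cycles, always < hz */
    return secs * NS_IN_S + (rem * NS_IN_S) / hz;
}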

@Ivan-Velickovic (Collaborator)

With #266, the UDP numbers are:

100000000,99994285,99999285,1472,86,494,3460,228.74,469 ,0,0
200000000,199984546,199998544,1472,333,641,5116,289.10,598 ,0,0
300000000,299946307,299997298,1472,306,643,6281,316.94,593 ,0,0
400000000,399883729,399999695,1472,360,726,2909,258.85,680 ,0,0
500000000,499748131,499998005,1472,378,841,4086,319.11,827 ,0,0
600000000,599667168,599996985,1472,391,861,2883,299.64,840 ,0,0
700000000,699658013,699993849,1472,388,1352,7841,712.73,1232 ,0,0
800000000,799742474,799998391,1472,398,1636,7882,856.23,1471 ,0,0
900000000,896201645,899992578,1472,509,3952,9586,1290.61,3904 ,0,0
1000000000,992443741,999996237,1472,2725,12128,37066,6147.70,10852 ,0,0

and the TCP numbers are:

100000000,99996501,99999501,1472,127,510,7637,207.81,532 ,0,0
200000000,199995822,199999822,1472,172,547,2895,176.10,537 ,0,0
300000000,299969878,299999875,1472,237,636,8097,366.90,607 ,0,0
400000000,399958428,399998424,1472,294,858,7730,713.75,742 ,0,0
500000000,499837964,499997912,1472,747,1762,6510,908.70,1485 ,0,0
600000000,388616724,397041935,1472,1235,122060,1546465,209040.68,97280 ,0,0
700000000,508475710,526341004,1472,1142,151963,169546,29879.47,160668 ,0,0
800000000,507738856,519183290,1472,1268,101703,113283,14023.54,104440 ,0,0
900000000,504041081,515936451,1472,1265,105637,120383,13518.83,107981 ,0,0
1000000000,508010461,520256053,1472,1430,109043,123426,13338.07,111302 ,0,0

Not sure why TCP tops out at about half a gigabit.
