test: performance tests for v1.9.2
Signed-off-by: ywc689 <[email protected]>
ywc689 committed Jul 19, 2022
1 parent 793e11b commit 9be4822
Showing 10 changed files with 15,320 additions and 5 deletions.
2 changes: 2 additions & 0 deletions README.md
@@ -253,6 +253,8 @@ Our test shows the forwarding speed (pps) of DPVS is several times that of LVS and

![performance](./pic/performance.png)

Click [here](./test/release/v1.9.2/performance.md) for the latest performance data.

# License

Please refer to the [License](./LICENSE.md) file for details.
47 changes: 43 additions & 4 deletions src/VERSION
@@ -1,11 +1,50 @@
#!/bin/sh -
# program: dpvs
# Jan 4, 2022
#
# Rebase v1.8.12 to v1.9.0
# Jul 16, 2022
#

export VERSION=1.9
export RELEASE=1.alpha
export RELEASE=2

echo $VERSION-$RELEASE

* Dpvs: fix a crash problem when a timer is scheduled from within another timer's callback
* Dpvs: fix checksum problem caused by incorrect netif interface
* Toa: support linux kernel version v5.7.0+
* Dpvs: make debug fields in dp_vs_conn configurable for memory optimization
* Dpvs: fix weight ratio update problem in conhash schedule algorithm
* Dpvs: fix icmp6 checksum error caused by incorrect payload length endian in ipv6 header
* Dpvs: Add ipset framework and 12 set types(v1.8.12)
* Dpvs: Add l2/l3/l4 header parse apis from mbuf(v1.8.12)
* Dpvs: Add an ipset based tc classifier -- tc_cls_ipset(v1.8.12)
* Dpvs: Add config option "dedicated_queues" for bonding mode 4 (802.3ad)(v1.8.12)
* Keepalived: Add UDP_CHECK health checker(v1.8.12)
* Dpvs: Fix ipvs rr/wrr/wlc problem of uneven load distribution across dests
* Dpvs: Fix bonding mode 4 problem caused by LACP failure
* Keepalived: fix an exit problem when reload
* Dpvs: isolate kni ingress traffic using kni address flow
* Dpvs: update rss reta table according to configured workers after device bootup
* Dpvs: fix ipv6 neighbour ring full problem to kni isolated lcore
* Dpvs: add flame graph script for performance tests
* Dpvs: fix list/edit problem for MATCH type service (SNAT service)
* Dpvs: fix ICMPv6 sending failure problem caused by incorrect mtu
* Dpvs: fix a crash problem caused by incorrect mbuf pointer in IPv4 fragmentation
* Dpvs: fix dpvs worker blocking problem when async log is enabled
* Dpvs: make async log mempool size and log timestamp configurable
* Dpvs: enable dpvs log only when macro CONFIG_DPVS_LOG is defined
* Dpvs: fix some memory overflow problems when log messages are truncated
* Uoa: fix uoa data parse problem of ipv4 opp, and add a module parameter to parse uoa data in netfilter forward chain
* Dpvs: fix msg sequence duplicated problem in ipvs allow list
* Dpvs: fix icmp sending failure problem when no route cached in mbuf
* Dpvs: fix crash problem caused by using unsafe list macro in conhash
* Dpvs: fix compiling failure problem when icmp debug is enabled
* Dpvs: send tcp rst to both ends when snat connection expires
* Dpvs: fix incorrect uoa client sport in fnat64 problem
* Dpvs: fix incorrect oifname typo in MATCH type
* Keepalived: fix some compile problems found on ubuntu
* Dpvs: fix fullnat tcp forwarding problem when defer_rs_syn enabled
* Dpvs: use unified dest validation in mh scheduling algorithm
* Dpvs: expire quiescent connections after realserver was removed
* Ipvsadm: Use correct flag in listing ipvs connections
* Test: performance benchmark tests for v1.9.2
* Docs: update some docs
2 changes: 1 addition & 1 deletion test/flameGraph/run.sh
@@ -9,7 +9,7 @@ outfile=$2
[ ! -f "$infile" ] && echo -e "can't find input perf.data file $infile, please use 'perf record' to generate it" && exit 1
[ _$outfile = _ ] && echo -e "invalid out.file name" && exit 1

perf script -i perf.data &> perf.unfold
perf script -i $infile > perf.unfold
./stackcollapse-perf.pl perf.unfold &> perf.folded
./flamegraph.pl perf.folded > $outfile.svg
rm -f perf.unfold
22 changes: 22 additions & 0 deletions test/release/v1.9.2/performance.data
@@ -0,0 +1,22 @@
* TCP CPS/CC Tests
workers,cps;ipackets/pps,opackets/pps,ibytes/Bps,obytes/Bps;connections;pktRx,pktTx,bitsRx,bitsTx,dropTx
1,200000;1211533,1211847,99143458,102396220;1472000;600020,599988,393618488,382378808,0
2,360000;2166961,2166955,177320954,183100299;2701000;1072119,1076034,703360424,685830112,0
4,660000;3960726,3960788,324114391,334680450;4941000;1980045,1980054,1298916032,1261958232,0
8,1060000;6360626,6360628,520511025,537472046;7949000;3180092,3180068,2086137680,2026768232,0
10,1240000;7440784,7440727,608903706,628741279;9299000;3718514,3719316,2439334056,2370499504,0
16,1070000;6420639,6420548,525422150,542537169;8019000;3210000,3209989,2105751088,2045839664,0 (cross-numa-node)

* UDP PPS Tests
workers,connections;ipackets/pps,opackets/pps,ibytes/Bps,obytes/Bps;pktRx,pktTx,bitsRx,bitsTx,dropTx
1,2900;2900244,2900221,174014668,174013684;1449993,1450000,695996816,498800000,0
2,5000;5000418,5000370,300024968,300022497;2499954,2500000,1199978096,860000000,0
4,9200;9201066,9201048,552063906,552062986;4486101,4600001,2153329128,1582400344,0
8,9450;9451027,9451004,567061568,567060365;4723923,4724932,2267483216,1625376608,0

* Throughput Tests
workers,connections;ipackets/pps,opackets/pps,ibytes/Bps,obytes/Bps;pktRx,pktTx,bitsRx,bitsTx,dropTx
1,1000;1424608,1424599,1215824068,1215816616;712263,712285,4866168760,4860632840,0
2,1000;1424748,1424738,1215947746,1215939706;712247,712263,4866065328,4860482712,0
4,1000;1424876,1424870,1216052235,1216047912;712258,712238,4866134600,4860312112,0
8,1000;1424788,1424787,1215971428,1215970249;712261,712260,4866160976,4860462240,0
256 changes: 256 additions & 0 deletions test/release/v1.9.2/performance.md
@@ -0,0 +1,256 @@
DPVS v1.9.2 Performance Tests
===

* [Test Platform](#platform)
* [TCP CPS/CC Tests](#cps/cc)
* [UDP PPS Tests](#pps)
* [Throughput Tests](#throughput)


<a id='platform'/>

# Test Platform

The performance of DPVS v1.9.2 is evaluated on two physical servers: one serves as the DPVS server, and the other acts as both the backend server (RS) and the client (Client). RS and Client are driven by [dperf](https://github.com/baidu/dperf), a high-performance benchmark tool based on DPDK and developed by Baidu. The dperf server process and the dperf client process use isolated NIC interfaces, CPU cores, and hugepage memory so that both can run on a single machine.

### DPVS Server

+ CPU: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, 2 Sockets, 12 Cores per Socket, 2 Threads per Core
+ Memory: 188 GB
+ NIC: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2
+ OS: CentOS 7.6

### Dperf Server/Client

+ CPU: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, 2 Sockets, 10 Cores per Socket, 2 Threads per Core
+ Memory: 62 GB
+ NIC: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2
+ OS: CentOS 7.6
+ Dperf: v1.2.0

<a id='cps/cc'/>

# TCP CPS/CC Tests

CPS (Connections per Second) and CC (Concurrent Connections) tests are performed with extremely small packets and a variable `cps` setting on the dperf client. We gradually increase the dperf client's `cps` until packet loss is observed on DPVS; the CPS and CC measured at that point are the reported performance data.
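
To locate that inflection point, the ramp-up can be scripted. Below is a minimal sketch, assuming a dperf config file named `client.conf` and a `dpip` statistics check for drops on the DPVS side; both the file name and the drop check are illustrative, not taken from the original test setup:

```
# Hypothetical ramp loop: raise cps each round until DPVS starts dropping.
for cps in 200k 400k 600k 800k 1000k 1200k; do
    sed -i "s/^cps .*/cps $cps/" client.conf
    ./dperf -c client.conf                  # one round (see `duration` in the client config)
    dpip link show dpdk0 -s | grep -i drop  # stop at the first round with drops
done
```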

### Dperf Client

```
mode client
cpu 8-15
slow_start 60
tx_burst 128
launch_num 10
payload_size 1
duration 90s
protocol tcp
cps [refer to performance data]
port 0000:04:00.0 192.168.0.30 192.168.7.254
client 192.168.3.0 50
server 192.168.5.1 8
listen 80 1
```

### Dperf Server

```
mode server
cpu 0-7
tx_burst 128
payload_size 1
duration 100d
port 0000:04:00.1 192.168.1.30 192.168.7.254
client 192.168.0.28 1
client 192.168.1.28 1
client 192.168.1.30 1
client 192.168.3.0 200
server 192.168.6.100 8
listen 80 1
```

### DPVS

+ Service: 192.168.5.[1-8]:80, TCP, FullNAT, rr, syn-proxy off
+ Local IP: 192.168.3.[100-149]

```
TCP 192.168.5.1:80 rr
-> 192.168.6.100:80 FullNat 100 0 4
-> 192.168.6.101:80 FullNat 100 0 4
-> 192.168.6.102:80 FullNat 100 0 2
-> 192.168.6.103:80 FullNat 100 0 1
-> 192.168.6.104:80 FullNat 100 0 0
-> 192.168.6.105:80 FullNat 100 0 0
-> 192.168.6.106:80 FullNat 100 0 1
-> 192.168.6.107:80 FullNat 100 0 2
TCP 192.168.5.2:80 rr
-> 192.168.6.100:80 FullNat 100 0 1
-> 192.168.6.101:80 FullNat 100 0 2
...
...
```
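
For reference, services like these are typically created with DPVS's patched `ipvsadm` and `dpip` tools. A minimal sketch for one VIP follows; the interface name `dpdk0` and the exact flag spellings are assumptions based on common DPVS usage, not commands taken from the test scripts.

```
# Hypothetical setup of one FullNAT service (rr, syn-proxy off).
# Addresses mirror the test topology above; repeat for each VIP/RS/LIP.
ipvsadm -A -t 192.168.5.1:80 -s rr                               # add virtual service
ipvsadm -a -t 192.168.5.1:80 -r 192.168.6.100:80 -b -w 100       # add RS in FullNAT mode
ipvsadm --add-laddr -z 192.168.3.100 -t 192.168.5.1:80 -F dpdk0  # add a local IP (LIP)
```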

### Performance Data

| workers | cps | ipackets/pps | opackets/pps | ibytes/Bps | obytes/Bps | connections | dperf:pktRx | dperf:pktTx | dperf:bitsRx | dperf:bitsTx | dperf:dropTx |
| ------- | --------- | ------------ | ------------ | ----------- | ----------- | ----------- | ----------- | ----------- | ------------- | ------------- | ------------ |
| 1 | 200,000 | 1,211,533 | 1,211,847 | 99,143,458 | 102,396,220 | 1,472,000 | 600,020 | 599,988 | 393,618,488 | 382,378,808 | 0 |
| 2 | 360,000 | 2,166,961 | 2,166,955 | 177,320,954 | 183,100,299 | 2,701,000 | 1,072,119 | 1,076,034 | 703,360,424 | 685,830,112 | 0 |
| 4 | 660,000 | 3,960,726 | 3,960,788 | 324,114,391 | 334,680,450 | 4,941,000 | 1,980,045 | 1,980,054 | 1,298,916,032 | 1,261,958,232 | 0 |
| 8 | 1,060,000 | 6,360,626 | 6,360,628 | 520,511,025 | 537,472,046 | 7,949,000 | 3,180,092 | 3,180,068 | 2,086,137,680 | 2,026,768,232 | 0 |
| 10 | 1,240,000 | 7,440,784 | 7,440,727 | 608,903,706 | 628,741,279 | 9,299,000 | 3,718,514 | 3,719,316 | 2,439,334,056 | 2,370,499,504 | 0 |
| 16 | 1,070,000 | 6,420,639 | 6,420,548 | 525,422,150 | 542,537,169 | 8,019,000 | 3,210,000 | 3,209,989 | 2,105,751,088 | 2,045,839,664 | 0 |


![CPS/CC](./pics/tcp_cps.png)

With 8 workers, DPVS v1.9.2 can establish **1,000,000 new connections per second** while holding **8,000,000 concurrent connections**. Performance scales approximately linearly when the worker number is below 10, but an obvious loss appears with 16 workers. One reason is that DPVS does not eliminate all race conditions in the datapath, and the problem worsens as the worker count grows. Besides, with 16 workers some DPVS workers are assigned to CPU cores on a NUMA socket other than the NIC's, since our DPVS server has only 12 CPU cores per socket.

Let's take a closer look at the `cpu-clock` events of DPVS with the Linux performance analysis tool `perf`. We build DPVS with debug info and then run the CPS/CC tests with 1 worker and 8 workers, with dperf `cps` configured to 100,000 and 600,000 respectively. The resulting flame graphs are shown below.
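
The graphs can be reproduced roughly as follows, using `perf` together with the `test/flameGraph/run.sh` helper shown earlier; the sampling rate and the 60-second window are our illustrative choices:

```
# Sample on-CPU stacks of the running dpvs process, then fold them
# into a flame graph with the repository's helper script.
perf record -g -F 99 -p "$(pidof dpvs)" -- sleep 60   # writes perf.data
./run.sh perf.data worker8                            # emits worker8.svg
```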

![perf-flame-worker-1](./pics/worker1.svg)

![perf-flame-worker-8](./pics/worker8.svg)

<a id='pps'/>

# UDP PPS Tests

In the PPS tests, the dperf client keeps a fixed `cps` of 3k and a `keepalive` of 2ms, and adjusts the number of concurrent connections `cc` to generate different `pps` loads. As with the CPS/CC tests, an extremely small payload of 1 byte is used, and the tests run over UDP. Besides, `tx_burst` on the dperf client is set to 1 to smooth out traffic bursts.
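
As a sanity check, the packet rate follows directly from `cc` and `keepalive`: each connection sends one packet per `keepalive` interval in each direction. A back-of-the-envelope estimate (ours, not from the original report):

```
# At cc=9200 and keepalive=2ms, the client sends
#   9200 / 0.002 = 4,600,000 pps per direction,
# and DPVS forwards both directions, i.e. about 9.2M pps in total,
# matching the 4-worker row in the table below.
cc=9200; keepalive_ms=2
echo $(( cc * 1000 / keepalive_ms ))   # 4600000
```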

### Dperf Client

```
mode client
cpu 8-15
slow_start 60
tx_burst 128
launch_num 1
payload_size 1
duration 90s
protocol udp
cps 3k
cc [refer to performance data]
keepalive 2ms
port 0000:04:00.0 192.168.0.30 192.168.7.254
client 192.168.3.0 50
server 192.168.5.1 8
listen 80 1
```
### Dperf Server

```
mode server
cpu 0-7
tx_burst 128
payload_size 1
duration 100d
protocol udp
keepalive 10s
port 0000:04:00.1 192.168.1.30 192.168.7.254
client 192.168.0.28 1
client 192.168.1.28 1
client 192.168.1.30 1
client 192.168.3.0 200
server 192.168.6.100 8
listen 80 1
```

### DPVS

+ Service: 192.168.5.[1-8]:80, UDP, FullNAT, rr, uoa off
+ Local IP: 192.168.3.[100-149]

```
UDP 192.168.5.1:80 rr
-> 192.168.6.100:80 FullNat 100 0 0
-> 192.168.6.101:80 FullNat 100 0 0
-> 192.168.6.102:80 FullNat 100 0 0
-> 192.168.6.103:80 FullNat 100 0 0
-> 192.168.6.104:80 FullNat 100 0 0
-> 192.168.6.105:80 FullNat 100 0 0
-> 192.168.6.106:80 FullNat 100 0 0
-> 192.168.6.107:80 FullNat 100 0 0
UDP 192.168.5.2:80 rr
-> 192.168.6.100:80 FullNat 100 0 0
-> 192.168.6.101:80 FullNat 100 0 0
...
...
```

### Performance Data

| workers | connections | ipackets/pps | opackets/pps | ibytes/Bps | obytes/Bps | dperf:pktRx | dperf:pktTx | dperf:bitsRx | dperf:bitsTx | dperf:dropTx |
| ------- | ----------- | ------------ | ------------ | ----------- | ----------- | ----------- | ----------- | ------------- | ------------- | ------------ |
| 1 | 2,900 | 2,900,244 | 2,900,221 | 174,014,668 | 174,013,684 | 1,449,993 | 1,450,000 | 695,996,816 | 498,800,000 | 0 |
| 2 | 5,000 | 5,000,418 | 5,000,370 | 300,024,968 | 300,022,497 | 2,499,954 | 2,500,000 | 1,199,978,096 | 860,000,000 | 0 |
| 4 | 9,200 | 9,201,066 | 9,201,048 | 552,063,906 | 552,062,986 | 4,486,101 | 4,600,001 | 2,153,329,128 | 1,582,400,344 | 0 |
| 8 | 9,450 | 9,451,027 | 9,451,004 | 567,061,568 | 567,060,365 | 4,723,923 | 4,724,932 | 2,267,483,216 | 1,625,376,608 | 0 |

![PPS](./pics/udp_pps.png)

As shown above, DPVS v1.9.2 reaches its PPS peak (about 9,000,000 PPS) with 4 workers in these tests. A 25G/100G NIC would be needed for a higher PPS test.

<a id='throughput'/>

# Throughput Tests

In the throughput tests, the dperf client keeps a fixed `cps` of 400 and a `keepalive` of 1ms, and adjusts the number of concurrent connections `cc` to generate different traffic levels. The `payload_size` of both the dperf server and the dperf client is set to 800 bytes, and the TCP protocol is used.
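
With an 800-byte payload, each packet carries roughly 854 bytes above the physical-layer overhead (800 payload + 20 TCP + 20 IP + 14 Ethernet), so the measured packet rate corresponds to nearly full 10G line rate. A quick estimate (ours, not from the original report):

```
# About 1.42 Mpps at ~854 B/pkt:
pps=1424608; pkt_bytes=854
echo $(( pps * pkt_bytes * 8 ))   # 9732921856 bits/s, ~9.7 Gbps per direction
```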

### Dperf Client

```
mode client
cpu 8-15
slow_start 60
tx_burst 128
launch_num 10
payload_size 800
duration 90s
protocol tcp
cps 400
cc [refer to performance data]
keepalive 1ms
port 0000:04:00.0 192.168.0.30 192.168.7.254
client 192.168.3.0 50
server 192.168.5.1 8
listen 80 1
```

### Dperf Server

```
mode server
cpu 0-7
tx_burst 128
payload_size 800
duration 100d
protocol tcp
keepalive 10s
port 0000:04:00.1 192.168.1.30 192.168.7.254
client 192.168.0.28 1
client 192.168.1.28 1
client 192.168.1.30 1
client 192.168.3.0 200
server 192.168.6.100 8
listen 80 1
```

### DPVS

The DPVS configuration is the same as in the `TCP CPS/CC Tests`.


### Performance Data

| workers | connections | ipackets/pps | opackets/pps | ibytes/Bps | obytes/Bps | dperf:pktRx | dperf:pktTx | dperf:bitsRx | dperf:bitsTx | dperf:dropTx |
| ------- | ----------- | ------------ | ------------ | ------------- | ------------- | ----------- | ----------- | ------------- | ------------- | ------------ |
| 1 | 1,000 | 1,424,608 | 1,424,599 | 1,215,824,068 | 1,215,816,616 | 712,263 | 712,285 | 4,866,168,760 | 4,860,632,840 | 0 |
| 2 | 1,000 | 1,424,748 | 1,424,738 | 1,215,947,746 | 1,215,939,706 | 712,247 | 712,263 | 4,866,065,328 | 4,860,482,712 | 0 |
| 4 | 1,000 | 1,424,876 | 1,424,870 | 1,216,052,235 | 1,216,047,912 | 712,258 | 712,238 | 4,866,134,600 | 4,860,312,112 | 0 |
| 8 | 1,000 | 1,424,788 | 1,424,787 | 1,215,971,428 | 1,215,970,249 | 712,261 | 712,260 | 4,866,160,976 | 4,860,462,240 | 0 |

![Throughput](./pics/tcp_throughput.png)

As shown above, DPVS v1.9.2 easily saturates the full bandwidth of a 10G NIC with only one worker.
Binary file added test/release/v1.9.2/pics/tcp_cps.png
Binary file added test/release/v1.9.2/pics/tcp_throughput.png
Binary file added test/release/v1.9.2/pics/udp_pps.png