Lower throughput with only one thread in vxlan test #2
roidayan pushed a commit that referenced this issue on Feb 9, 2022:
This test verifies unidirectional UDP offload via 4 test scenarios:

Case #1: TCP connections in 'new' state will not be offloaded.

Case #2: UDP connections in 'new' state will be offloaded - after warming up unidirectional UDP traffic, the unidirectional UDP packets can no longer be captured on the REP.

Case #3: UDP connections that change to 'est' state have the 'new' rule removed from hw and are offloaded again - after the unidirectional UDP traffic has been offloaded, send response packets from the other side so the UDP connections' CT state changes to 'est'; the offloaded rule for the unidirectional UDP flows is removed from hw and new rules for the 'est' UDP flows are offloaded. During this transition, packets can be captured again on the REP; after the switchover, no packets can be captured on the REP.

Case #4: UDP connections in 'new' state scale test - scale test for unidirectional UDP flow offload.

Issue: 2829954
Change-Id: Ie18de3a8c77e40bc4a397e50179118c6bd924827
Signed-off-by: Gavin Li <[email protected]>
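A minimal sketch of how Case #2 (unidirectional UDP traffic disappearing from the REP once offloaded) might be checked from a shell test, assuming iperf3 for the UDP stream and tcpdump on the representor; the `REP` and `REMOTE_IP` names are hypothetical, not taken from the actual test:

```bash
#!/bin/bash
# Hypothetical sketch: verify unidirectional UDP traffic stops hitting the REP once offloaded.
REP=${REP:-enp8s0f0_0}          # representor of the VF sending UDP (assumed name)
REMOTE_IP=${REMOTE_IP:-7.7.7.2} # peer address used for the UDP stream (assumed)

# Warm up a unidirectional UDP stream (no replies, so CT stays in 'new').
timeout 5 iperf3 -c "$REMOTE_IP" -u -b 10M -t 5 &

sleep 2  # give the driver time to offload the 'new' UDP flow

# After warm-up, offloaded packets should bypass the REP, so tcpdump should see ~0 UDP packets.
pkts=$(timeout 3 tcpdump -nni "$REP" -c 10 udp 2>/dev/null | wc -l)
if [ "$pkts" -gt 0 ]; then
    echo "FAIL: still capturing UDP packets on $REP (offload not active?)"
else
    echo "OK: no UDP packets seen on $REP, flow appears offloaded"
fi
wait
```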
roidayan pushed a commit that referenced this issue on Oct 27, 2022, with the same commit message as above.
roidayan pushed a commit to roidayan/ovs-tests that referenced this issue on Feb 13, 2023:
On the bf2 kernel, hw_packets only exists in the last action. Get the counter from the last action so that this test passes on all kernels. That kernel is missing commit d4d9d9c53bef ("sched: act_pedit: Implement stats_update callback"):

    filter protocol ip pref 1 flower chain 2 handle 0x2
      eth_type ipv4
      ip_proto tcp
      ip_flags nofrag
      ct_state +trk+est
      in_hw in_hw_count 1
        action order 1: pedit action pipe keys 1 index 3 ref 1 bind 1 installed 1564 sec used 400 sec firstused 1562 sec
          key #0 at ipv4+8: add ff000000 mask 00ffffff
        Action statistics:
        Sent 28707472 bytes 19811 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 2: pedit action pipe keys 4 index 4 ref 1 bind 1 installed 1564 sec used 400 sec firstused 1562 sec
          key #0 at eth+4: val 00000c42 mask ffff0000
          key #1 at eth+8: val a158ab99 mask 00000000
          key #2 at eth+0: val 0c42a158 mask 00000000
          key #3 at eth+4: val ab910000 mask 0000ffff
        Action statistics:
        Sent 28707472 bytes 19811 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 3: csum (iph, tcp) action pipe index 2 ref 1 bind 1 installed 1564 sec used 400 sec firstused 1562 sec
        Action statistics:
        Sent 28707472 bytes 19811 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 4: mirred (Egress Redirect to device enp8s0f1) stolen index 2 ref 1 bind 1 installed 1564 sec used 0 sec firstused 1562 sec
        Action statistics:
        Sent 6098268655490 bytes 4018559682 pkt (dropped 0, overlimits 0 requeues 0)
        Sent software 28707472 bytes 19811 pkt
        Sent hardware 6098239948018 bytes 4018539871 pkt
        backlog 0b 0p requeues 0

Issue: 3310161
Change-Id: If581e31be54a85681468e7a21f627378411d78c8
Signed-off-by: Chris Mi <[email protected]>
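A minimal sketch of reading the hardware packet counter from the last action only, assuming iproute2's JSON output (`tc -j`) and jq are available, the hw counters show up as `stats.hw_packets` (this may vary by iproute2 version), and a hypothetical device name; this is not necessarily how the actual test does it:

```bash
#!/bin/bash
# Hypothetical sketch: on some kernels (e.g. bf2) only the last action of a flower
# rule carries hw_packets, so sum the counter from the last action of each filter.
DEV=${DEV:-enp8s0f0_0}  # assumed representor/uplink device name

hw_packets=$(tc -j -s filter show dev "$DEV" ingress 2>/dev/null |
    jq '[.[] | select(.options.actions) | .options.actions[-1].stats.hw_packets // 0] | add')

echo "hw packets counted on last actions of $DEV: ${hw_packets:-0}"
```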
roidayan pushed a commit to roidayan/ovs-tests that referenced this issue on Nov 17, 2024:
Service restart removes ports from the bridge, and on SimX this takes a while. Service stop/start/restart is done via the ovs-ctl script, which after a short wait simply kills the ovs-vswitchd process it is trying to terminate, causing leaks on SimX. Below is a gdb stack trace showing SIGTERM arriving in the middle of OVS cleanup.

Instead of doing ovs-vsctl set port followed by a service restart so that it takes effect, add the required port options at configuration time and remove the service restart.

gdb stack trace:

    Thread 1 "ovs-vswitchd" received signal SIGTERM, Terminated.
    0x0000000001681686 in rte_spinlock_lock (sl=0x11802d18f8) at ../lib/eal/x86/include/rte_spinlock.h:28
    28          asm volatile (
    (gdb)
    #0  0x0000000001681686 in rte_spinlock_lock (sl=0x11802d18f8) at ../lib/eal/x86/include/rte_spinlock.h:28
    #1  0x00000000016a0acf in mlx5_hws_cnt_pool_destroy (sh=0x11802d0fc0, cpool=0x11bc7ffdc0) at ../drivers/net/mlx5/mlx5_hws_cnt.c:725
    #2  0x00000000015f4191 in __flow_hw_resource_release (dev=0x799ed80 <rte_eth_devices>, ctx_close=false) at ../drivers/net/mlx5/mlx5_flow_hw.c:12351
    #3  0x00000000015f5acb in flow_hw_resource_release (dev=0x799ed80 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_flow_hw.c:12843
    #4  0x00000000005d8182 in mlx5_dev_close (dev=0x799ed80 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5.c:2448
    #5  0x0000000001c3e14e in rte_eth_dev_close (port_id=0) at ../lib/ethdev/rte_ethdev.c:1605
    #6  0x0000000001f892ce in ovs_doca_dev_close (dev_data=0x11802d2bd0) at lib/ovs-doca.c:1463
    #7  0x0000000001f4487b in netdev_dpdk_destruct (netdev=0x11802d2880) at lib/netdev-dpdk.c:1670
    #8  0x0000000001e2992c in netdev_destroy (dev=0x11802d2880) at lib/netdev.c:605
    #9  0x0000000001e29a14 in netdev_unref (dev=0x11802d2880) at lib/netdev.c:632
    #10 0x0000000001e29a40 in netdev_close (netdev=0x11802d2880) at lib/netdev.c:643
    #11 0x0000000001d3c137 in ofport_destroy__ (port=0x9183cc0) at ofproto/ofproto.c:2664
    #12 0x0000000001d3c198 in ofport_destroy (port=0x9183cc0, del=false) at ofproto/ofproto.c:2673
    #13 0x0000000001d3a047 in ofproto_destroy (p=0x94d86e0, del=false) at ofproto/ofproto.c:1777
    #14 0x0000000001d2a57d in bridge_destroy (br=0x94d8060, del=false) at vswitchd/bridge.c:3623
    #15 0x0000000001d21a09 in bridge_exit (delete_datapath=false) at vswitchd/bridge.c:556
    #16 0x0000000001d2f2ef in main (argc=12, argv=0x7ffe4ce5bd68) at vswitchd/ovs-vswitchd.c:149

Issue: 3965553
Change-Id: Ia80346d9f1ca24d73b247dde9e505639b47dbb5e
Signed-off-by: Paul Blakey <[email protected]>
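A minimal sketch of the idea behind that change, assuming a DPDK port added to a bridge; the bridge name, port name, and dpdk-devargs value here are hypothetical, not taken from the actual test:

```bash
#!/bin/bash
# Hypothetical sketch: configure port options when the port is added, instead of
# setting them afterwards and restarting openvswitch to make them take effect.

BRIDGE=br-int          # assumed bridge name
PORT=pf0               # assumed port name
DEVARGS="0000:08:00.0" # assumed PCI address for dpdk-devargs

# Before: set the option after the port exists, then restart the service (slow on
# SimX, and ovs-ctl may end up killing ovs-vswitchd mid-cleanup):
#   ovs-vsctl set Interface "$PORT" options:dpdk-devargs="$DEVARGS"
#   systemctl restart openvswitch
#
# After: pass the options while adding the port, so no restart is needed.
ovs-vsctl --may-exist add-port "$BRIDGE" "$PORT" -- \
    set Interface "$PORT" type=dpdk options:dpdk-devargs="$DEVARGS"
```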
I've been running the test-ovs-vxlan.sh script to help me understand how Mellanox TC-based HW offloads work, and I've found something a bit odd. The iperf test in the script uses 3 threads, and when HW offloads are enabled the total throughput is usually at least double what it is when the offloads are disabled. However, while trying different numbers of threads, I found that a single thread produces higher throughput with HW offloads disabled. Once two or more threads are used, the throughput with HW offloads is higher than without them.
Do you know why a single thread would be faster with HW offloads disabled?
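For reference, a minimal sketch of the comparison described above: run the same iperf stream over the VXLAN setup with 1 vs 3 threads, toggling TC hardware offload. The interface name, server address, and service name are assumptions, not values from test-ovs-vxlan.sh:

```bash
#!/bin/bash
# Hypothetical sketch of the throughput comparison with hw-offload on vs off.
PF=${PF:-enp8s0f0}                # assumed uplink/PF device
SERVER_IP=${SERVER_IP:-7.7.7.1}   # assumed iperf server address on the remote side

for offload in on off; do
    ethtool -K "$PF" hw-tc-offload "$offload"
    ovs-vsctl set Open_vSwitch . other_config:hw-offload=$([ "$offload" = on ] && echo true || echo false)
    systemctl restart openvswitch  # hw-offload change needs an ovs-vswitchd restart; service name varies by distro

    for threads in 1 3; do
        echo "hw-offload=$offload threads=$threads"
        iperf3 -c "$SERVER_IP" -P "$threads" -t 10 | tail -n 3
    done
done
```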