Hi all,
I ran some benchmark tests with our GNS3 VM servers (2.2.40.1). Performance was really good overall, but I found one bottleneck.
I used iperf3 to get a measurable throughput.
Baseline, Linux_A to Debian:
PASS: ping
PASS: iperf3 ~2Gbit/s
Cluster, working but slow:
Linux_A is connected directly via eth1 to GNS_Cloud_A
Linux_B is connected directly via eth1 to GNS_Cloud_B
PASS: ping from Linux_A to Linux_B
PASS: iperf3 ~50 Mbit/s
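For anyone who wants to reproduce the numbers, this is roughly how the iperf3 runs look; the IP address is a placeholder, not from the original setup:

```shell
# On the receiving host (e.g. Linux_B): start the iperf3 server
iperf3 -s

# On the sending host (e.g. Linux_A): 10-second TCP throughput test
# 192.0.2.2 is a placeholder for the server's address
iperf3 -c 192.0.2.2 -t 10

# Optional: -R reverses the direction (server sends, client receives),
# useful to see whether the bottleneck is asymmetric
iperf3 -c 192.0.2.2 -t 10 -R
```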
Cluster with bottleneck:
Linux_A is connected via br1 to GNS_Cloud_A
Linux_B is connected via br1 to GNS_Cloud_B
FAIL: ping
Error:
Before the ping, an ARP request is sent from Linux_A to Linux_B.
I can see the reply with tcpdump on interfaces A_br1 and A_eth1, but not on gns3tap0-0,
nor with Wireshark on the cable to Linux_A.
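The capture points described above can be checked with something like this (interface names taken from the post; tcpdump needs root):

```shell
# Watch for the ARP reply at each hop.
# -e prints link-level (MAC) headers, -n disables name resolution.

# A_br1 / A_eth1: the reply IS visible here
tcpdump -eni A_br1  arp
tcpdump -eni A_eth1 arp

# gns3tap0-0: the reply never shows up here (the observed drop point)
tcpdump -eni gns3tap0-0 arp
```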
Workaround:
I set static ARP entries on both sides.
PASS: ping
PASS: iperf3 about 1 Gbit/s
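For reference, static ARP entries like the ones in the workaround can be set with iproute2; the IP and MAC addresses below are placeholders, not from the original setup:

```shell
# On Linux_A: pin Linux_B's MAC so no ARP request is needed
ip neigh replace 192.0.2.2 lladdr 52:54:00:aa:bb:02 dev eth1 nud permanent

# On Linux_B: pin Linux_A's MAC
ip neigh replace 192.0.2.1 lladdr 52:54:00:aa:bb:01 dev eth1 nud permanent

# Verify: permanent entries survive until explicitly deleted
ip neigh show nud permanent
```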
Question:
Is there a way to configure the VM without that workaround?
I checked iptables, arptables, ebtables and various driver settings without finding any hint of a drop.
Unicast frames from Debian work fine, but every unicast frame from Linux_B to Linux_A
hits A_br1 and A_eth1 but never reaches gns3tap0-0 or the cable to Linux_A.
It is simply gone ;/
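A few generic places worth checking when a frame vanishes between a bridge and a tap interface; this is a diagnostic sketch, not a confirmed cause for this setup:

```shell
# Per-interface statistics: look for non-zero RX/TX drop counters
ip -s link show gns3tap0-0

# Bridge forwarding database: is the destination MAC learned on the
# expected port? (br1 as named in the post; run where the bridge lives)
bridge fdb show br br1

# Rule counters: a climbing packet count on a DROP rule would explain
# the loss even when the rule itself looks harmless
iptables -L -v -n
ebtables -L --Lc
```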
@Sardaukai Can you share a few more details?
I am not sure I understood the setup you have in the illustration.
I have tried to connect my GNS3 VM guests to the LAN using a Cloud node and found a way to make it work very fast.
It works as expected; please take a look at: https://www.gns3.com/slow-speeds-on-cloud-node-anyone-else
Picture for illustration