Check DataPlane latencies after Traffic Shaping at the host level #48
Setup:

There are several cases to consider:

**VM to VM traffic using private IPs (same private network).** Here are the results of one ping between two VMs hosted on different compute nodes, using their private IPs. The average overhead of 0.552 ms is probably due to the encapsulation/decapsulation of the packets: while traversing the VXLAN tunnels, packets are encapsulated and carry the IPs of the corresponding compute nodes, so the traffic shaping rules are applied as expected.

**VM to VM using private IPs (different private networks, no DVR)** (DVR = Distributed Virtual Routing). This is expected, since all the traffic goes through Neutron's centralized L3 agent on the network node.

**VM to VM using private IPs (different private networks, with DVR).** Traffic is encapsulated and goes directly from compute node 1 to compute node 2. It does not go through the network node (DVR is activated), so the traffic stays local.

**VM to VM traffic using floating IPs, no DVR.** Using the floating IPs introduces an extra hop (the network node). There was a 20 ms delay between each pair of hosts, so we got a 40 ms delay when using the floating IPs (DVR wasn't activated); see the hop-accounting sketch after this list. Once again, packets leaving a compute node are encapsulated and are directed to the network node first, and then to the second compute node.

**VM to VM traffic using floating IPs, DVR activated.** Here the traffic shaping rules aren't applied. This is due to the way DVR handles packets, and it needs more investigation. It is probably because the packets aren't encapsulated: the destination is directly the floating IP of the VM (and not the compute node's IP, as it would be if the packet were encapsulated).

**VM to external.** We got an extra latency due to the traffic between the compute node and the network node. This was expected.

**External to VM (DVR).** When DVR is activated, the traffic doesn't follow the traffic shaping rules. This is somewhat expected given the previous result with DVR, and it is probably because the traffic goes directly to the compute node hosting the VM.

**External to VM (no DVR).** Without DVR, the behaviour is consistent with the rules applied. This is explained by the fact that all the external traffic flows through the network node and is then encapsulated towards the compute node hosting the VM.
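To make the hop accounting above explicit, here is a minimal sketch (not part of the original measurements) that computes the expected ping latency per path, assuming each traversed host-to-host link contributes the 20 ms emulated delay and that VXLAN encapsulation adds the ~0.552 ms overhead observed in the first case:

```python
# Hop-accounting sketch (assumptions: 20 ms emulated delay per
# host-to-host link and ~0.552 ms VXLAN encap/decap overhead, both
# taken from the measurements reported above).

DELAY_MS = 20.0            # emulated delay per pair of hosts
VXLAN_OVERHEAD_MS = 0.552  # encap/decap cost observed in the first case

# path name -> number of host-to-host links traversed
PATHS = {
    "private IPs, same network (direct)":              1,
    "private IPs, diff. networks, no DVR (via net node)": 2,
    "private IPs, diff. networks, DVR (direct)":       1,
    "floating IPs, no DVR (via net node)":             2,
    # floating IPs with DVR: packets are not encapsulated, so the
    # host-level shaping rules do not match and no delay is observed.
}

for name, links in PATHS.items():
    expected = links * DELAY_MS + VXLAN_OVERHEAD_MS
    print(f"{name:55s} ~ {expected:.1f} ms")
```

With these numbers, the 40 ms observed for floating IPs without DVR is simply the two 20 ms links through the network node.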
@alebre I think the short answer is:

(1) Did I miss some other cases?
Important, please note that if you do not enable DVR, all L3 communications go through the network controller.
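A quick way to verify this on a given deployment is to traceroute from one VM to another and check whether the network node appears as an intermediate hop. A minimal sketch, assuming `traceroute` is installed in the source VM; the two IP addresses below are hypothetical placeholders:

```python
# Sketch only (assumptions: traceroute is installed in the source VM;
# NETWORK_NODE_IP and DEST_IP are hypothetical placeholders).
import subprocess

NETWORK_NODE_IP = "10.0.0.254"  # hypothetical address of the network node
DEST_IP = "10.0.1.12"           # hypothetical address of the target VM

out = subprocess.run(
    ["traceroute", "-n", DEST_IP],
    capture_output=True, text=True, check=True,
).stdout

# Skip the header line; the second column of each hop line is the hop IP.
hops = [line.split()[1] for line in out.splitlines()[1:] if line.split()]
if NETWORK_NODE_IP in hops:
    print("L3 traffic transits the network node (DVR likely disabled)")
else:
    print("L3 traffic bypasses the network node (DVR likely enabled)")
```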
Once TC network constraints have been defined at the NIC level (whatever the number of NICs), what latency can we expect at the DataPlane level (i.e., between the VMs executed on those hosts)?
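As a concrete baseline, here is a minimal sketch of the two sides of that question: applying a netem delay on a host NIC with `tc`, and measuring the resulting dataplane RTT with `ping` from inside a VM. The interface name and target IP are hypothetical placeholders, and the two steps run on different machines (the host and a VM, respectively):

```python
# Sketch only: apply_shaping() runs as root on a host, measure_rtt()
# runs inside a VM. IFACE and TARGET are hypothetical placeholders.
import re
import subprocess

IFACE = "eth0"        # host NIC carrying the (encapsulated) dataplane traffic
TARGET = "10.0.1.12"  # private IP of the remote VM

def apply_shaping(delay="20ms"):
    # Delay every packet leaving the NIC, including VXLAN-encapsulated
    # VM traffic, which is why the rule applies at the dataplane level.
    subprocess.run(
        ["tc", "qdisc", "add", "dev", IFACE, "root", "netem", "delay", delay],
        check=True,
    )

def measure_rtt(count=10):
    out = subprocess.run(
        ["ping", "-c", str(count), "-q", TARGET],
        capture_output=True, text=True, check=True,
    ).stdout
    # Summary line looks like: rtt min/avg/max/mdev = 20.1/20.5/21.0/0.2 ms
    m = re.search(r"= [\d.]+/([\d.]+)/", out)
    return float(m.group(1)) if m else None

if __name__ == "__main__":
    print(f"average RTT to {TARGET}: {measure_rtt()} ms")
```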