
Memory leak in vmxnet when using XenApp receiver #32

Open
tchmelar opened this issue Aug 11, 2015 · 1 comment

Comments

@tchmelar

Hi,

I'm experiencing a memory leak in the vmxnet driver. When I use the Citrix XenApp receiver to access a remote desktop, vmxnet floods dmesg with messages like:

eth0: Runt pkt (55 bytes) entry 31!

This happens on a Debian (wheezy) box that routes traffic between two Ethernet interfaces: the server on eth0, the client on eth1. The guest runs on ESXi 5.1.0.

I saw a similar problem on a forum, and when I blindly changed vmxnet.c:2713 to

if (skb->len < (ETH_MIN_FRAME_LEN - 5)) {

the memory consumption was fixed. But I'm not sure what this change could break in other parts of the code.

Used software:
open-vm-tools-9.4.0-1280544

@iamasmith

This could well be related to TCP Segmentation Offload (TSO) in the vmxnet driver and the giant frames used on the internal VMware switch. I had a similar problem with a simple test environment: two hosts on ESX virtual switches joined by a third host on the same box acting as a gateway. All systems reported a consistent MTU of 1500 bytes, but the gateway complained that the frames it received were 64K. I had to disable TSO on the client and server to overcome this.

If it's a similar issue, you will find that small connectionless traffic such as a small ICMP ping gets through, and the problem only starts to loom with TCP, since that's where TSO gets used if the NIC is set to support it. Normally this wouldn't be an issue because a) all hosts on the same hypervisor would be using TSO and could route large frames between themselves, and b) when traffic goes off-box or connections come in, the hypervisor can fragment/rebuild the large frames to accommodate the change. The difficulty comes when a host on the hypervisor believes frames should be a certain size according to the MTU, but TSO puts entirely different sized giant frames on the virtual switch.
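To test this theory, TSO can be inspected and toggled per interface with ethtool. The interface names below are the ones from this issue; adjust to your setup:

```shell
# Show current offload settings; look for "tcp-segmentation-offload"
ethtool -k eth0

# Disable TSO on both interfaces of the routing guest
ethtool -K eth0 tso off
ethtool -K eth1 tso off
```

If the "Runt pkt" messages (and the memory growth) stop once TSO is off, that would point at the offload/giant-frame interaction rather than the runt threshold itself.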
