Ambassador v3 Memory Usage
We are running Ambassador with Linkerd mesh in multiple environments. In two of the environments there has been a progressive increase in memory usage by the emissary-ingress pods (5 replicas), and the memory usage is roughly the same across all of the pods. The increase in memory usage does not correlate with traffic patterns, which is what we expected it to do.
Multiple pods have experienced OOM errors without a spike in traffic or change in mappings.
One environment has 200 mappings and the other has 110 mappings
No performance issues or errors
At a 512MiB limit, memory usage averaged 90-99% and OOM errors were experienced
At a 768MiB limit, memory usage averaged 90-99% and no OOM errors were experienced
At a 1024MiB limit, memory usage averaged 70-80% and no OOM errors were experienced (a quick conversion of these percentages into absolute sizes follows below)
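For context, converting those usage percentages into absolute resident sizes suggests a fairly stable working set that sits above the 512MiB limit. This is a back-of-the-envelope sketch only; the midpoint values are assumptions taken from the averages reported above:

```go
package main

import "fmt"

func main() {
	// Reported memory limits and the midpoint of each reported usage range
	// (the midpoints are an assumption for illustration).
	cases := []struct {
		limitMiB float64
		usage    float64
		oom      bool
	}{
		{512, 0.945, true},  // 90-99%, OOM errors experienced
		{768, 0.945, false}, // 90-99%, no OOM errors
		{1024, 0.75, false}, // 70-80%, no OOM errors
	}
	for _, c := range cases {
		fmt.Printf("limit %4.0fMiB -> ~%3.0fMiB resident (OOM: %v)\n",
			c.limitMiB, c.limitMiB*c.usage, c.oom)
	}
	// The 768MiB and 1024MiB rows both point at a steady-state working set of
	// roughly 700-800MiB, which is above the 512MiB limit where OOMs occurred.
}
```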
The mappings are restricted by host; there are no regex patterns on hosts, but many regex patterns are used on headers and paths
On analysis (heap profile) of the ambassador process, we observed the following (a small reproduction sketch follows these observations):
Functions such as json.(*encodeState), json.structEncoder, and json.sliceEncoder have significant allocations, with json.(*encodeState) utilizing over 47% of the total memory allocation in some cases.
The bytes.(*Buffer).Grow function also consumes a large portion of memory, with allocations up to 42.8%, indicating substantial buffer growth during data processing.
There is heavy use of JSON encoding, with multiple encoder types (mapEncoder, ptrEncoder, stringEncoder) prominent. This suggests that JSON serialization contributes heavily to memory usage.
Functions such as reflect.copyVal and reflect.mapassign_faststr0 also have non-negligible memory footprints, likely due to reflection operations involved in data handling.
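To make that profile easier to reason about, here is a minimal, self-contained sketch that repeatedly marshals a large, map-heavy structure (a stand-in for an Emissary config snapshot; the shape, field names, and counts are assumptions) and writes a heap profile. Viewed with `go tool pprof -sample_index=alloc_space heap.out`, it should surface the same json.(*encodeState), bytes.(*Buffer), and reflect.* call sites noted above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

// mapping is a simplified, hypothetical stand-in for an Emissary Mapping.
type mapping struct {
	Name    string            `json:"name"`
	Host    string            `json:"host"`
	Prefix  string            `json:"prefix"`
	Headers map[string]string `json:"headers"` // regex header matchers
}

// snapshot builds a config-like structure with n mappings.
func snapshot(n int) map[string]mapping {
	s := make(map[string]mapping, n)
	for i := 0; i < n; i++ {
		s[fmt.Sprintf("mapping-%d", i)] = mapping{
			Name:   fmt.Sprintf("mapping-%d", i),
			Host:   "api.example.com",
			Prefix: fmt.Sprintf("/service-%d/(v[0-9]+)/.*", i),
			Headers: map[string]string{
				"x-tenant": "^(alpha|beta|gamma)-[a-z0-9]+$",
			},
		}
	}
	return s
}

func main() {
	snap := snapshot(200) // ~200 mappings, matching the larger environment

	// Re-serialize the snapshot repeatedly, as a reconfiguration loop would.
	for i := 0; i < 1000; i++ {
		if _, err := json.Marshal(snap); err != nil {
			log.Fatal(err)
		}
	}

	// Dump a heap profile for inspection with `go tool pprof heap.out`.
	f, err := os.Create("heap.out")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	runtime.GC() // settle recent allocations before sampling
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal(err)
	}
}
```

This does not prove a leak by itself, since allocation-heavy JSON encoding during reconfiguration is expected; comparing the alloc_space and inuse_space views helps distinguish encoding churn from memory that is actually retained.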
Expected behavior
I'm not sure what the normal behaviour is under the above conditions; is this expected behaviour for Ambassador?
We would expect memory usage to increase or fluctuate with traffic and configuration changes, rather than grow steadily.
Versions
Ambassador: [3.9.1]
Kubernetes environment: [EKS]
Version: [1.29]