Hi,

I'm trying to test the capabilities of Zenoh at handling large point clouds, and I seem to be hitting a memory leak with both synthetic data and data livestreamed from one of our sensors. If the publish bandwidth exceeds roughly 16 MB/s, the memory used by the publishing node slowly creeps up until the node is killed by the OOM killer. This is on Ubuntu Noble with ROS Rolling built from source, and `rmw_zenoh` using the `yadu/events` branch rebased onto `rolling`. I have verified that memory usage is stable over the same time period while using `rmw_cyclonedds_cpp`.

This is the command I'm using to run the node: `./talker 100000`. You can bump that up to `./talker 1000000` to make the OOM killer kill the node sooner.
It seems like the RMW is keeping a copy of every message that gets published. Running `./talker 500` produces a cumulative bandwidth of about 80 KB/s × 3, i.e. 240 KB/s. Logging the program's total PSS with `while true; do cat /proc/$pid/smaps | grep -i pss | awk '{Total+=$2} END {print systime() " " Total/1024/1024" GB"}' >> $pid.mem_usage && sleep 1; done`, the memory accumulates at a slope of about 220 KB/s.
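For convenience, the shell loop above can be sketched as a small Python equivalent (Linux-only; assumes `/proc/<pid>/smaps` is readable; note it sums only the exact `Pss:` field, whereas `grep -i pss` may also pick up related `Pss_*` fields on newer kernels):

```python
import time

def pss_total_gb(pid="self"):
    """Sum the Pss: fields from /proc/<pid>/smaps and return GiB (Linux only)."""
    total_kb = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Pss:"):
                total_kb += int(line.split()[1])  # smaps reports values in kB
    return total_kb / 1024 / 1024

def log_pss(pid, samples, path):
    """Append 'timestamp pss_gb' once per second, like the shell loop."""
    with open(path, "a") as out:
        for _ in range(samples):
            print(int(time.time()), pss_total_gb(pid), file=out)
            time.sleep(1)
```

Fitting a line to the logged samples then gives the accumulation slope directly.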
Here is the code I'm using for the synthetic data: https://github.com/AlexDayCRL/ros2-message-benchmark/blob/master/src/talker.cpp
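As a sanity check on the numbers above, a quick back-of-the-envelope calculation (hedged: it assumes per-topic bandwidth scales linearly with the talker's size argument, and that the "× 3" factor reflects three concurrent publishers, neither of which is stated outright in the report):

```python
# Numbers reported above:
per_topic_kbps_at_500 = 80   # ./talker 500, one publisher
factor = 3                   # the "x 3" from the comment (presumably 3 publishers)
observed_leak_kbps = 220     # slope of PSS growth

cumulative_kbps = per_topic_kbps_at_500 * factor  # 240 KB/s
# The leak slope (~220 KB/s) is within ~10% of the cumulative publish
# bandwidth, consistent with the RMW retaining a copy of every message.
assert abs(cumulative_kbps - observed_leak_kbps) / cumulative_kbps < 0.15

# Scaling linearly from 500 to the 100000 used in the OOM reproduction
# (using 1 MB = 1000 KB loosely):
per_topic_at_100000 = per_topic_kbps_at_500 * (100000 / 500)  # 16000 KB/s
print(per_topic_at_100000 / 1000, "MB/s")  # ~16 MB/s, matching the reported threshold
```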