Hi,

First off, many thanks for putting this container together. I want to quickly add some context on the RAM usage and the workaround mentioned in the README:
> **High memory usage by samba**
>
> a. Multiple people have reported high memory usage that's never freed by the Samba processes. Recommended workaround below:
> Add the `-m 512m` option to the `docker run` command, or `mem_limit:` in `docker-compose.yml` files, e.g.:
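For reference, a minimal compose file applying that limit might look like the following (the service and image names are placeholders, not the actual ones from this repo):

```yaml
services:
  samba:
    image: example/samba   # placeholder; use the image this README specifies
    mem_limit: 512m        # equivalent of `docker run -m 512m`
```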
This memory usage is most likely the file caching in RAM performed by the Linux kernel (see the end of my post). The kernel uses whatever physical memory is not actively needed to cache files from the file system, so subsequent reads of the same files can be served from RAM, which is faster and saves read/write cycles on the storage medium.
Basically, there is little downside to this: the kernel frees the cache the moment memory is needed for something more important, and it will never (knowingly) use swap for the cache or push other programs into swap because of it. Given the improved performance (it won't help write speeds, but can help read speeds) and the reduced disk wear, it is likely better to let the kernel do its thing and not artificially limit the container's RAM.
I suggest mentioning this in the README to help improve the container's performance (and to keep others like me from worrying about the RAM usage).
By the way, you can check the memory usage distribution with the `free --giga` command. This is the output from my server:
```
$ free --giga
       total  used  free  shared  buff/cache  available
Mem:      33     2     0       0          31         31
Swap:     38     0    38
```
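As a quick sanity check of that output: the "available" column is roughly "free" plus the reclaimable part of "buff/cache" (a simplification of how the kernel actually computes `MemAvailable`, but close enough here):

```shell
# Columns from the free --giga output above, in GB
free_gb=0       # "free"
cache_gb=31     # "buff/cache", almost all of it reclaimable
available_gb=$(( free_gb + cache_gb ))
echo "approx available: ${available_gb} GB"   # matches the "available" column (31)
```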
Here, "used" is the memory programs are actually using and "buff/cache" is the file system cache. Portainer reports 32 GB of RAM usage for the container, i.e. including the cached files, while `docker stats` shows the actually used memory properly:
```
CONTAINER ID   NAME    CPU %   MEM USAGE / LIMIT    MEM %   NET I/O   BLOCK I/O       PIDS
0856c407790e   samba   0.71%   32.54MiB / 31.1GiB   6.35%   0B / 0B   58MB / 82.8GB   12
```
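The difference between the two tools comes down to cache accounting: `docker stats` subtracts the page cache (in recent Docker versions, the inactive file cache) from the container's total cgroup memory usage, while Portainer reports the total including cache. A sketch with hypothetical figures, chosen to roughly match the numbers above:

```shell
# Hypothetical cgroup figures (bytes) illustrating the subtraction
total_usage=34327413555   # everything charged to the container's cgroup (~32 GB)
page_cache=34293289779    # cached file data, reclaimable at any time
real_usage=$(( total_usage - page_cache ))
echo "$(( real_usage / 1024 / 1024 )) MiB actually used"   # ~32 MiB, as docker stats shows
```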
Hence, as long as a user does not encounter OOM issues, the memory limit is not needed and is likely a disadvantage.