Quick on memory usage #462

Open
J0pp3rt opened this issue Aug 6, 2024 · 0 comments

J0pp3rt commented Aug 6, 2024

Hi,

First off, many thanks for putting this container together. I want to quickly add some context about the RAM usage and the workaround mentioned in the readme:

  • High memory usage by samba

a. Multiple people have reported high memory usage that's never freed by the samba processes. Recommended workaround below:
Add the -m 512m option to the docker run command, or mem_limit: in docker-compose.yml files, i.e.:

sudo docker run -it --name samba -m 512m -p 139:139 -p 445:445 \
            -v /path/to/directory:/mount \
            -d dperson/samba -p
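
The mem_limit: variant of that same workaround in a compose file would look roughly like the sketch below (the service name, ports, volume path and compose file version are illustrative assumptions, not taken from the project's docs):

# Illustrative docker-compose.yml applying the same 512 MB cap (compose file format 2.x,
# where mem_limit: is a service-level key); paths and names are placeholders.
version: "2"
services:
  samba:
    image: dperson/samba
    mem_limit: 512m                 # same 512 MB cap as the -m 512m flag above
    ports:
      - "139:139"
      - "445:445"
    volumes:
      - /path/to/directory:/mount
    command: -p                     # same -p flag as in the docker run example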

The memory usage in this case is most likely the file caching in RAM performed by the Linux kernel (see the end of my post). Here, the kernel will use whatever physical memory is not actively in use to cache files from the file system. This way, subsequent reads of the same files can be served from RAM, which is faster and saves read/write cycles on the storage medium.
There is basically no downside to this, as the kernel will free up the cache the moment memory is needed for something more important. Nor will the kernel ever (knowingly) use swap for caching or push other programs into swap because of the cache. Given the improved performance (it won't help with write speeds, but it can help with read speeds) and the reduced disk wear, it is likely better to let the kernel do its thing and not artificially limit its RAM.
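
For anyone who wants to see this caching in action, a quick sketch (the file path is a placeholder, and dropping caches is for benchmarking only, never required in normal operation):

# Read a large file twice; the second read comes out of the page cache and is much faster.
time cat /path/to/largefile > /dev/null     # first read: served from disk
time cat /path/to/largefile > /dev/null     # second read: served from RAM (buff/cache)

# Benchmarking only: ask the kernel to flush the page cache (needs root), then read again.
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
time cat /path/to/largefile > /dev/null     # back to disk speed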

I suggest mentioning this in the readme, both to help users get the best performance out of the container and to keep others like me from worrying about the RAM usage.

Btw, you can check how the memory usage is distributed using the free --giga command. This is the output from my server:

free --giga
               total        used        free      shared  buff/cache   available
Mem:              33           2           0           0          31          31
Swap:             38           0          38

Here, "used" is the actual memory programs are using and "cache" the file system cache. Meanwhile, Portainer reports 32 GB ram usage of the container, so including the cached files, while docker stats shows the actually used memory properly:

CONTAINER ID   NAME          CPU %     MEM USAGE / LIMIT    MEM %     NET I/O           BLOCK I/O         PIDS
0856c407790e   samba         0.71%     32.54MiB / 31.1GiB    6.35%     0B / 0B           58MB / 82.8GB     12
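
If you want to see where the difference between the two numbers comes from, you can look at the container's cgroup accounting, where page cache is reported separately from the memory the samba processes actually allocate. A sketch, assuming cgroup v2 with the systemd driver (paths differ on cgroup v1, where the file lives under /sys/fs/cgroup/memory/docker/<id>/memory.stat):

# Resolve the full container id and read its memory.stat from the cgroup v2 hierarchy.
CID=$(docker inspect --format '{{.Id}}' samba)
grep -E '^(anon|file|inactive_file) ' "/sys/fs/cgroup/system.slice/docker-${CID}.scope/memory.stat"
# "anon"                 = memory really allocated by the processes in the container
# "file"/"inactive_file" = page cache the kernel can reclaim at any time; roughly the part
#                          that docker stats subtracts from MEM USAGE and Portainer does not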

Hence, as long as a user does not run into OOM issues, the memory limit is not needed and is likely a disadvantage.
