
Outdated information about memory limits #621

Open
mkuratczyk opened this issue Mar 14, 2023 · 11 comments
@mkuratczyk
Contributor

The Memory Limits section of https://hub.docker.com/_/rabbitmq is incorrect (perhaps it was true a long time ago). Container limits are taken into account, so vm_memory_high_watermark.relative will be calculated based on container memory, not host memory. I think the whole section can simply be removed.
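For readers landing here: the relative watermark is set in rabbitmq.conf as a fraction of whatever memory total the runtime detects. A minimal sketch (0.4 is RabbitMQ's default, shown only for illustration):

```ini
# The high watermark is a fraction of the memory RabbitMQ believes is
# available; when container limits are detected correctly, that total
# is the container limit, not the host's RAM.
vm_memory_high_watermark.relative = 0.4
```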

@tianon
Member

tianon commented Mar 15, 2023

Ah, good catch! I won't be able to get to it right away myself, but that's maintained at https://github.com/docker-library/docs/blob/master/rabbitmq/content.md if you (or someone else reading this) wanted to take a stab at a PR. 👍

@mkuratczyk
Contributor Author

Turns out this can still happen. A user created an issue today (using an up-to-date version of RabbitMQ and Erlang) and I noticed:

```
Memory high watermark setting: 0.4 of available memory, computed to: 108.0799 gb
```

I'm fairly sure this is the host's memory, not container memory limit. I wonder what could be causing this in some situations.

@michaelklishin
Collaborator

Erlang's imperfect detection of the limits using cgroups? cgroup version differences (IIRC some distributions have started rolling out v2 recently)?

@Zerpet

Zerpet commented Apr 5, 2023

In the RabbitMQ Cluster Operator, we set total_memory_available_override_value to 80% of the Pod's resource limit, i.e. the limit enforced by cgroups. We could document this setting in the Memory Limits section if Erlang doesn't play well with cgroups.
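A minimal sketch of what that override looks like in rabbitmq.conf; the byte value is an assumption for illustration (80% of a hypothetical 4 GiB Pod memory limit), not necessarily what the operator emits verbatim:

```ini
# Tell the runtime exactly how much memory to consider available,
# bypassing its own (cgroup-based) detection.
# 3435973836 bytes = 80% of a 4 GiB Pod memory limit.
total_memory_available_override_value = 3435973836
```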

@lukebakken lukebakken self-assigned this Apr 10, 2023
@luarx

luarx commented Apr 11, 2024

> I'm fairly sure this is the host's memory, not container memory limit. I wonder what could be causing this in some situations.

Is there any solution for this? 🙏
vm_memory_high_watermark.relative is not working when using RabbitMQ 3 with Kubernetes memory limits, and I think it used to work in the past (reference: https://www.rabbitmq.com/kubernetes/operator/using-operator#resource-reqs)

@michaelklishin
Collaborator

@luarx no, it never did work reliably: the runtime's ability to detect the amount of available memory is limited to certain OSes and, likely, only cgroups v1 for now.

The docs have been updated to provide Kubernetes-specific recommendations.

@michaelklishin
Collaborator

@tianon @yosifkit folks, can we integrate these recommendations from the RabbitMQ docs into this image, and then hopefully consider this issue resolved?

I would be happy to contribute if you tell me where to look in the source, and how to test a preview of my edits locally. Thank you!

@tianon
Member

tianon commented Apr 15, 2024

That's in https://github.com/docker-library/docs/blob/master/rabbitmq/content.md 👀 (as linked above 😅)

If you want to test the Docker Hub rendering, you can just create a repository and use the preview in the private "edit" page, but markdown is limited enough that you can probably guess how it'll render (Hub's rendering is pretty vanilla). 😇

@JefSeys

JefSeys commented Oct 16, 2024

Any chance there is an update on this issue? I'd like to set the free disk space limit, but this is not supported atm AFAIK.

@michaelklishin
Collaborator

@JefSeys this issue has nothing to do with free disk space limits or how this image approaches configuration.

You can set the free disk space alarm threshold using rabbitmq.conf, just like 99% of other settings that can be set there. Unlike environment variables, it offers validation.
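For example, a minimal rabbitmq.conf sketch (the threshold value is illustrative, not a recommendation):

```ini
# Trip the disk alarm when free space on the data partition drops
# below 3 GB; disk_free_limit.relative is the ratio-based alternative.
disk_free_limit.absolute = 3GB
```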

Any further comments or questions about free disk space limits will be deleted. This repository has discussions enabled, please start a new one instead of commenting on an open issue that is about a different limit in a specific context.

@LaurentGoderre
Member

I would argue that the docs as they are now are sufficient: they provide a brief overview and a link to an authoritative source for more details.
