[Feature] improve default constraints #84

Open
DavyLandman opened this issue Dec 16, 2020 · 3 comments
Labels
enhancement New feature or request

Comments

@DavyLandman
Contributor

Is your feature request related to a problem? Please describe.
People tend to accept the defaults, so maybe we should improve them a bit. I've already had to help people constrain the memory of their containers quite a few times. (Giving containers that run a JVM all of the available memory is a good way to invoke the kernel's OOM killer.)

Describe the solution you'd like
Every container should have a suggested memory limit, but it should be relative to the available memory of the host. So maybe the wizard should ask whether you run with 4, 8, 16, or 32 GB of memory.

That way we can at least suggest better memory limits for the different containers.

Describe alternatives you've considered
Small example of better memory limits (most likely I'm missing some); a compose sketch follows after the list:

  • polystore-api should get 1 GB for small deployments and 4 GB for bigger ones
  • typhonql-server should get 1 GB for small and 8 GB for bigger (although maybe 16 GB would also be good for a 32 GB setup)
  • neo4j should be constrained to 4, 8, or 16 GB depending on the available memory
  • the analytics containers can also be constrained to 2 or 3 GB (as they are not dealing with large data sets)
  • In general, we only really have to constrain the Java processes: due to GC they are a bit more memory hungry and require tuning. MongoDB and MariaDB behave nicely, but Cassandra should be constrained.
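To make the suggestion concrete, here is a minimal docker-compose sketch of what such limits could look like. This is an illustration only: the service names are taken from the list above, the image names are placeholders, and it assumes a compose file format that honours `deploy.resources.limits` (swarm or a recent `docker compose`).

```yaml
# Sketch only: service names come from the list above, image names are
# placeholders, and the generated file may use different keys.
version: "3.7"
services:
  typhonql-server:
    image: typhonql-server-image   # placeholder image name
    deploy:
      resources:
        limits:
          memory: 1G               # small deployment; 8G for bigger ones
  neo4j:
    image: neo4j
    deploy:
      resources:
        limits:
          memory: 4G               # 4/8/16 GB depending on host memory
```

Note that classic docker-compose with a v2 file would use `mem_limit` instead of `deploy.resources.limits`.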
DavyLandman added the enhancement label on Dec 16, 2020
@zolotas4

Regarding the analytics containers (i.e., Flink JobManager and TaskManager), we cannot provide any suggestions, as these rely heavily on the analytics scenarios users will implement. For example, if they collect all the queries ever sent in a Java List, this can grow to several GB. If they do simple calculations and then discard the data, a few GB would be enough. So it is scenario specific. Also, if they run several scenarios in parallel, memory usage accumulates.

@DavyLandman
Contributor Author

DavyLandman commented Dec 16, 2020

This is about defaults. By default, if you do nothing, they get no constraints at all. So if a machine has 32 GB, the Java containers will all take quite a chunk out of that. I've had too many people running into the OOM killer due to Java processes claiming 4-8 GB of memory because they want to avoid GC.

The wizard still allows you to configure it, so if the use case requires it, you can increase the suggested limits.
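As a concrete illustration of the "Java processes claim memory to avoid GC" point, a container limit can be paired with an explicit JVM cap. This is only a sketch: it assumes the images run a JVM recent enough (8u191+ / 11+) to support `-XX:MaxRAMPercentage`, and uses the standard `JAVA_TOOL_OPTIONS` variable that the JVM reads on startup; the service name is just the one from the list above.

```yaml
# Hypothetical compose fragment: cap the JVM heap relative to the container
# limit so the process never grows towards the host's full 32 GB.
services:
  polystore-api:
    deploy:
      resources:
        limits:
          memory: 1G
    environment:
      # Read by the JVM at startup; 75% of the cgroup limit leaves headroom
      # for metaspace, thread stacks, and direct buffers.
      JAVA_TOOL_OPTIONS: "-XX:MaxRAMPercentage=75.0"
```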

@zolotas4

Regarding the Flink TaskManager and JobManager images, Flink already has defaults set. Our understanding is that those defaults cap the maximum memory the Flink containers will ever take (around 1.5-1.8 GB), so you don't need to set container limits. However, as Flink has changed its memory management recently, we will take a look again and verify our previous understanding.
For the auth-all container, as it does not do anything heavy, we believe 1 or 2 GB will be more than enough.
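For reference, a hedged sketch of how that Flink memory budget could be made explicit, assuming the official Flink images (1.10 or later), which read extra settings from the `FLINK_PROPERTIES` environment variable; `taskmanager.memory.process.size` defaults to 1728m in those versions, which matches the ~1.5-1.8 GB mentioned above.

```yaml
# Sketch, assuming the official flink image and its standard entrypoint.
services:
  flink-taskmanager:
    image: flink:1.12
    command: taskmanager
    deploy:
      resources:
        limits:
          memory: 2G               # slightly above Flink's own process budget
    environment:
      FLINK_PROPERTIES: |
        taskmanager.memory.process.size: 1728m
```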
