
Huge pages support #2258

Closed
Remp69 opened this issue Feb 5, 2021 · 5 comments · Fixed by #3581 · May be fixed by CrunchyData/crunchy-containers#1520

Comments


Remp69 commented Feb 5, 2021

**What is the motivation or use case for the change?**
Improve performance for large databases.

**Describe the solution you'd like**
Support huge pages by adding a hugepages-2Mi resource limit declaration for the postgresql container.
https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/
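
Per the linked Kubernetes page, this amounts to a container-level resource declaration along the lines of the sketch below (the container name and sizes are illustrative, and Kubernetes requires the huge pages request to equal the limit):

{
	"name": "database",
	"resources": {
		"requests": {
			"memory": "1Gi",
			"hugepages-2Mi": "256Mi"
		},
		"limits": {
			"memory": "1Gi",
			"hugepages-2Mi": "256Mi"
		}
	}
}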

**Please tell us about your environment:**

  • Where is this running (Local, Cloud Provider): local
  • Container Image Tag: 4.5.1
  • PostgreSQL Version: 13.1
  • Platform (Docker, Kubernetes, OpenShift): Kubernetes
  • Platform Version: 1.18
Contributor

jkatz commented Feb 5, 2021

Thanks for the suggestion.

In the interim, you can modify the cluster-deployment.json entry in the pgo-config ConfigMap and add the mount points for the huge pages. After restarting the Operator Pod, any new cluster you create will include the huge pages mount.
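
For illustration only, the entries added to the container and pod spec in cluster-deployment.json would look roughly like the sketch below; the volume name, mount path, and exact placement within the template are assumptions:

{
	"volumeMounts": [{
		"name": "hugepage",
		"mountPath": "/hugepages"
	}],
	"volumes": [{
		"name": "hugepage",
		"emptyDir": {
			"medium": "HugePages"
		}
	}]
}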

@ns-rsuvorov

We seem to be running into the same issue, but we could not locate the cluster-deployment.json entry in the pgo-config ConfigMap. Has something changed since the above response?
We are using PGO 5.1.0.

Member

cbandy commented Jun 15, 2022

@ns-rsuvorov in v5 you can set any compute request or limit in the resources field of the PostgresCluster spec.
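
For example, a minimal sketch of requesting 2Mi huge pages through the instances resources field (the cluster and instance names and the sizes are placeholders, and other required fields such as the backup configuration are omitted for brevity):

{
	"apiVersion": "postgres-operator.crunchydata.com/v1beta1",
	"kind": "PostgresCluster",
	"metadata": {
		"name": "hippo"
	},
	"spec": {
		"postgresVersion": 14,
		"instances": [{
			"name": "instance1",
			"resources": {
				"requests": {
					"memory": "2Gi"
				},
				"limits": {
					"memory": "2Gi",
					"hugepages-2Mi": "512Mi"
				}
			},
			"dataVolumeClaimSpec": {
				"accessModes": ["ReadWriteOnce"],
				"resources": {
					"requests": {
						"storage": "1Gi"
					}
				}
			}
		}]
	}
}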


jdambly-ns commented Jun 15, 2022

@cbandy yes, while we can add that to the resources field, we still need to be able to set volume mounts, which is not allowed. Here is a patch we created to work around the issue:

{
	"spec": {
		"template": {
			"spec": {
				"containers": [{
					"name": "database",
					"resources": {
						"limits": {
							"hugepages-2Mi": "100Mi"
						}
					},
					"volumeMounts": [{
						"mountPath": "/hugepages-2Mi",
						"name": "hugepage-2mi"
					}]
				}],
				"volumes": [{
					"name": "hugepage-2mi",
					"emptyDir": {
						"meduim": "HugePages-2Mi"
					}
				}]
			}
		}
	}
}

What concerns me is that even if we set huge_pages: off in parameters, it's not making it into the Postgres ConfigMap. I can see it in the cluster ConfigMap for Patroni, but it doesn't seem to be passed to the postgres process that is running the database.

@benjaminjb
Contributor

Hi, thanks for the feedback!

We've been running some tests with huge_pages and have determined that you shouldn't have to mount additional volumes to use huge_pages. You should be able to request those through the postgrescluster spec directly in the resources field.

There are also some additional considerations when requesting huge_pages:

  1. for one thing, your underlying Kubernetes nodes have to have huge_pages enabled;
  2. for another thing, your PG instance needs to be trying to use huge_pages.

Point (2) shouldn't be an issue here because the PG default for huge_pages is try. (This actually causes some problems for users who are not requesting huge_pages through the postgrescluster but whose Kubernetes nodes have huge_pages enabled, because PG will try to use those huge_pages anyway. As long as you're requesting huge_pages, it shouldn't concern you, but it's an interesting issue that came up and that we're addressing in the two PRs already mentioned in this thread.)
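
For anyone who does want to pin the parameter explicitly (for example huge_pages: off, as mentioned above), here is a minimal sketch of setting it through the cluster spec, assuming Postgres parameters are managed under spec.patroni.dynamicConfiguration as with other settings in v5:

{
	"spec": {
		"patroni": {
			"dynamicConfiguration": {
				"postgresql": {
					"parameters": {
						"huge_pages": "off"
					}
				}
			}
		}
	}
}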

If you do continue to see a need to mount additional volumes, please reopen this issue and provide additional information about your specific Kubernetes environment and operator Deployment (e.g. Kubernetes version, PGO version, etc.).
