I want to run the CloudNativePG operator in a single namespace on an OpenShift cluster using the Helm chart. My user has the project admin role and the cloudnative-pg-admin role. The CRDs and the cluster roles are provided by the platform team.
In the values.yaml I deactivated the admission webhook configurations and the cluster-wide watch. RBAC is also deactivated to prevent the installation of the cluster roles (roughly as sketched below).
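For reference, the relevant part of my values.yaml looks roughly like this. This is a sketch; the exact key names depend on the chart version, so compare them against the chart's own values.yaml before copying:

```yaml
# Namespace-scoped deployment: CRDs and cluster roles come from the platform team.
crds:
  create: false        # CRDs are provided by the platform team
rbac:
  create: false        # do not install the cluster roles
config:
  clusterWide: false   # watch only the release namespace
webhook:
  mutating:
    create: false      # do not create the mutating webhook configuration
  validating:
    create: false      # do not create the validating webhook configuration
```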
The installation via Helm was successful! But afterwards the operator pod itself fails with a CrashLoopBackOff. A quick look at the logs shows that the operator still tries to access the admission webhook configurations:
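The install command was essentially the following (the release name "cnpg" is just an example; the namespace matches the logs below):

```sh
# Install the chart namespace-scoped with the values shown above.
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --install cnpg cnpg/cloudnative-pg \
  --namespace test-cloudnative-pg \
  --values values.yaml
```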
{"level":"info","ts":"2024-12-09T16:23:42.6716422Z","logger":"setup","msg":"Starting CloudNativePG Operator","version":"1.24.1","build":{"Version":"1.24.1","Commit":"3f96930d","Date":"2024-10-16"}}
{"level":"info","ts":"2024-12-09T16:23:42.671803415Z","logger":"setup","msg":"Listening for changes","watchNamespaces":["test-cloudnative-pg"]}
{"level":"info","ts":"2024-12-09T16:23:42.672635136Z","logger":"setup","msg":"Loading configuration from ConfigMap","namespace":"test-cloudnative-pg","name":"cnpg-controller-manager-config"}
{"level":"info","ts":"2024-12-09T16:23:42.686428846Z","logger":"setup","msg":"Operator configuration loaded","configuration":{"webhookCertDir":"","pluginSocketDir":"/plugins","watchNamespace":"test-cloudnative-pg","operatorNamespace":"test-cloudnative-pg","operatorPullSecretName":"cnpg-pull-secret","operatorImageName":"docker-dev.art.strive.bamf.in.bund.de/bamf/bdop/cloudnative-pg/cloudnative-pg:1.24.1","postgresImageName":"ghcr.io/cloudnative-pg/postgresql:17.0","inheritedAnnotations":null,"inheritedLabels":null,"monitoringQueriesConfigmap":"cnpg-default-monitoring","monitoringQueriesSecret":"","enableInstanceManagerInplaceUpdates":false,"enableAzurePVCUpdates":false,"certificateDuration":90,"expiringCheckThreshold":7,"createAnyService":false}}
{"level":"info","ts":"2024-12-09T16:23:42.691689619Z","logger":"setup","msg":"Kubernetes system metadata","haveSCC":true,"haveVolumeSnapshot":true,"availableArchitectures":[{"GoArch":"amd64"},{"GoArch":"arm64"}]}
{"level":"error","ts":"2024-12-09T16:23:42.770099046Z","logger":"setup","msg":"unable to setup PKI infrastructure","error":"mutatingwebhookconfigurations.admissionregistration.k8s.io \"cnpg-mutating-webhook-configuration\" is forbidden: User \"system:serviceaccount:test-cloudnative-pg:cloudnative-pg\" cannot get resource \"mutatingwebhookconfigurations\" in API group \"admissionregistration.k8s.io\" at the cluster scope","stacktrace":"github.com/cloudnative-pg/machinery/pkg/log.(*logger).Error\n\tpkg/mod/github.com/cloudnative-pg/[email protected]/pkg/log/log.go:125\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/controller.ensurePKI\n\tinternal/cmd/manager/controller/controller.go:395\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/controller.RunController\n\tinternal/cmd/manager/controller/controller.go:217\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/controller.NewCmd.func1\n\tinternal/cmd/manager/controller/cmd.go:42\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/[email protected]/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/[email protected]/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/[email protected]/command.go:1041\nmain.main\n\tcmd/manager/main.go:68\nruntime.main\n\t/opt/hostedtoolcache/go/1.23.2/x64/src/runtime/proc.go:272"}
Expected behaviour:
If I deactivate the admission webhook configurations, I'd expect the operator not to try to create or read any of them. The observed behaviour is obviously different.
values.yaml.txt