diff --git a/src/content/tutorials/security/policy-api/index.md b/src/content/tutorials/security/policy-api/index.md
index b33e19afb2..f4f5487b37 100644
--- a/src/content/tutorials/security/policy-api/index.md
+++ b/src/content/tutorials/security/policy-api/index.md
@@ -16,3 +16,95 @@ last_review_date: 2024-11-28
 owner:
   - https://github.com/orgs/giantswarm/teams/team-shield
 ---
+
+__Note__: This guide is intended for cluster administrators running Giant Swarm managed Kubernetes clusters. More general information about Pod Security Standards can be found on the [Security policy enforcement][sec-policy-enforcement] page.
+
+## Managing cluster security policies with the Giant Swarm Policy API
+
+The Policy API is an abstraction layer that orchestrates the underlying policy-related resources.
+
+It is intended to be used by cluster administrators and developer platforms to configure and automate management of various policy enforcement tools in Kubernetes clusters.
+
+### At a glance
+
+The Policy API:
+
+- is an interface for configuring the various types of (mostly security-related) policies that Giant Swarm manages.
+- provides a way for cluster administrators to declare their intent about which policies to enforce and which resources are exempt from those policies.
+- is intended to manage additional policy types in the future, including networking, vulnerability management, anomaly detection, and others.
+- is _not_ a general-purpose policy language. Users cannot define custom policies via the Policy API. It can only be used to configure the policies Giant Swarm actively manages.
+- does _not_ hide the underlying implementations. Users are free to directly use the underlying tools or APIs. The only difference is that _Giant Swarm will not manage, migrate, or adopt any policies or configuration you create using the tools' native resources_.
+- generates native resources for the underlying implementations. These resources continue to function even if the Policy API controllers are removed.
+
+### Working with Policy API
+
+Users interact primarily with two types of resources: Policies and PolicyExceptions.
+
+#### Policies
+
+Giant Swarm Policy resources enable cluster administrators to control how policies are applied to their clusters.
+
+Currently, only Pod Security Standards policies are supported. The policy names and descriptions are documented on our [security policy enforcement page][sec-policy-enforcement].
+
+This section will be updated as we implement new policy options.
+
+#### PolicyExceptions
+
+The PolicyException type allows cluster administrators (and other users to whom admins have delegated this capability) to identify resources which are exempt from one or more policies.
+
+For example, this `PolicyException` allows Deployments in the `sample` namespace whose names match `special*` to be admitted to the cluster even if they fail the two named security policies:
+
+```yaml
+apiVersion: policy.giantswarm.io/v1alpha1
+kind: PolicyException
+metadata:
+  name: my-workload-exceptions
+  namespace: my-namespace
+spec:
+  policies:
+    - disallow-host-path
+    - restrict-volume-types
+  targets:
+    - kind: Deployment
+      namespaces:
+        - sample
+      names:
+        - special*
+```
+
+Based on this exception, the Policy API controllers will generate additional resources and make configuration changes to any tools which enforce the listed policies.
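+
+For illustration, the generated Kyverno exception for the `disallow-host-path` policy above might look roughly like the following sketch. This is an assumption for illustration only -- the exact shape, names, and rule list of the generated resource are controller implementation details:
+
+```yaml
+apiVersion: kyverno.io/v2
+kind: PolicyException
+metadata:
+  name: my-workload-exceptions-generated   # hypothetical generated name
+  namespace: my-namespace
+spec:
+  exceptions:
+    # Exempt the workload from the Pod-level rule and its autogenerated
+    # controller-level equivalent.
+    - policyName: disallow-host-path
+      ruleNames:
+        - host-path
+        - autogen-host-path
+  match:
+    any:
+      - resources:
+          kinds:
+            - Deployment
+            - ReplicaSet
+            - Pod
+          namespaces:
+            - sample
+          names:
+            - special*
+```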
+
+### Motivation / Historical note
+
+The Giant Swarm platform is built upon a number of independent tools, projects, and APIs supported by the CNCF and the surrounding Kubernetes ecosystem.
+Among these are a number of capabilities designed for enforcing policies within a cluster. These include built-in types like Network Policies and RBAC as well as external CRDs like Kyverno Cluster Policies or Cilium Network Policies, among others.
+
+We use and manage tools we believe are the "right tool for the job" and add value for customers.
+Over time, however, the "right tool" may change.
+It can be difficult to keep up with so many rapidly evolving projects (think of all the alpha or beta version APIs currently in production!), and the simple reality is that many teams don't care what the tool is as long as their needs are met.
+By decoupling our customers' intent from the underlying tooling, we can automate much of the migration work needed to transition between policy implementations.
+
+When Pod Security Policies were removed, for example, many workloads had to be re-evaluated and new exceptions created for them, even though neither the workload nor the inherent risk had changed.
+Much effort was spent maintaining feature parity and avoiding security regressions as clusters upgraded to v1.25.
+Staying up to date is an important part of maintaining a system's security posture, but many organizations could not keep up with the PSP deprecation work, and simply stopped enforcing Pod-level security policies.
+
+The first implementation of the Giant Swarm Policy API was created to help our customers migrate automatically (to the extent possible) from PSP to a feature-equivalent implementation of Pod Security Standards without any policy enforcement coverage gaps during the migration.
+We expect that Kubernetes and third-party tooling will continue to evolve, and that we can help our customers move faster if they are not directly tied to tool-specific interfaces which they don't actually want to manage.
+
+We created the Policy API to allow Giant Swarm to more seamlessly and transparently move clusters between policy implementations, and to reduce the overall toil of dealing with common security configuration.
+
+### Managed versus unmanaged policies
+
+Giant Swarm provides a set of ready-made policies for many common cluster management use cases. The Policy API only orchestrates these standard policies, which Giant Swarm actively manages. It does not interfere with customer policies, exceptions, or configurations that are managed outside of the Policy API.
+
+For example, Giant Swarm enforces security policies in every cluster by default, and advises customers to use the Policy API to declare exceptions for any workloads that need them.
+We currently use Kyverno to enforce those policies, and the Policy API generates Kyverno PolicyExceptions based on the exceptions configured through the Policy API.
+
+A cluster administrator might choose to use the pre-installed, managed Kyverno instance to enforce their own additional policies, for instance to enforce some business-specific logic.
+They can easily do that by creating a new Kyverno `ClusterPolicy`, and creating Kyverno `PolicyExceptions` for any approved exceptions, as sketched below.
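+
+As a minimal sketch of what such a business-specific policy could look like, the following Kyverno `ClusterPolicy` requires a label on Deployments. The policy name, label, and message are purely illustrative and not something Giant Swarm ships:
+
+```yaml
+apiVersion: kyverno.io/v1
+kind: ClusterPolicy
+metadata:
+  name: require-cost-center-label   # illustrative only
+spec:
+  validationFailureAction: Enforce
+  rules:
+    - name: check-cost-center-label
+      match:
+        any:
+          - resources:
+              kinds:
+                - Deployment
+      validate:
+        message: "Deployments must carry a `cost-center` label."
+        # Require a non-empty value for the label.
+        pattern:
+          metadata:
+            labels:
+              cost-center: "?*"
+```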
+
+If, in the future, Giant Swarm were to choose to stop managing Kyverno as part of our standard platform, we would use our Policy API controllers to move our managed policies and any relevant exceptions into a new implementation that maintains the desired behavior.
+
+The custom ClusterPolicy, and any configured Kyverno PolicyExceptions, would need to be adapted by the cluster administrator, or the cluster administrator would then need to manage Kyverno themselves.
+
+[sec-policy-enforcement]: {{< relref "/tutorials/security/policy-enforcement" >}}
diff --git a/src/content/tutorials/security/policy-enforcement/_index.md b/src/content/tutorials/security/policy-enforcement/_index.md
index 1ce9d659f1..fc2dcf3d71 100644
--- a/src/content/tutorials/security/policy-enforcement/_index.md
+++ b/src/content/tutorials/security/policy-enforcement/_index.md
@@ -13,12 +13,49 @@ user_questions:
   - How can I exclude a workload from a Kyverno policy?
   - What security policies are enforced in my cluster?
   - What are Pod Security Standards (PSS)?
+  - How can I give my container permission to access a persistent volume?
+  - How can I run a container as a certain user?
+  - How can I run a container as privileged?
+  - Why is my container lacking permission to use a persistent volume?
 last_review_date: 2024-11-29
+mermaid: true
 owner:
   - https://github.com/orgs/giantswarm/teams/team-shield
 ---
 
-To enforce security best practices, several policies mapped to the [`Kubernetes Pod Security Standards`](https://kubernetes.io/docs/concepts/security/pod-security-standards/) (PSS) are pre-installed in Giant Swarm clusters.
+Giant Swarm uses Kyverno to enforce the official Kubernetes Pod Security Standards, and provides platform-level automation of common policy management practices.
+
+## Pod Security Standards
+
+The Kubernetes maintainers publish a set of policies called the `Pod Security Standards` (PSS), which describe acceptable pod configurations for different levels of risk. The policies apply to several pod and container specification-level fields which have security implications for the workload, and are grouped into three increasingly restrictive levels.
+
+The `baseline` policy level restricts the capabilities a pod can use to a small subset (~14 possibilities). The `restricted` level takes this policy further and requires each pod to explicitly drop all capabilities in its Pod spec, and allows adding back only a single capability (`NET_BIND_SERVICE`). The least restrictive level, `privileged`, is a no-op policy which doesn't perform any validation or impose any rules. Refer to the [official Pod Security Standards docs][k8s-pss] for more information.
+
+The `Pod Security Standards` are only a set of suggested policies; they are intended to be enforced by separate technical controls which validate pods against the `PSS` rules.
+
+### Pod Security Admission
+
+The Kubernetes API includes a feature called `Pod Security Admission` (PSA), which is a specific implementation of a technical control for the `Pod Security Standards`. It's an admission controller built into the API server, and can be configured using labels on cluster namespaces.
+
+Due to perceived limitations in the initial implementation of `PSA`, Giant Swarm uses an external admission controller to enforce these policies instead of the Kubernetes built-in PSA.
+
+To learn more about built-in `PSA`, please refer to the [upstream Kubernetes Pod Security Admission documentation][k8s-psa].
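+
+For reference, the built-in `PSA` is configured with namespace labels like the following minimal sketch (the namespace name is illustrative; the labels are the upstream-documented ones):
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: my-namespace
+  labels:
+    # Reject pods which violate the `restricted` policy level.
+    pod-security.kubernetes.io/enforce: restricted
+    # Additionally, warn clients about violations of the same level.
+    pod-security.kubernetes.io/warn: restricted
+```
+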
+You can read [a blog post outlining our decision not to use PSA][gs-psp-blog].
+
+### Pod Security Standards with Kyverno
+
+Instead of the `Pod Security Admission` controller, Giant Swarm clusters use Kyverno along with a set of Kyverno cluster policies which map to the `Pod Security Standards`.
+
+{{< mermaid >}}
+flowchart TD
+    A["Pod Security Standards (PSS)"] -->|Enforced by| B("Kyverno<br/>(outside API Server)")
+    A --> |Enforced by| C("Pod Security Admission (PSA)<br/>(inside API Server)")
+{{< /mermaid >}}
+
+By default, Giant Swarm clusters enforce the `restricted` level of the `Pod Security Standards`. This level aligns with our "secure by default" principle, and is intended to ensure our clusters are hardened according to community best practices.
+
+Cluster administrators have complete flexibility and control over their policy enforcement, and may choose to ease their security requirements or enforce additional policies to suit their risk tolerance and business needs.
+
+To enforce security best practices, several policies mapped to the [Kubernetes `Pod Security Standards`][k8s-pss] (PSS) are pre-installed in Giant Swarm clusters.
 
 These policies validate `Pod` and `Pod` controller (`Deployment`, `DaemonSet`, `StatefulSet`) resources and deny admission of the resource if it doesn't comply.
 
 Individual policies forbid deploying resources with various kinds of known risky configurations, and require some additional defensive options to be set in order to reduce the likelihood and/or impact of a workload becoming compromised.
@@ -26,18 +63,18 @@ Users who are unaware of those requirements may be surprised when their workload
 
 ### Kyverno
 
-Giant Swarm clusters currently use `Kyverno` to perform the actual enforcement of the policies we manage.
-Our [Policy API]({{< relref "/tutorials/security/policy-api" >}}), along with other platform internals, manage the `Kyverno` cluster policy resources as well as any necessary `Kyverno` policy exceptions.
+Giant Swarm clusters currently use Kyverno to perform the actual enforcement of the policies we manage.
+Our [Policy API]({{< relref "/tutorials/security/policy-api" >}}), along with other platform internals, manages the Kyverno cluster policy resources as well as any necessary Kyverno policy exceptions.
 
-`Kyverno` is an admission controller, which inspects incoming requests to the `Kubernetes` API server and checks them against configured policies. `Kyverno` policies can be configured in two modes: `audit` and `enforce`.
+Kyverno is an admission controller which inspects incoming requests to the Kubernetes API server and checks them against configured policies. Kyverno policies can be configured in two modes: `audit` and `enforce`.
 
-In `audit` mode, `Kyverno` won't reject admission of a resource even if it fails the policy. It will instead create a report and add an `Event` to the resource indicating that the resource has failed the policy. In `enforce` mode, `Kyverno` will block the creation of a resource if it fails a policy. No report or event will be created, because the resource will never exist in the cluster.
+In `audit` mode, Kyverno won't reject admission of a resource even if it fails the policy. It will instead create a report and add an `Event` to the resource indicating that the resource has failed the policy. In `enforce` mode, Kyverno will block the creation of a resource if it fails a policy. No report or event will be created, because the resource will never exist in the cluster.
 
-By default, `Kyverno` will periodically re-scan all existing resources in a cluster and generate reports about their compliance.
+By default, Kyverno will periodically re-scan all existing resources in a cluster and generate reports about their compliance.
 
 Resources which fail a policy will receive an `Event` similar to the example below, indicating which policy has failed. These events are useful for evaluating which resources are affected by a policy or potential policy change.
If a resource has these warning events for a given policy, it means that the resource would be rejected if that policy were to change to `enforce` mode.
 
-Much more extensive documentation about `Kyverno` configuration and policy behavior is available [in the official docs](https://kyverno.io/docs/).
+Much more extensive documentation about Kyverno configuration and policy behavior is available [in the official docs][kyverno-docs].
 
 ### Sample policy warnings
 
@@ -100,7 +137,7 @@ spec:
 
 ## Common pitfalls
 
-- The `PSS` policies described here apply to Pods as well as their controller types, like `Deployments`, `DaemonSets`, and `StatefulSets`. However, cluster administrators can deploy additional policies which apply to any arbitrary`Kubernetes` resource type, like `Services`, `ConfigMaps`, etc. For that reason, this guide often uses the term "resource" instead of `Pod` when referring to the object being targeted by a `Kyverno` policy.
+- The `PSS` policies described here apply to Pods as well as their controller types, like `Deployments`, `DaemonSets`, and `StatefulSets`. However, cluster administrators can deploy additional policies which apply to any arbitrary Kubernetes resource type, like `Services`, `ConfigMaps`, etc. For that reason, this guide often uses the term "resource" instead of `Pod` when referring to the object being targeted by a Kyverno policy.
 - Many policies target configuration set at the container level, so *all* containers in a `Pod` must satisfy each policy, including `init` and `ephemeral` containers.
 - Some policies contain multiple rules. Resources must be compliant with *all* of the rules in order to pass validation by that policy.
 - Many policies are satisfied if the fields they target are simply omitted or left unset. However, some `restricted` level policies require that the resource explicitly sets a particular value. It may be necessary to add new content to an existing resource in order to make it compliant.
@@ -1046,7 +1083,7 @@ __Note__: under most circumstances, only a cluster administrator will be able to
 
 To exclude a workload from a policy, create a `PolicyException` resource for that workload-policy combination.
 
-There are two ways to do this, depending on your cluster administrators' policy management preferences: via the Giant Swarm `Policy` API, or via native `Kyverno` resources.
+There are two ways to do this, depending on your cluster administrators' policy management preferences: via the Giant Swarm `Policy` API, or via native Kyverno resources.
 
 ### Configuring exceptions with Policy API
 
@@ -1080,13 +1117,13 @@ This example allows a `Deployment` (and the `ReplicaSet` and `Pods` it creates)
 
 __Note__: creating a many-to-many exception (multiple targets excluded from multiple policies) isn't currently permitted. Either `policies` or `targets` must contain exactly one entry.
 
-Various `Policy` API components watch these resources and make the corresponding changes in supported any lower-level policies. `PSS` policies are currently enforced using `Kyverno`, so when this Giant Swarm `PolicyException` is created, the Policy API controllers will ensure that a corresponding `Kyverno` policy exception is created or updated to exclude the workload from the named policies.
+Various `Policy` API components watch these resources and make the corresponding changes in any supported lower-level policies.
`PSS` policies are currently enforced using Kyverno, so when this Giant Swarm `PolicyException` is created, the Policy API controllers will ensure that a corresponding Kyverno policy exception is created or updated to exclude the workload from the named policies.
+
+For policies which are enforced or audited by multiple distinct tools, a Giant Swarm `PolicyException` can be used to declaratively configure all of the underlying implementations simultaneously.
 
 ### Configuring exceptions with Kyverno
 
-Cluster administrators may prefer to manage exceptions themselves. In this case, it's necessary to create the underlying `Kyverno` policy exception directly.
+Cluster administrators may prefer to manage exceptions themselves. In this case, it's necessary to create the underlying Kyverno policy exception directly.
 
 There are different ways to structure a `PolicyException`, and your cluster administrator may have a preferred format.
 
@@ -1125,7 +1162,12 @@ This example allows a `Deployment` (and the `ReplicaSet` and `Pods` it creates)
 
 Noteworthy pieces of this example:
 
-- `Kyverno` policy rules are usually written at the `Pod` level. For convenience, `Kyverno` automatically generates equivalent rules for `Pod` controllers like `Deployments` and `DaemonSets`. Such rules are prefaced with the value `autogen-` and added to the policy automatically (two such rules are visible in the example). When writing a `PolicyException`, any applicable `autogen` rules must also be listed if a workload should be exempt from them.
+- Kyverno policy rules are usually written at the `Pod` level. For convenience, Kyverno automatically generates equivalent rules for `Pod` controllers like `Deployments` and `DaemonSets`. Such rules are prefixed with `autogen-` and added to the policy automatically (two such rules are visible in the example). When writing a `PolicyException`, any applicable `autogen` rules must also be listed if a workload should be exempt from them.
 - Similarly, when listing resource kinds to be matched in a `PolicyException`, every sub-resource controller must be listed as well. For example: If a `Policy` is written at the `CronJob` level (and `autogen` policies are enabled for it), then the `Job` and `Pod` resources created from the `CronJob` also need to be explicitly matched in the `PolicyException`. The same happens with Deployments, where the `ReplicaSet` and `Pod` resources will need to be excluded as well.
 - A policy can contain multiple rules -- exceptions can be applied to individual rules so that the others remain in effect. Here, the workload is allowed to fail the `host-path` and `restricted-volumes` rules (and their automatically generated equivalents). A workload is only exempt from the rules listed in a `ruleNames` list. If a policy contains other rules not listed in the `PolicyException`, and the workload doesn't satisfy those rules, the workload will be rejected.
 - Cluster administrators can choose the namespace where `PolicyExceptions` are stored. The correct namespace for a `PolicyException` might be different from the namespace of the `Pod` itself.
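+
+Before creating an exception, it can help to confirm exactly which policies and rules a workload currently fails. One way to do this is to inspect Kyverno's policy reports in the workload's namespace (a sketch, assuming the default report generation hasn't been disabled; the `sample` namespace matches the examples above):
+
+```sh
+# List policy report summaries for the namespace.
+kubectl get policyreports --namespace sample
+
+# Show per-rule results, including which policy and rule failed.
+kubectl describe policyreports --namespace sample
+```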
+
+[gs-psp-blog]: https://www.giantswarm.io/blog/giant-swarms-farewell-to-psp
+[k8s-psa]: https://kubernetes.io/docs/concepts/security/pod-security-admission/
+[k8s-pss]: https://kubernetes.io/docs/concepts/security/pod-security-standards/
+[kyverno-docs]: https://kyverno.io/docs/
diff --git a/src/content/tutorials/security/rbac/_index.md b/src/content/tutorials/security/rbac/_index.md
index a21839cb78..71977fae69 100644
--- a/src/content/tutorials/security/rbac/_index.md
+++ b/src/content/tutorials/security/rbac/_index.md
@@ -1,7 +1,7 @@
 ---
 linkTitle: Cluster access control
 title: Cluster access control with RBAC and Pod Security Standards
-description: Introduction to using role-based access control (RBAC) and pod security standards (PSS) to secure your cluster and manage access control.
+description: Introduction to using role-based access control (RBAC) to secure access to cluster resources.
 weight: 10
 menu:
   principal:
@@ -9,19 +9,15 @@ menu:
     identifier: tutorials-security-rbac
 user_questions:
   - How can I add permissions to a service account?
-  - How can I give my container permission to access a persistent volume?
-  - How can I run a container as a certain user?
-  - How can I run a container as privileged?
  - How can I specify which permissions are associated with a key pair?
  - Why are my containers failing to access some resources?
-  - Why is my container lacking permission to use a persistent volume?
 owner:
   - https://github.com/orgs/giantswarm/teams/team-shield
 last_review_date: 2024-11-28
 mermaid: true
 ---
 
-Two of the most central mechanisms to secure your cluster in `Kubernetes` are `Role Based Access Control` (RBAC) and `Pod Security Standards` (PSS). Together, they allow you to create fine-grained roles and policies to manage access control for users and software running on your cluster. Both are enabled by default on Giant Swarm clusters.
+Role-Based Access Control (RBAC) is the primary authorization mechanism for managing access to cluster resources in Kubernetes. It is enabled and configured by default on Giant Swarm clusters, and we support common automation use cases through additional platform capabilities described in our [Platform Access Management section][platform-access-management].
 
 ## Role based access control
 
@@ -209,11 +205,11 @@ For a detailed explanation of how to refer to subjects in bindings you can read
 
 #### Default role bindings {#default-roles-bindings}
 
-Your `Kubernetes` cluster comes by default with a set of roles and cluster roles as well as some default bindings. These are automatically reconciled and thus can't be changed or deleted.
+Your cluster comes by default with a set of roles and cluster roles as well as some default bindings. These are automatically reconciled and thus can't be changed or deleted.
 
 You can use the `Role` and `ClusterRole` resources to create bindings for your users. The following example would grant all users in the group `mynamespace-admin` full permissions to resources in the `mynamespace` namespace.
-See how it references a `ClusterRole` named `admin`, which comes with `Kubernetes` by default.
+See how it references a `ClusterRole` named `admin`, which comes with Kubernetes by default.
 
 ```yaml
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: mynamespace-admin
   namespace: mynamespace
 subjects:
   - kind: Group
     name: mynamespace-admin
     apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: admin
   apiGroup: rbac.authorization.k8s.io
 ```
 
 ##### A super-user role binding
 
-One of the most important default role bindings is for the `cluster-admin` role, which depicts a super-user in the cluster. By default, it's bound to the `system:masters` group.
Thus, if you need cluster admin access to your `Kubernetes` cluster, you need to [generate user credentials]({{< relref "/getting-started/access-to-platform-api" >}}) that includes the group.
+One of the most important default role bindings is for the `cluster-admin` role, which represents a super-user in the cluster. By default, it's bound to the `system:masters` group. If cluster admin access is required, [user credentials can be generated]({{< relref "/getting-started/access-to-platform-api" >}}) that explicitly include that group.
 
 For a complete overview of default roles and bindings you can read the [official RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings).
 
-__Warning:__ Be careful assigning super-user as a default role. Giving `cluster-admin` role to every user means letting them perform any action in the cluster. As an analogy, it's like you giving root access to every user in a Linux system. Consequently think twice which role your users will have in the system. For `Kubernetes`, it translates in selecting a proper username and group name. Read [the authentication documentation]({{< relref "/overview/architecture/authentication/" >}}) to know more.
+__Warning:__ Follow the principle of least privilege and be careful assigning super-user as a default role. Giving the `cluster-admin` role to every user lets them perform any action in the cluster. Using `cluster-admin` by default in a Kubernetes cluster is analogous to giving root access to every user in a Linux system. Consider whether a binding to `cluster-admin` is truly necessary, or whether a more minimal role can be bound instead. Read [the platform access management documentation][platform-access-management] to know more.
 
 ### Verifying if you Have Access
 
@@ -274,41 +270,9 @@ $ kubectl auth can-i get pods --as-group=system:masters
 yes
 ```
 
-## Pod Security Standards
-
-The Kubernetes maintainers publish a set of policies called `Pod Security Standards` (PSS), which describe acceptable pod configurations for different levels of risk. The policies apply to several pod and container specification-level fields which have security implications for the workload, and are grouped into three increasingly restrictive levels.
-
-For example, the `baseline` policy level limits the capabilities a pod can use to a limited subset (~14 possibilities). The `restricted` level takes this policy further and requires each pod to explicitly drop all capabilities in its Pod spec, and allows adding back only a single capability (`NET_BIND_SERVICE`). The least restrictive level, `privileged`, is a no-op policy which doesn't perform any validation or impose any rules. Refer to the [official Pod Security Standards docs](https://kubernetes.io/docs/concepts/security/pod-security-standards/) for more information.
-
-The `Pod Security Standards` provide only a set of suggested policies intended to be enforced by other implementations of actual controls which validate pods against the `PSS` rules.
-
-### Pod Security Admission
-
-The `Kubernetes` API includes a built-in admission controller called `Pod Security Admission` (PSA), which a specific implementation of a technical control for the `Pod Security Standards`. It's an admission controller which is built into the API server, and can be configured using labels on cluster namespaces.
-
-Due to perceived limitations in the initial implementation of `PSA`, Giant Swarm uses an external admission controller to enforce these policies instead of the `Kubernetes` built-in PSA.
-
-To learn more about built-in `PSA`, please refer to the [upstream Kubernetes Pod Security Admission documentation](https://kubernetes.io/docs/concepts/security/pod-security-admission/). You can read [a blog post outlining our decision not to use PSA](https://www.giantswarm.io/blog/giant-swarms-farewell-to-psp).
-
-### Pod Security Standards with Kyverno
-
-Instead of the `Pod Security Admission` controller, Giant Swarm clusters use `Kyverno` along with a set of Kyverno cluster policies which map to the `Pod Security Standards`.
-
-{{< mermaid >}}
-flowchart TD
-    A["Pod Security Standards (PSS)"] -->|Enforced by| B("Kyverno<br/>(outside API Server)")
-    A --> |Enforced by| C("Pod Security Admission (PSA)<br/>(inside API Server)")
-{{< /mermaid >}}
-
-By default, Giant Swarm clusters enforce the `restricted` level `Pod Security Standards`. This level aligns with our "secure by default" principle, and is intended to ensure our clusters are hardened according to community best practices.
-
-Cluster administrators have complete flexibility and control over their policy enforcement, and may choose to ease their security requirements or enforce additional policies to suit their risk tolerance and business needs.
-
-[A detailed guide to working with `Kyverno` PSS policies and exceptions]({{< relref "/tutorials/security/policy-enforcement" >}}) is available as a standalone resource.
-
 ## User management
 
-Though our recommendation is to integrate your `Identity Provider` with the platform API, you can also manage users using certificates and `RBAC` bindings.
+Although our recommendation is to integrate your Identity Provider with the platform API, it is also possible to manage users using certificates and RBAC bindings.
 
 ### Using common name as username
 
@@ -320,17 +284,17 @@ Setting a `Common Name` prefix results in a username like the following:
 
 where `<username>` is a username of your choice and `<domain>` is your cluster's domain, for example `w6wn8.k8s.example.eu-central-1.aws.gigantic.io`.
 
-When binding roles to a user you need to use the full username mentioned above.
+When binding roles to a user, the full username must be used, in the format above.
 
 ### Using organizations
 
-Organizations you set when creating `key-pairs` get mapped to groups inside `Kubernetes`. You can then assign roles to a whole group of users. A user can be part of multiple groups and thus be assigned multiple roles, too.
+Organizations you set when creating `key-pairs` get mapped to groups inside Kubernetes. This allows role assignment to whole groups of users. A user can be part of multiple groups and thus be assigned multiple roles, the permissions of which are additive.
 
-There is only a single predefined user group inside Kubernetes. Members of the `system:masters` group will directly be assigned the default `cluster-admin` role inside your cluster, which is allowed to do anything. Our recommendation is to only use this kind of user, when bootstrapping security settings for the cluster.
+There is only a single predefined user group inside Kubernetes. Members of the `system:masters` group will directly be assigned the default `cluster-admin` role, which is allowed to do anything. Our recommendation is to only use this kind of user when bootstrapping security settings for the cluster.
 
 ### Default settings
 
-The `<username>` defaults to the email you sign in with at Giant Swarm. The default organization is empty. Thus, a user that's created without any additional `CN` prefix and/or Organizations won't have any rights on the cluster unless you bind a role to their specific username.
+The `<username>` defaults to the email you sign in with at Giant Swarm. The default organization is empty. Thus, a user that's created without any additional `CN` prefix and/or Organizations won't have any rights on the cluster unless a role is bound to their specific username in the cluster.
 
 ## Bootstrapping and managing access rights
 
@@ -349,13 +313,13 @@ kubectl gs login $PLATFORM_API \
 
 This will create a `kubeconfig` with a `cluster-admin` user that's valid for three hours (as you don't want to wield that power for too long).
 
-With this user you can now start creating roles and bindings for your users and apps. Let's go through some examples.
+With this user, you can now start creating roles and bindings for your users and apps. Let's go through some examples.
 
-Note that in these examples you are assuming you are creating users through platform API. If you have plugged in your `Identity Provider` you can skip the user creation step and directly bind roles to your users.
+Note that these examples assume you are creating users through the platform API. If you have connected an identity provider, you can skip the user creation step and directly bind roles to your users.
 
 ### Giving admin access to a specific namespace
 
-There is a default admin role defined inside `Kubernetes`. You can bind that role to a user or group of users for a specific namespace with a role binding similar to following.
+There is a default admin role defined inside Kubernetes. You can bind that role to a user or group of users for a specific namespace with a role binding similar to the following.
 
 ```yaml
 kind: RoleBinding
@@ -416,7 +380,7 @@ roleRef:
   apiGroup: rbac.authorization.k8s.io
 ```
 
-Above YAML gives view rights to the whole cluster to both the user with `CN` prefix `jane` and all users within the `cluster-view` group.
+The above YAML gives view rights over the whole cluster to both the user with `CN` prefix `jane` and all users within the `cluster-view` group.
 
 Let's assume you have already created both users from the example above. As Jane's username hasn't changed, she automatically gets the new rights using her existing credentials.
 
@@ -431,13 +395,13 @@ kubectl gs login $PLATFORM_API \
   --certificate-group "dev-admin, cluster-view"
 ```
 
-With the above Marc would now be part of both groups and thus be bound by both bindings.
+With the above, Marc would now be part of both groups and thus be bound by both bindings.
 
 ### Running applications with API access
 
-Applications running inside your cluster that need access to the `Kubernetes` API need the right permissions bound to them. For this the Pods need to use a `ServiceAccount`.
+Applications running inside your cluster that need access to the Kubernetes API need the right permissions bound to them. For this, Pods need to use a `ServiceAccount`.
 
-The typical process looks like following example:
+The typical process looks like the following:
 
 #### 1. Create a Service Account for your app {#app-api-create-sa}
 
 ```yaml
 metadata:
 ```
 
 #### 2. Add the Service Account to your app {#app-api-add-sa}
 
-This is done by adding a line referencing the `ServiceAccount` to the `Pod` spec of your `Deployment` or `Daemonset`. To be sure you have the right place in the YAML you can put it right above the line `containers:` at the same indentation level. The section should look similar to following:
+This is done by adding a line referencing the `ServiceAccount` to the `Pod` spec of your `Deployment` or `DaemonSet`. To be sure you have the right place in the YAML, you can put it right above the line `containers:` at the same indentation level. The section should look similar to the following:
 
 ```yaml
 [...]
 spec:
   template:
     metadata:
       name: fluentd
+      namespace: logging
       labels:
         component: fluentd
     spec:
@@ -507,4 +472,8 @@ You can revoke access from any user or group of users by either completely remov
 
 Note that bindings that come with the cluster by default like `system:masters` can't be removed as they're reconciled. Our team highly recommends using OIDC integration to manage users and groups.
Otherwise, you rely on short-lived user access (for example, certificates with a TTL of a day or less) as an optional security measure.
 
+__Warning:__ Certificates with bindings to built-in groups like `system:masters` with no expiration can only be revoked by rotating the root certificate authority for the entire cluster, which can be very disruptive to workloads and external resource access. For this reason, we strongly recommend using alternative groups and bindings even for administrative purposes.
+
 Learn more about [policies]({{< relref "/tutorials/security/policy-enforcement" >}}) and how to enforce them through the platform.
+
+[platform-access-management]: {{< relref "/tutorials/access-management" >}}