Kubernetes RBAC doesn't permit requests on subresources unless a resource's subresource is explicitly named, or is matched by a wildcard (*).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: get-services
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - "get"
  - "list"
The above policy does not allow port forwarding (e.g., kubectl port-forward -n kube-system svc/kube-dns), because no port-forward subresource was specified, even though the apiGroup, verb, and resource matched the request.
RBAC would require the following policy to permit a port-forward request.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: service-port-forwarder
rules:
- apiGroups:
  - ""
  resources:
  - services/portforward
  verbs:
  - "get"
Because subresources are modeled as an entity attribute in Cedar, the following policy does permit port-forward requests to services, because the verb, apiGroup and resource type match.
permit (
    principal in k8s::Group::"read-only-group",
    action in [k8s::Action::"get", k8s::Action::"list"],
    resource is k8s::Resource
) when {
    resource.apiGroup == "" && resource.resource == "services"
};
In order to allow service listing while preventing subresources like port-forward, a condition excluding subresources is required:
permit (
    principal in k8s::Group::"read-only-group",
    action in [k8s::Action::"get", k8s::Action::"list"],
    resource is k8s::Resource
) when {
    resource.apiGroup == "" && resource.resource == "services"
} unless {
    resource has subresource
};
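Conversely, a Cedar analog of the RBAC service-port-forwarder role above might look like the following sketch. The group name "port-forwarders" is hypothetical, and the subresource value "portforward" is an assumption inferred from RBAC's services/portforward naming:

```cedar
// Hypothetical sketch: permit port forwarding to services by matching the
// subresource attribute explicitly. The group name and the "portforward"
// attribute value are assumptions for illustration.
permit (
    principal in k8s::Group::"port-forwarders",
    action == k8s::Action::"get",
    resource is k8s::Resource
) when {
    resource.apiGroup == "" &&
    resource.resource == "services" &&
    resource has subresource &&
    resource.subresource == "portforward"
};
```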
Cedar's Rust implementation and CLI gained support for entity tags (key/value maps) in Cedar v4.2.0. Until cedar-go supports entity tags, we've manually added KeyValue and KeyValueStringSlice types to the meta::v1 namespace to support key/value labels.
Any Kubernetes types in admission that consist of map[string]string{} or map[string][]string{} are converted to a Set of KeyValue or KeyValueStringSlice values, respectively.
namespace meta::v1 {
    type KeyValue = {
        "key": __cedar::String,
        "value"?: __cedar::String
    };
    type KeyValueStringSlice = {
        "key": __cedar::String,
        "value"?: Set<__cedar::String>
    };
    entity ObjectMeta = {
        "annotations"?: Set<meta::v1::KeyValue>,
        "labels"?: Set<meta::v1::KeyValue>,
        "name"?: __cedar::String,
        "namespace"?: __cedar::String,
        // ...
    };
    // ...
}
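As a sketch of how these converted label sets can be matched in a policy, the following forbids deleting objects carrying a protected label. The label key/value and the assumption that admission resources expose ObjectMeta via a metadata attribute are illustrative, not taken from the project's schema:

```cedar
// Illustrative sketch: forbid requests on any object whose labels include
// protected=true. The label name and the `metadata` attribute shape are
// assumptions for illustration.
forbid (
    principal,
    action,
    resource
) when {
    resource has metadata &&
    resource.metadata has labels &&
    resource.metadata.labels.contains({"key": "protected", "value": "true"})
};
```

Because labels are a Set of KeyValue records rather than a map, membership checks use Cedar's contains() with a full record literal rather than a key lookup.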
Similarly, the k8s:: authorization namespace includes a custom Extra type to support key/value maps on Users, ServiceAccounts, and Nodes.
namespace k8s {
    type Extra = {
        "key": __cedar::String,
        "values"?: Set<__cedar::String>
    };
    entity User in [Group] = {
        "extra"?: Set<Extra>,
        "name": __cedar::String
    };
    // ...
}
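A policy can match on these authenticator-supplied values the same way. In this sketch, the extra key "team" and its value are hypothetical examples, not values the project defines:

```cedar
// Sketch: permit read access based on an authenticator-supplied extra value.
// The "team"/"storage" key and value are assumptions for illustration.
permit (
    principal is k8s::User,
    action in [k8s::Action::"get", k8s::Action::"list"],
    resource is k8s::Resource
) when {
    principal has extra &&
    principal.extra.contains({"key": "team", "values": ["storage"]})
};
```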
A core tenet of Cedar is to be analyzable, meaning that the language can verify that a policy is valid and will not error.
A general map/filter function over dynamic inputs, as well as ordered lists, is not analyzable and is not a candidate for Cedar. This prevents checking specific subfields across sets of structures, which is a common Kubernetes policy management requirement.
Cedar is powered by automated reasoning, including an SMT solver, which does not implement loops or map functions.
Rather than viewing Cedar as a replacement for admission restriction tools like Open Policy Agent/Gatekeeper or Kyverno, it is best seen as an additional tool for access control enforcement.
Suppose you had a Cedar policy that permitted a principal to take some action in authorization, with restrictions enforced as forbid policies in admission. Now suppose you decide, for whatever reason, to revoke that whole policy and remove it from your cluster. If the identity granted by that policy was making a request at the same moment, a race is possible: the authorization webhook could permit the request before the revocation is processed, while the removal of the forbid policy propagates before the Kubernetes API invokes the admission webhook. That would let through a request that should have been forbidden in admission.
For now this is a limitation of Kubernetes that requires upstream work to resolve.
For now, the policy store is a flat list of all policies defined by CRDs in a cluster. Any forbid policy takes precedence over a permit. Right now, it's possible for a user to write a policy that forbids all requests in a cluster:
forbid (
    principal,
    action,
    resource
);
To guard against this, we'll likely add support for multiple tiers of policy stores, potentially from policy stores outside the cluster.
In each tier, if no explicit permit or forbid applies to a request, the authorizer would progress to the next tier.
Kubernetes enables users to list their permissions, and that functionality is closely tied to RBAC's policy rule implementation. Kubernetes can interrogate built-in authorizers for all permission rules that apply to a user:
kubectl auth can-i --list
# Warning: the list may be incomplete: webhook authorizer does not support user rule resolution
There is currently no upstream plan to expand this to webhook authorizers. This doesn't impact this project, as Cedar cannot enumerate all the permissions its policies grant.
Kubernetes has built-in protections against privilege escalation in RBAC when creating new policies (the escalate verb) and when creating or updating bindings (the bind verb).
It can perform these checks because RBAC includes permission enumeration.
Full policy enumeration is impossible with Cedar, as it supports features like string wildcards.
Cedar could potentially add some basic level of support for privilege escalation, but the topic requires further exploration.