To run this project locally, you'll need to install Finch, Go, kubectl, kind, and kubebuilder (if creating/modifying CRDs).

kind v0.24.0 (or later) is required in order to be compatible with Kubernetes v1.31 and Finch.
To install kind, you can use `go` or the package manager available in your OS. We'll be covering `brew` in this example, but you can find more installation options here.

```bash
go install sigs.k8s.io/kind@v0.24.0
# ensure $GOPATH/bin is in your $PATH
kind --version
# kind version 0.24.0
```
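If `kind` isn't found after `go install`, `$GOPATH/bin` is likely missing from your `$PATH`. A minimal fix for the current shell session (assuming a default Go setup):

```bash
# Put Go's bin directory on PATH so the kind binary is found
export PATH="$(go env GOPATH)/bin:$PATH"
```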
Alternatively, with `brew`:

```bash
brew install kind
kind --version
# kind version 0.24.0
```
Use `brew` to install Finch; other installation options can be found here.

```bash
brew install --cask finch
finch --version
# finch version v1.4.1
```
After the installation, the Finch VM must be initialized. Then confirm the VM is running.

```bash
finch vm init
# INFO[0000] Initializing and starting Finch virtual machine...
# INFO[0049] Finch virtual machine started successfully
finch vm status
# Running
```
If you already initialized a VM in the past, you may just need to start it.

```bash
finch vm start
# INFO[0000] Starting existing Finch virtual machine...
# INFO[0019] Finch virtual machine started successfully
finch vm status
# Running
```
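Optionally, you can run a throwaway container as a smoke test that the VM can actually run workloads (the image below is just an example):

```bash
# Run a tiny container end to end, removing it on exit
finch run --rm public.ecr.aws/docker/library/hello-world:latest
```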
- Clone this repository to your local environment or IDE.

  ```bash
  git clone https://github.com/awslabs/cedar-access-control-for-k8s.git
  cd cedar-access-control-for-k8s
  ```
- For an optional local build of the binaries, you can run:

  ```bash
  make build
  ```

  If you encounter an error related to `goproxy` like the one below, try exporting the following environment variable:

  ```bash
  # go: sigs.k8s.io/controller-tools/cmd/controller-gen@v0.14.0: sigs.k8s.io/controller-tools/cmd/controller-gen@v0.14.0: Get "https://proxy.golang.org/sigs.k8s.io/controller-tools/cmd/controller-gen/@v/v0.14.0.info": dial tcp: lookup proxy.golang.org: i/o timeout
  export GOPROXY=direct
  ```
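  A quick, optional check that the override is in effect:

  ```bash
  # Print the effective Go proxy setting; should output "direct"
  go env GOPROXY
  ```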
- Start the Kind cluster. This will build the webhook image, build the Kind image, and create the Kind cluster. This cluster is configured to authorize and validate requests via the Cedar webhook:

  ```bash
  make kind
  ```
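  Once the target completes, you can sanity-check that the cluster is up (the cluster name is whatever `kind get clusters` reports):

  ```bash
  # List kind clusters, then confirm the API server answers
  kind get clusters
  kubectl get nodes
  ```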
- (Optional) Create additional policies. There's an example in `demo/authorization-policy.yaml` that is auto-created, but feel free to modify it or create more:

  ```bash
  # edit demo/authorization-policy.yaml
  kubectl apply -f demo/authorization-policy.yaml
  ```
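  To see which Cedar policies the cluster currently has, you can list them. The resource name below is an assumption; check `kubectl api-resources` for the Cedar group's actual names:

  ```bash
  # Discover Cedar's registered resources, then list policies
  kubectl api-resources | grep -i cedar
  kubectl get policies
  ```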
- Generate a `kubeconfig` for a test user. The user has the name `test-user` with the group `test-group`.

  ```bash
  make test-user-kubeconfig
  # Lookup the username of the test user
  KUBECONFIG=./mount/test-user-kubeconfig.yaml kubectl auth whoami
  # ATTRIBUTE   VALUE
  # Username    test-user
  # Groups      [viewers test-group system:authenticated]
  ```
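  If you'll be running several commands as the test user, it can be convenient to export the kubeconfig for the whole shell session instead of prefixing every command:

  ```bash
  # Make every kubectl call in this shell act as test-user
  export KUBECONFIG=./mount/test-user-kubeconfig.yaml
  kubectl auth whoami
  # Switch back to the admin identity when done
  unset KUBECONFIG
  ```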
- Now you can make requests! You'll need to use the generated `kubeconfig` in `./mount/test-user-kubeconfig.yaml` created in the previous step. Your default `kubeconfig` (`~/.kube/config`) will be auto-configured by kind with a cluster administrator identity, so `kubectl` without specifying a `kubeconfig` should always just work.

  Let's test both `kubeconfig` files to validate that our setup is working. Try getting resources like Pods and Nodes.

  ```bash
  KUBECONFIG=./mount/test-user-kubeconfig.yaml kubectl get pods --all-namespaces
  # allowed
  KUBECONFIG=./mount/test-user-kubeconfig.yaml kubectl get nodes
  # denied
  ```

  As `cluster-admin`, list Secrets and Nodes.

  ```bash
  kubectl get nodes
  kubectl get secrets --show-labels
  ```

  Try listing Secrets with the `test-user`.

  ```bash
  KUBECONFIG=./mount/test-user-kubeconfig.yaml kubectl get secrets
  # denied
  ```
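  You can also ask the API server for the authorization decision directly, without fetching any objects; `kubectl auth can-i` goes through the same authorization chain and should mirror the results above:

  ```bash
  # Should report "no", matching the denied request above
  KUBECONFIG=./mount/test-user-kubeconfig.yaml kubectl auth can-i list secrets
  ```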
- Try out the scenarios in the Demo for different authorization and admission control policies.
To tear down the Kind cluster:

```bash
make clean-kind
```
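Optionally, confirm the cluster was deleted:

```bash
# The demo cluster should no longer appear in this list
kind get clusters
```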
And to clean up the Finch VM:

```bash
finch vm stop
# INFO[0000] Stopping existing Finch virtual machine...
# INFO[0005] Finch virtual machine stopped successfully
finch vm remove
# INFO[0000] Removing existing Finch virtual machine...
# INFO[0000] Finch virtual machine removed successfully
```
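A final status check should show the VM is gone:

```bash
# Status should no longer report "Running" once the VM is removed
finch vm status
```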