Table of Contents
- Introduction
- Pre-requisites
- Federation deployment
- Example application
- Clean up
- What’s next?
- Known Issues
This demo is a simple deployment of Kubernetes Federation v2 on two OpenShift clusters. A sample application is deployed to both clusters through the federation controller.
Federation requires an OpenShift 3.11 cluster and works on both OKD and OpenShift Container Platform (OCP).
This walkthrough will use 2 all-in-one OKD clusters deployed using minishift.
Follow the getting started guide for minishift in the OKD documentation to get minishift installed.
Note: the steps below will create a few entries in the `kubectl`/`oc` client configuration file (`~/.kube/config`). If you have an existing client configuration file that you want to preserve unmodified, it is advisable to make a backup copy before starting.
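For example, a quick backup could look like this (assuming the default location of the client configuration file):

```sh
# Optional: keep a dated copy of the existing client configuration so it
# can be restored after the walkthrough.
cp ~/.kube/config ~/.kube/config.backup-$(date +%Y%m%d)
```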
Your system should have minishift configured and ready to use with your preferred VM driver, and the `oc` client to interact with the clusters. You can use the `oc` client bundled with minishift.
The steps in this walkthrough were tested with:
minishift version
minishift v1.28.0+48e89ed
The `kubefed2` tool manages federated cluster registration. Download the 0.0.7 release and unpack it into a directory in your `PATH` (the example uses `$HOME/bin`):
curl -LOs https://github.com/kubernetes-sigs/federation-v2/releases/download/v0.0.7/kubefed2.tar.gz
tar xzf kubefed2.tar.gz -C ~/bin
rm -f kubefed2.tar.gz
Verify that `kubefed2` is working:
kubefed2 version
kubefed2 version: version.Info{Version:"v0.0.7", GitCommit:"83b778b6d92a7929efb7687d3d3d64bf0b3ad3bc", GitTreeState:"clean", BuildDate:"2019-03-19T18:40:46Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Clone the demo code to your local machine:
git clone --recurse-submodules https://github.com/openshift/federation-dev.git
cd federation-dev/
Start two minishift clusters with OKD version 3.11 called `cluster1` and `cluster2`. Note that these cluster names are referenced throughout the walkthrough, so it's recommended that you adhere to them:
minishift start --profile cluster1
minishift start --profile cluster2
Each minishift invocation will generate output as it progresses and will conclude with instructions on how to access each cluster using a browser or the command line:
-- Starting profile 'cluster1'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
[output truncated]
OpenShift server started.
The server is accessible via web console at:
https://192.168.42.184:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
In order to use the `oc` client bundled with minishift, run this to add it to your `$PATH`:
eval $(minishift oc-env)
Cluster-wide federation needs cluster administrator privileges, so switch the `oc` client contexts to use the `system:admin` account instead of the default unprivileged `developer` user:
oc config use-context cluster2
oc login -u system:admin
oc config rename-context cluster2 cluster2-developer
oc config rename-context $(oc config current-context) cluster2
And the same for `cluster1`:
oc config use-context cluster1
oc login -u system:admin
oc config rename-context cluster1 cluster1-developer
oc config rename-context $(oc config current-context) cluster1
After this our current client context is for `system:admin` in `cluster1`. The following commands assume this is the active context:
oc config current-context
oc whoami
cluster1
system:admin
The presence and naming of the client contexts are important because the `kubefed2` tool uses them to manage cluster registration, and they are referenced by context name.
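A quick way to double-check is to list all client contexts and confirm that `cluster1` and `cluster2` are present and bound to the `system:admin` user (the exact cluster and user entry names in your configuration will differ):

```sh
# The CURRENT marker indicates the active context; cluster1 and cluster2
# should both appear in the NAME column.
oc config get-contexts
```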
Federation target clusters do not require federation to be installed on them at all, but for convenience we will use one of the clusters (`cluster1`) to host the federation control plane.
The federation controller also needs elevated privileges. Grant cluster-admin level to the default service account of the federation-system project (the project itself will be created soon):
oc create clusterrolebinding federation-admin \
--clusterrole="cluster-admin" \
--serviceaccount="federation-system:default"
Change directory to the Federation v2 repo (the repository submodule already points to `tag/v0.0.7`):
cd federation-v2/
Create the required namespaces:
oc create ns federation-system
oc create ns kube-multicluster-public
Deploy the federation control plane and its associated Custom Resource Definitions (CRDs):
sed -i "s/federation-v2:latest/federation-v2:v0.0.7/g" hack/install-latest.yaml
oc -n federation-system apply --validate=false -f hack/install-latest.yaml
Deploy the cluster registry CRD; registered clusters will be stored in the `kube-multicluster-public` namespace created above:
oc apply --validate=false -f vendor/k8s.io/cluster-registry/cluster-registry-crd.yaml
The above created:
- The federation CRDs
- A StatefulSet that deploys the federation controller, and a Service for it.
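A quick sanity check of the control plane is to list what landed in the `federation-system` namespace and the federation CRDs (the exact output varies slightly between versions):

```sh
# The controller workload and its service live in federation-system;
# the CRDs are cluster-scoped and carry the federation.k8s.io suffix.
oc -n federation-system get all
oc get crd | grep federation.k8s.io
```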
Now deploy the CRDs that determine which Kubernetes resources are federated across the clusters:
for filename in ./config/enabletypedirectives/*.yaml
do
kubefed2 enable -f "${filename}" --federation-namespace=federation-system
done
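Each `kubefed2 enable` call should result in a FederatedTypeConfig resource in the federation namespace (plus a corresponding federated CRD). A quick check, assuming the v0.0.7 resource names:

```sh
# Expect one entry per enabled type: deployments.apps, services,
# configmaps, secrets, namespaces, and so on.
oc -n federation-system get federatedtypeconfigs
```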
After a short while the federation controller manager pod is running:
oc get pod -n federation-system
NAME READY STATUS RESTARTS AGE
federation-controller-manager-69cd6d487f-rlmch 1/1 Running 0 31s
Verify that there are no clusters in the registry yet (but note that you can already reference the CRDs for federated clusters):
oc get federatedclusters -n federation-system
oc get clusters --all-namespaces
No resources found.
Now use the `kubefed2` tool to register (join) the two clusters:
kubefed2 join cluster1 \
--host-cluster-context cluster1 \
--cluster-context cluster1 \
--add-to-registry \
--v=2 \
--federation-namespace=federation-system
kubefed2 join cluster2 \
--host-cluster-context cluster1 \
--cluster-context cluster2 \
--add-to-registry \
--v=2 \
--federation-namespace=federation-system
Note that the names of the clusters (`cluster1` and `cluster2`) in the commands above are a reference to the contexts configured in the `oc` client. For this to work as expected you need to make sure that the client contexts have been properly configured with the right access levels and context names. The `--cluster-context` option for `kubefed2 join` can be used to override the reference to the client context configuration. When the option is not present, `kubefed2` uses the cluster name to identify the client context.
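For example, if the client context for the second cluster had been named differently (say `cluster2-admin`, a hypothetical name), the join could still register the cluster as `cluster2` by overriding the context reference:

```sh
# Hypothetical: the kubeconfig context is called cluster2-admin, but the
# cluster is registered in federation under the name cluster2.
kubefed2 join cluster2 \
    --host-cluster-context cluster1 \
    --cluster-context cluster2-admin \
    --add-to-registry \
    --v=2 \
    --federation-namespace=federation-system
```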
Verify that the federated clusters are registered and in a ready state (this can take a moment):
oc describe federatedclusters -n federation-system
Name: cluster1
Namespace: federation-system
Labels: <none>
Annotations: <none>
API Version: core.federation.k8s.io/v1alpha1
Kind: FederatedCluster
Metadata:
Creation Timestamp: 2019-03-21T14:23:21Z
Generation: 1
Resource Version: 31572
Self Link: /apis/core.federation.k8s.io/v1alpha1/namespaces/federation-system/federatedclusters/cluster1
UID: e173a643-4be4-11e9-a67c-525400a4ac7a
Spec:
Cluster Ref:
Name: cluster1
Secret Ref:
Name: cluster1-thlc6
Status:
Conditions:
Last Probe Time: 2019-03-21T14:23:57Z
Last Transition Time: 2019-03-21T14:23:57Z
Message: /healthz responded with ok
Reason: ClusterReady
Status: True
Type: Ready
Events: <none>
Name: cluster2
Namespace: federation-system
Labels: <none>
Annotations: <none>
API Version: core.federation.k8s.io/v1alpha1
Kind: FederatedCluster
Metadata:
Creation Timestamp: 2019-03-21T14:23:26Z
Generation: 1
Resource Version: 31576
Self Link: /apis/core.federation.k8s.io/v1alpha1/namespaces/federation-system/federatedclusters/cluster2
UID: e413c004-4be4-11e9-a67c-525400a4ac7a
Spec:
Cluster Ref:
Name: cluster2
Secret Ref:
Name: cluster2-rsngj
Status:
Conditions:
Last Probe Time: 2019-03-21T14:23:57Z
Last Transition Time: 2019-03-21T14:23:57Z
Message: /healthz responded with ok
Reason: ClusterReady
Status: True
Type: Ready
Events: <none>
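If you prefer a more compact check than the full describe output, a jsonpath query over the Ready condition works as well (a sketch based on the Status fields shown above):

```sh
# Print each federated cluster followed by the status of its Ready condition.
oc -n federation-system get federatedclusters \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
```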
Now that we have federation installed, let’s deploy an example app in both clusters through the federation control plane.
Create a test project (`test-namespace`) and add a federated placement policy for it:
cat << EOF | oc create -f -
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: Namespace
metadata:
name: test-namespace
- apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedNamespace
metadata:
name: test-namespace
namespace: test-namespace
spec:
placement:
clusterNames:
- cluster1
- cluster2
EOF
Verify that the namespace is present in both clusters now:
oc --context=cluster1 get ns | grep test
oc --context=cluster2 get ns | grep test
test-namespace Active 7s
test-namespace Active 7s
The container image we will use for our example application (nginx) requires the ability to choose its user id. Configure the clusters to grant that privilege:
for c in cluster1 cluster2; do
oc --context ${c} \
adm policy add-scc-to-user anyuid \
system:serviceaccount:test-namespace:default
done
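You can confirm the grant took effect by checking which users the `anyuid` SCC now lists on each cluster:

```sh
# The test-namespace default service account should appear in the list.
for c in cluster1 cluster2; do
    echo -n "${c}: "
    oc --context ${c} get scc anyuid -o jsonpath='{.users[*]}{"\n"}'
done
```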
The sample application includes the following resources:
- A Deployment of an nginx web server.
- A Service of type NodePort for nginx.
- A sample ConfigMap, Secret and ServiceAccount. These are not actually used by the sample application (static nginx) but are included to illustrate how federation would assist with more complex applications.
The sample-app directory contains definitions to deploy these resources. For each of them there is a resource template and a placement policy, and some of them also have overrides. For example: the sample nginx deployment template specifies 3 replicas, but there is also an override that sets the replicas to 5 on `cluster2`.
Instantiate all these federated resources:
cd ../
oc apply -R -f sample-app
Verify that the various resources have been deployed in both clusters according to their respective placement policies and cluster overrides:
for resource in configmaps secrets deployments services; do
for cluster in cluster1 cluster2; do
echo ------------ ${cluster} ${resource} ------------
oc --context=${cluster} -n test-namespace get ${resource}
done
done
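To see the `cluster2` replica override described earlier in effect, you can compare the replica counts directly (a sketch; it assumes the sample deployment is named `test-deployment`, as defined under sample-app):

```sh
# cluster1 should report the template default of 3 replicas,
# cluster2 the override value of 5.
for cluster in cluster1 cluster2; do
    echo -n "${cluster}: "
    oc --context=${cluster} -n test-namespace \
        get deployment test-deployment -o jsonpath='{.spec.replicas}{"\n"}'
done
```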
Verify that the application can be accessed:
host=$(oc whoami --show-server | sed -e 's#https://##' -e 's/:8443//')
port=$(oc get svc -n test-namespace test-service -o jsonpath={.spec.ports[0].nodePort})
curl -I $host:$port
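The commands above hit the cluster of the current context (`cluster1`). To check the application on both clusters you can derive the host from each context instead (a sketch, assuming the NodePort is reachable on each minishift VM's API address):

```sh
for cluster in cluster1 cluster2; do
    host=$(oc --context=${cluster} whoami --show-server | sed -e 's#https://##' -e 's/:8443//')
    port=$(oc --context=${cluster} -n test-namespace get svc test-service \
        -o jsonpath='{.spec.ports[0].nodePort}')
    echo "--- ${cluster} (${host}:${port})"
    curl -sI ${host}:${port} | head -n1
done
```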
Now modify the test namespace placement policy to remove `cluster2`, leaving it only active on `cluster1`:
oc -n test-namespace patch federatednamespace test-namespace \
--type=merge -p '{"spec":{"placement":{"clusterNames": ["cluster1"]}}}'
Observe how the federated resources are now only present in `cluster1`:
for resource in configmaps secrets deployments services; do
for cluster in cluster1 cluster2; do
echo ------------ ${cluster} ${resource} ------------
oc --context=${cluster} -n test-namespace get ${resource}
done
done
Now add `cluster2` back to the federated namespace placement:
oc -n test-namespace patch federatednamespace test-namespace \
--type=merge -p '{"spec":{"placement":{"clusterNames": ["cluster1", "cluster2"]}}}'
And verify that the federated resources were deployed on both clusters again:
for resource in configmaps secrets deployments services; do
for cluster in cluster1 cluster2; do
echo ------------ ${cluster} ${resource} ------------
oc --context=${cluster} -n test-namespace get ${resource}
done
done
To clean up only the test application run:
oc delete ns test-namespace
This leaves the two clusters with federation deployed. If you want to remove everything run:
for cluster in cluster1 cluster2; do
oc config delete-context ${cluster}-developer
oc config delete-context ${cluster}
minishift profile delete ${cluster}
done
Note that the `oc login` commands that were used to switch to the `system:admin` account might have created additional entries in your `oc` client configuration (`~/.kube/config`).
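If you want to tidy those up as well, list what is left and remove the entries you recognise as stale (a sketch; the actual cluster and user entry names depend on your environment):

```sh
# Inspect the leftovers first, then delete stale entries by name, e.g.
# (hypothetical names):
#   oc config delete-cluster 192-168-42-184:8443
#   oc config unset users.system:admin/192-168-42-184:8443
oc config get-contexts
oc config get-clusters
```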
This walkthrough does not go into detail about the components and resources involved in cluster federation. Feel free to explore the repository to review the YAML files that configure Federation and deploy the sample application. See also the upstream federation-v2 repo and its user guide, on which this guide is based.
Beyond that: minishift provides us with a quick and easy environment for testing, but it has limitations. More advanced aspects of cluster federation like managing ingress traffic or storage rely on supporting infrastructure for the clusters that is not available in minishift. These will be topics for more advanced guides.
One issue that has come up while working with this demo is a log entry generated once per minute per cluster about the lack of zone or region labels on the minishift nodes. The warning is harmless, but may interfere with finding real issues in the federation-controller-manager logs. An example follows:
W0321 15:51:31.208448 1 controller.go:216] Failed to get zones and region for cluster cluster1: Zone name for node localhost not found. No label with key failure-domain.beta.kubernetes.io/zone
W0321 15:51:31.298093 1 controller.go:216] Failed to get zones and region for cluster cluster2: Zone name for node localhost not found. No label with key failure-domain.beta.kubernetes.io/zone
The workaround is to label the minishift nodes with some zone and region data, e.g.:
oc --context=cluster1 label node localhost failure-domain.beta.kubernetes.io/region=minishift
oc --context=cluster2 label node localhost failure-domain.beta.kubernetes.io/region=minishift
oc --context=cluster1 label node localhost failure-domain.beta.kubernetes.io/zone=east
oc --context=cluster2 label node localhost failure-domain.beta.kubernetes.io/zone=west
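After labeling, you can confirm the labels are in place, e.g.:

```sh
# Show the region and zone labels as extra columns for each node.
for cluster in cluster1 cluster2; do
    oc --context=${cluster} get node localhost \
        -L failure-domain.beta.kubernetes.io/region,failure-domain.beta.kubernetes.io/zone
done
```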