Table of Contents
- Introduction
- Pre-requisites
- Federation deployment
- Example application
- Clean up
- What’s next?
- Known Issues
Note: This document is undergoing revisions with the v0.1.0 release of KubeFed, which takes the Federation Operator and the KubeFed project into Beta status.
This demo is a simple deployment of the Federation Operator on two OpenShift clusters. A sample application is deployed to both clusters through the federation controller.
Federation requires an OpenShift 3.11 cluster and works on both OKD and OpenShift Container Platform (OCP).
This walkthrough uses two all-in-one OKD clusters deployed with minishift.
Follow the getting started guide for minishift in the OKD documentation to get minishift installed.
Note: the steps below will create a few entries in the kubectl / oc client configuration file (~/.kube/config). If you have an existing client configuration file that you want to preserve unmodified, it is advisable to make a backup copy before starting.
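For example, assuming the client configuration lives at the default location, a simple copy is enough (the backup filename here is arbitrary):
# Back up the existing client configuration before minishift modifies it
cp ~/.kube/config ~/.kube/config.backup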
Your system should have minishift configured and ready to use with your preferred VM driver, and the oc client to interact with the clusters. You can use the oc client bundled with minishift.
The steps in this walkthrough were tested with:
minishift version
minishift v1.33.0+ba29431
The kubefedctl tool manages federated cluster registration. Download the v0.0.10 release and unpack it into a directory in your PATH (the example uses $HOME/bin):
curl -LOs https://github.com/kubernetes-sigs/kubefed/releases/download/v0.0.10/kubefedctl.tgz
tar xzf kubefedctl.tgz -C ~/bin
rm -f kubefedctl.tgz
Verify that kubefedctl is working:
kubefedctl version
kubefedctl version: version.Info{Version:"v0.0.10-dirty", GitCommit:"71d233ede685707df554ef653e06bf7f0229415c", GitTreeState:"dirty", BuildDate:"2019-05-06T22:30:31Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Clone the demo code to your local machine:
git clone https://github.com/openshift/federation-dev.git
cd federation-dev/
Start two minishift clusters with OKD version 3.11 called cluster1 and cluster2. Note that these cluster names are referenced throughout the walkthrough, so it's recommended that you adhere to them:
minishift start --profile cluster1
minishift start --profile cluster2
Each minishift invocation will generate output as it progresses and will conclude with instructions on how to access each cluster using a browser or the command line:
-- Starting profile 'cluster1'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
[output truncated]
OpenShift server started.
The server is accessible via web console at:
https://192.168.42.184:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
In order to use the oc client bundled with minishift, run this to add it to your $PATH:
eval $(minishift oc-env)
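To confirm that the bundled client is the one that will be used from now on, you can check which oc binary is first in your PATH and its version:
# Verify the oc client picked up from the minishift environment
which oc
oc version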
We need cluster administrator privileges, so switch the oc client contexts to use the system:admin account instead of the default unprivileged developer user:
oc config use-context cluster2
oc login -u system:admin
oc config rename-context cluster2 cluster2-developer
oc config rename-context $(oc config current-context) cluster2
And the same for cluster1:
oc config use-context cluster1
oc login -u system:admin
oc config rename-context cluster1 cluster1-developer
oc config rename-context $(oc config current-context) cluster1
After this, our current client context is for system:admin in cluster1. The following commands assume this is the active context:
oc config current-context
oc whoami
cluster1
system:admin
The presence and naming of the client contexts is important because the kubefedctl tool uses them to manage cluster registration, and clusters are referenced by context name.
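Before continuing, it is worth listing the configured contexts as a quick sanity check; after the renaming steps above, cluster1 and cluster2 should both point at the system:admin user:
# List all client contexts and the users they map to
oc config get-contexts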
Federation target clusters do not require federation to be installed on them at all, but for convenience we will use one of the clusters (cluster1) to host the federation control plane.
At the moment the Federation Operator only works in namespace-scoped mode; cluster-scoped mode will be supported by the operator in the future.
In order to deploy the operator we are going to use OLM (the Operator Lifecycle Manager), so we need to deploy OLM before deploying the federation operator.
oc create -f olm/01-olm.yaml
oc create -f olm/02-olm.yaml
Now that OLM is deployed, it is time to deploy the federation operator. We will create a new namespace, test-namespace, where the kubefed controller will be deployed.
Wait until all pods in namespace olm are running:
oc get pods -n olm
NAME READY STATUS RESTARTS AGE
catalog-operator-bfc6fd7bc-xdwbs 1/1 Running 0 3m
olm-operator-787885c577-wmzxp 1/1 Running 0 3m
olm-operators-gmnk4 1/1 Running 0 3m
operatorhubio-catalog-gng4x 1/1 Running 0 3m
packageserver-7fc659d9cb-5qbw9 1/1 Running 0 2m
packageserver-7fc659d9cb-tl9gv 1/1 Running 0 2m
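If you prefer not to poll manually, a command along these lines should block until the OLM pods report Ready (oc wait should be available with a 3.11 or newer client; the timeout value is arbitrary):
# Wait for every pod in the olm namespace to become Ready
oc wait --for=condition=Ready pods --all -n olm --timeout=300s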
Then, create the kubefed subscription.
oc create -f olm/kubefed.yaml
After a short while the kubefed controller manager pod is running:
NOTE: It can take up to a minute for the pod to appear
oc get pod -n test-namespace
NAME READY STATUS RESTARTS AGE
federation-controller-manager-6bcf6c695f-bx7kd 1/1 Running 0 1m
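If the pod does not appear, the state of the OLM install can provide a hint. Assuming the subscription creates its ClusterServiceVersion in the same namespace, its status can be checked with:
# The CSV phase should eventually report Succeeded
oc get csv -n test-namespace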
Now we are going to enable some of the federated types needed for our demo application:
for type in namespaces secrets serviceaccounts services configmaps deployments.apps
do
kubefedctl enable $type --federation-namespace test-namespace
done
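Each enabled type should result in a FederatedTypeConfig object in the federation namespace; assuming that resource name (it comes from the kubefed CRDs), they can be listed with:
# One FederatedTypeConfig per enabled type
oc get federatedtypeconfigs -n test-namespace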
Verify that there are no clusters yet (but note that you can already reference the CRDs for federated clusters):
oc get federatedclusters -n test-namespace
No resources found.
Now use the kubefedctl tool to register (join) the two clusters:
kubefedctl join cluster1 \
--host-cluster-context cluster1 \
--cluster-context cluster1 \
--add-to-registry \
--v=2 \
--federation-namespace=test-namespace
kubefedctl join cluster2 \
--host-cluster-context cluster1 \
--cluster-context cluster2 \
--add-to-registry \
--v=2 \
--federation-namespace=test-namespace
Note that the names of the clusters (cluster1 and cluster2) in the commands above are a reference to the contexts configured in the oc client. For this to work as expected you need to make sure that the client contexts have been properly configured with the right access levels and context names. The --cluster-context option for kubefedctl join can be used to override the reference to the client context configuration. When the option is not present, kubefedctl uses the cluster name to identify the client context.
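For example, if the admin context for the second cluster had been left under a different name (my-cluster2-admin below is made up for illustration), the join could reference it explicitly:
# Hypothetical: join cluster2 using a client context that is not named after the cluster
kubefedctl join cluster2 \
    --host-cluster-context cluster1 \
    --cluster-context my-cluster2-admin \
    --add-to-registry \
    --v=2 \
    --federation-namespace=test-namespace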
Verify that the federated clusters are registered and in a ready state (this can take a moment):
oc describe federatedclusters -n test-namespace
Name: cluster1
Namespace: test-namespace
Labels: <none>
Annotations: <none>
API Version: core.federation.k8s.io/v1alpha1
Kind: FederatedCluster
Metadata:
Creation Timestamp: 2019-05-15T15:43:03Z
Generation: 1
Resource Version: 7513
Self Link: /apis/core.federation.k8s.io/v1alpha1/namespaces/test-namespace/federatedclusters/cluster1
UID: 205ee241-7728-11e9-9ec5-525400940741
Spec:
Cluster Ref:
Name: cluster1
Secret Ref:
Name: cluster1-t8dt6
Status:
Conditions:
Last Probe Time: 2019-05-15T15:46:54Z
Last Transition Time: 2019-05-15T15:43:13Z
Message: /healthz responded with ok
Reason: ClusterReady
Status: True
Type: Ready
Events: <none>
Name: cluster2
Namespace: test-namespace
Labels: <none>
Annotations: <none>
API Version: core.federation.k8s.io/v1alpha1
Kind: FederatedCluster
Metadata:
Creation Timestamp: 2019-05-15T15:43:08Z
Generation: 1
Resource Version: 7512
Self Link: /apis/core.federation.k8s.io/v1alpha1/namespaces/test-namespace/federatedclusters/cluster2
UID: 237afb85-7728-11e9-9ec5-525400940741
Spec:
Cluster Ref:
Name: cluster2
Secret Ref:
Name: cluster2-pp4tk
Status:
Conditions:
Last Probe Time: 2019-05-15T15:46:54Z
Last Transition Time: 2019-05-15T15:43:13Z
Message: /healthz responded with ok
Reason: ClusterReady
Status: True
Type: Ready
Events: <none>
Now that we have federation installed, let’s deploy an example app in both clusters through the federation control plane.
Verify that the namespace is present in both clusters now:
oc --context=cluster1 get ns | grep test-namespace
oc --context=cluster2 get ns | grep test-namespace
test-namespace Active 8m
test-namespace Active 5m
The container image we will use for our example application (nginx) requires the ability to choose its user id. Configure the clusters to grant that privilege:
for c in cluster1 cluster2; do
oc --context ${c} \
adm policy add-scc-to-user anyuid \
system:serviceaccount:test-namespace:default
done
The sample application includes the following resources:
- A Deployment of an nginx web server.
- A Service of type NodePort for nginx.
- A sample ConfigMap, Secret and ServiceAccount. These are not actually used by the sample application (static nginx) but are included to illustrate how federation would assist with more complex applications.
The sample-app directory contains definitions to deploy these resources. For each of them there is a resource template and a placement policy, and some of them also have overrides. For example, the sample nginx deployment template specifies 3 replicas, but there is also an override that sets the replicas to 5 on cluster2.
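As a rough illustration (this is not the actual file from sample-app, and the exact field names and override path syntax of the v0.0.10 API may differ slightly), a federated deployment with a per-cluster replica override looks roughly like this:
# Illustration only -- written to a temporary file, not applied
cat <<'EOF' > /tmp/federated-deployment-sketch.yaml
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test-namespace
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
  placement:
    clusterNames:
    - cluster1
    - cluster2
  overrides:
  - clusterName: cluster2
    clusterOverrides:
    - path: spec.replicas
      value: 5
EOF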
Instantiate all these federated resources:
oc apply -R -f sample-app
Verify that the various resources have been deployed in both clusters according to their respective placement policies and cluster overrides:
for resource in configmaps secrets deployments services; do
for cluster in cluster1 cluster2; do
echo ------------ ${cluster} ${resource} ------------
oc --context=${cluster} -n test-namespace get ${resource}
done
done
Verify that the application can be accessed:
for cluster in cluster1 cluster2; do
echo ------------ ${cluster} ------------
host=$(oc --context $cluster whoami --show-server | sed -e 's#https://##' -e 's/:8443//')
port=$(oc --context $cluster get svc -n test-namespace test-service -o jsonpath={.spec.ports[0].nodePort})
curl -I $host:$port
done
Now modify the test-deployment federated deployment placement policy to remove cluster2, leaving it active only on cluster1:
oc -n test-namespace patch federateddeployment test-deployment \
--type=merge -p '{"spec":{"placement":{"clusterNames": ["cluster1"]}}}'
Observe how the federated deployment is now only present in cluster1:
for cluster in cluster1 cluster2; do
echo ------------ ${cluster} deployments ------------
oc --context=${cluster} -n test-namespace get deployments
done
Now add cluster2 back to the federated deployment placement:
oc -n test-namespace patch federateddeployment test-deployment \
--type=merge -p '{"spec":{"placement":{"clusterNames": ["cluster1", "cluster2"]}}}'
And verify that the federated deployment was deployed on both clusters again:
for cluster in cluster1 cluster2; do
echo ------------ ${cluster} deployments ------------
oc --context=${cluster} -n test-namespace get deployments
done
To clean up only the test application run:
oc delete -R -f sample-app
This leaves the two clusters with federation deployed. If you want to remove everything run:
for cluster in cluster1 cluster2; do
oc config delete-context ${cluster}-developer
oc config delete-context ${cluster}
minishift profile delete ${cluster}
done
Note that the oc login commands that were used to switch to the system:admin account might have created additional entries in your oc client configuration (~/.kube/config).
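If you want to tidy those up as well, the extra user entries can be listed and then removed one by one (the removal command is commented out because the entry names depend on your cluster addresses):
# List the user entries that oc login may have added
oc config view -o jsonpath='{.users[*].name}'; echo
# Remove an unwanted entry, substituting a name from the list above:
# oc config unset users.<entry-name>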
This walkthrough does not go into detail about the components and resources involved in cluster federation. Feel free to explore the repository to review the YAML files that configure federation and deploy the sample application. See also the upstream kubefed repository and its user guide, on which this guide is based.
Beyond that: minishift provides us with a quick and easy environment for testing, but it has limitations. More advanced aspects of cluster federation like managing ingress traffic or storage rely on supporting infrastructure for the clusters that is not available in minishift. These will be topics for more advanced guides.
One issue that has come up while working with this demo is a log entry generated once per minute per cluster about the missing zone and region labels on the minishift nodes. The error is harmless, but it may make it harder to spot real issues in the federation-controller-manager logs. An example follows:
W0321 15:51:31.208448 1 controller.go:216] Failed to get zones and region for cluster cluster1: Zone name for node localhost not found. No label with key failure-domain.beta.kubernetes.io/zone
W0321 15:51:31.298093 1 controller.go:216] Failed to get zones and region for cluster cluster2: Zone name for node localhost not found. No label with key failure-domain.beta.kubernetes.io/zone
The workaround is to label the minishift nodes with some zone and region data, e.g.:
oc --context=cluster1 label node localhost failure-domain.beta.kubernetes.io/region=minishift
oc --context=cluster2 label node localhost failure-domain.beta.kubernetes.io/region=minishift
oc --context=cluster1 label node localhost failure-domain.beta.kubernetes.io/zone=east
oc --context=cluster2 label node localhost failure-domain.beta.kubernetes.io/zone=west
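To confirm the labels were applied, a plain node listing with labels is enough:
# Show the labels, including zone and region, on each minishift node
oc --context=cluster1 get node localhost --show-labels
oc --context=cluster2 get node localhost --show-labels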