# azure-cloud-controller-manager

`azure-cloud-controller-manager` is a Kubernetes component that provides interoperability with the Azure API and is used by Kubernetes clusters running on Azure. It runs together with other components to provide the Kubernetes cluster's control plane.
Using cloud-controller-manager has been an alpha feature of Kubernetes since v1.6. `cloud-controller-manager` runs the cloud-provider-specific controller loops that used to be run by `kube-controller-manager`.

`azure-cloud-controller-manager` is a specialization of `cloud-controller-manager`: it depends on the cloud-controller-manager app and the Azure cloud provider.
To use cloud-controller-manager, the following components need to be configured:

- kubelet

  | Flag | Value | Remark |
  | --- | --- | --- |
  | `--cloud-provider` | `external` | cloud-provider should be set to `external` |
  | `--azure-container-registry-config` | `/etc/kubernetes/azure.json` | Used by the Azure credential provider |

- kube-controller-manager

  | Flag | Value | Remark |
  | --- | --- | --- |
  | `--cloud-provider` | `external` | cloud-provider should be set to `external` |
  | `--external-cloud-volume-plugin` | `azure` | Optional* |

  \* Because cloud-controller-manager does not support volume controllers, it does not provide the volume capabilities of the previous built-in cloud provider. You can set this flag to turn on the volume controller for in-tree cloud providers. This option is likely to be removed together with the in-tree cloud providers in the future.

- kube-apiserver

  Do not set the flag `--cloud-provider`.

- azure-cloud-controller-manager

  Set the following flags:

  | Flag | Value | Remark |
  | --- | --- | --- |
  | `--cloud-provider` | `azure` | cloud-provider should be set to `azure` |
  | `--cloud-config` | `/etc/kubernetes/azure.json` | Path of the cloud provider config |
  | `--kubeconfig` | `/etc/kubernetes/kubeconfig` | Path of the cluster kubeconfig |

  Other flags, such as `--allocate-node-cidrs`, `--configure-cloud-routes` and `--cluster-cidr`, have been moved from kube-controller-manager. If you are migrating from kube-controller-manager, they should be set to the same values. For details of those flags, please refer to this doc.
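Both kubelet and azure-cloud-controller-manager read the Azure credentials and cluster settings from `/etc/kubernetes/azure.json`. As a rough illustration, a minimal config might look like the sketch below; all values are placeholders for this example, and the full list of supported fields is documented in the cloud provider configuration doc:

```json
{
  "cloud": "AzurePublicCloud",
  "tenantId": "<tenant-id>",
  "subscriptionId": "<subscription-id>",
  "aadClientId": "<service-principal-client-id>",
  "aadClientSecret": "<service-principal-client-secret>",
  "resourceGroup": "<resource-group-of-the-cluster>",
  "location": "<azure-region>",
  "vmType": "standard",
  "useInstanceMetadata": true
}
```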
Alternatively, you can use aks-engine to deploy a Kubernetes cluster running with cloud-controller-manager. It supports deploying clusters with `azure-cloud-controller-manager` for Kubernetes v1.8+.
AzureDisk and AzureFile volume plugins are not supported with the external cloud provider (see kubernetes/kubernetes#71018 for the explanation). Hence, azuredisk-csi-driver and azurefile-csi-driver should be used for persistent volumes.
Run the following commands:

```sh
# run deploy.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/crd-csi-driver-registry.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/crd-csi-node-info.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/rbac-csi-azuredisk-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/rbac-csi-driver-registrar.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/csi-azuredisk-provisioner.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/csi-azuredisk-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/csi-azuredisk-node.yaml

# create storage class.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/storageclass-azuredisk-csi.yaml
```
See azuredisk-csi-driver for more details.
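Once the driver and the example storage class are deployed, a persistent volume can be requested through a PVC. A minimal sketch, assuming a placeholder storage class name (check the real name created by `storageclass-azuredisk-csi.yaml` with `kubectl get storageclass`):

```yaml
# Hypothetical PVC for an AzureDisk CSI volume; replace storageClassName
# with the name of the storage class created above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
    - ReadWriteOnce   # AzureDisk volumes attach to a single node
  resources:
    requests:
      storage: 10Gi
  storageClassName: <azuredisk-csi-storage-class>
```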
Run the following commands:

```sh
# run deploy.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/crd-csi-driver-registry.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/crd-csi-node-info.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/csi-azurefile-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/csi-azurefile-node.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/rbac-csi-azurefile-controller.yaml

# create storage class.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/storageclass-azurefile-csi.yaml
```
See azurefile-csi-driver for more details.
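Unlike AzureDisk, AzureFile volumes can be mounted by multiple nodes at once, so the corresponding PVC can request `ReadWriteMany`. A minimal sketch, again with a placeholder storage class name (check the real name created by `storageclass-azurefile-csi.yaml` with `kubectl get storageclass`):

```yaml
# Hypothetical PVC for an AzureFile CSI volume; replace storageClassName
# with the name of the storage class created above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azurefile
spec:
  accessModes:
    - ReadWriteMany   # AzureFile shares support multi-node access
  resources:
    requests:
      storage: 10Gi
  storageClassName: <azurefile-csi-storage-class>
```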
Follow the steps below if you want to change the current default storage class to the AzureDisk CSI driver.
First, delete the current default storage class:

```sh
kubectl delete storageclass default
```

Then create a new storage class named `default`:
```sh
cat <<EOF | kubectl apply -f-
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: default
provisioner: disk.csi.azure.com
parameters:
  skuname: Standard_LRS # available values: Standard_LRS, Premium_LRS, StandardSSD_LRS and UltraSSD_LRS
  kind: managed # values "dedicated" and "shared" are deprecated since they use unmanaged disks
  cachingMode: ReadOnly
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
```
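With this default class in place, any PVC that omits `storageClassName` is provisioned by the AzureDisk CSI driver. A minimal sketch:

```yaml
# Hypothetical PVC relying on the cluster default storage class:
# no storageClassName is set, so the "default" class above is used.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```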
Build the project:

```sh
make
```

Build the image:

```sh
IMAGE_REGISTRY=<registry> make image
```

Run unit tests:

```sh
make test-unit
```

Update dependencies (please check Dependency management for additional information):

```sh
make update
```
Because CSI is not yet ready on Windows, the AzureDisk/AzureFile CSI drivers do not support Windows either. If you have Windows nodes in the cluster, please use kube-controller-manager instead of cloud-controller-manager.