MinIO is a Kubernetes-native high performance object store with an S3-compatible API. The MinIO Kubernetes Operator supports deploying MinIO Tenants onto private and public cloud infrastructures ("Hybrid" Cloud).
This README provides a high level description of the MinIO Operator and quickstart instructions. See https://min.io/docs/minio/kubernetes/upstream/index.html for complete documentation on the MinIO Operator.
Each MinIO Tenant represents an independent MinIO Object Store within the Kubernetes cluster. The following diagram describes the architecture of a MinIO Tenant deployed into Kubernetes:
MinIO provides multiple methods for accessing and managing the MinIO Tenant:
The MinIO Console provides a graphical user interface (GUI) for interacting with MinIO Tenants. The MinIO Operator installs and configures the Console for each tenant by default.
Administrators of MinIO Tenants can perform a variety of tasks through the Console, including user creation, policy configuration, and bucket replication. The Console also provides a high level view of Tenant health, usage, and healing status.
For more complete documentation on using the MinIO Console, see the MinIO Console GitHub Repository.
The MinIO Operator extends the Kubernetes API to support deploying MinIO-specific resources as a Tenant in a Kubernetes cluster.
The MinIO kubectl minio plugin wraps the Operator to provide a simplified interface for deploying and managing MinIO Tenants in a Kubernetes cluster through the kubectl command line tool.
This procedure installs the MinIO Operator and creates a 4-node MinIO Tenant for supporting object storage operations in a Kubernetes cluster.
Starting with Operator v5.0.0, MinIO requires Kubernetes version 1.21.0 or later. You must upgrade your Kubernetes cluster to 1.21.0 or later to use Operator v5.0.0+.
Starting with Operator v4.0.0, MinIO requires Kubernetes version 1.19.0 or later. Previous versions of the Operator supported Kubernetes 1.17.0 or later. You must upgrade your Kubernetes cluster to 1.19.0 or later to use Operator v4.0.0+.
This procedure assumes the host machine has kubectl installed and configured with access to the target Kubernetes cluster.
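Before continuing, you can confirm that kubectl can reach the cluster. A minimal check, assuming your current kubectl context points at the target cluster:

kubectl get nodes    # lists cluster nodes if access is configured correctly
kubectl version      # shows the client and server versions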
MinIO supports no more than one MinIO Tenant per Namespace. The following kubectl command creates a new namespace for the MinIO Tenant.
kubectl create namespace minio-tenant-1
The MinIO Operator Console supports creating a namespace as part of the Tenant Creation procedure.
The MinIO Kubernetes Operator automatically generates Persistent Volume Claims (PVC) as part of deploying a MinIO Tenant. The plugin defaults to creating each PVC with the default Kubernetes Storage Class. If the default storage class cannot support the generated PVC, the tenant may fail to deploy.
MinIO Tenants require that the StorageClass sets volumeBindingMode to WaitForFirstConsumer. The default StorageClass may use the Immediate setting, which can cause complications during PVC binding. MinIO strongly recommends creating a custom StorageClass for use by the PV supporting a MinIO Tenant.
The following StorageClass object contains the appropriate fields for supporting a MinIO Tenant using MinIO DirectPV-managed drives:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: directpv-min-io
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
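To register the StorageClass, save the manifest to a file and apply it. The filename directpv-min-io.yaml below is only an example:

kubectl apply -f directpv-min-io.yaml          # create the StorageClass
kubectl get storageclass directpv-min-io       # verify it exists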
The MinIO Operator generates one Persistent Volume Claim (PVC) for each volume in the tenant, plus two PVC to support collecting Tenant metrics and logs. The cluster must have sufficient Persistent Volumes (PV) that meet the capacity requirements of each PVC for the tenant to start correctly. For example, deploying a Tenant with 16 volumes requires 18 PV (16 + 2). If each PVC requests 1TB of capacity, then each PV must also provide at least 1TB of capacity.
MinIO recommends using the MinIO DirectPV Driver to automatically provision Persistent Volumes from locally attached drives. This procedure assumes MinIO DirectPV is installed and configured.
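If DirectPV is not yet installed, the following sketch shows one common installation path, assuming the krew plugin manager is available; see the MinIO DirectPV documentation for the authoritative steps, including discovering and initializing drives:

kubectl krew install directpv    # install the DirectPV kubectl plugin
kubectl directpv install         # deploy DirectPV into the cluster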
For clusters which cannot deploy MinIO DirectPV, use Local Persistent Volumes. The following example YAML describes a local PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <PV-NAME>
spec:
  capacity:
    storage: 1Ti
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: </mnt/disks/ssd1>
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <NODE-NAME>
Replace values in brackets <VALUE> with the appropriate value for the local drive.
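Create one such PV for each drive on each node. As a sketch, assuming the manifest above is saved as local-pv.yaml (an example filename):

kubectl apply -f local-pv.yaml   # create the local PV
kubectl get pv                   # confirm capacity and Available status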
You can estimate the number of PVC by multiplying the number of minio server pods in the Tenant by the number of drives per node. For example, a 4-node Tenant with 4 drives per node requires 16 PVC and therefore 16 PV.
MinIO strongly recommends using a CSI driver designed for local storage, such as MinIO DirectPV, for creating local PV to ensure the best object storage performance.
Follow the Install kustomize guide for your host system before starting this procedure.
VERSION=v5.0.11
TIMEOUT=120 # The default timeout is 27 seconds; allow more time in case the connection is slow.
kustomize build "github.com/minio/operator/resources/?timeout=${TIMEOUT}&ref=${VERSION}" > operator.yaml
kubectl apply -f operator.yaml
Run the following command to verify the status of the Operator:
kubectl get pods -n minio-operator
The output resembles the following:
NAME                              READY   STATUS    RESTARTS   AGE
console-6b6cf8946c-9cj25          1/1     Running   0          99s
minio-operator-69fd675557-lsrqg   1/1     Running   0          99s
The console-* pod runs the MinIO Operator Console, a graphical user interface for creating and managing MinIO Tenants. The minio-operator-* pod runs the MinIO Operator itself.
Get the JWT token for logging in to the Operator Console:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: console-sa-secret
  namespace: minio-operator
  annotations:
    kubernetes.io/service-account.name: console-sa
type: kubernetes.io/service-account-token
EOF
SA_TOKEN=$(kubectl -n minio-operator get secret console-sa-secret -o jsonpath="{.data.token}" | base64 --decode)
echo $SA_TOKEN
Change the console service to use NodePort:
spec:
  ports:
  - name: http
    protocol: TCP
    port: 9090
    targetPort: 9090
    nodePort: 30080      # the Console is reachable on this port on each node
  - name: https
    protocol: TCP
    port: 9443
    targetPort: 9443
    nodePort: 30869
  selector:
    app: console
  clusterIP: 10.96.69.150
  clusterIPs:
  - 10.96.69.150
  type: NodePort         # NodePort instead of the default ClusterIP
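You can make this change by hand with kubectl -n minio-operator edit service console. As an alternative sketch, the following strategic merge patch sets the same fields, assuming the Operator Console Service is named console in the minio-operator namespace as shown above:

kubectl -n minio-operator patch service console --patch '
spec:
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 9090
    targetPort: 9090
    nodePort: 30080
  - name: https
    protocol: TCP
    port: 9443
    targetPort: 9443
    nodePort: 30869
'
kubectl -n minio-operator get service console   # confirm the NodePort assignments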
Open your browser to the node address on the configured NodePort (30080 in the example above) and use the JWT token to log in to the Operator Console.
Click + Create Tenant to open the Tenant Creation workflow.
The Operator Console Create New Tenant walkthrough builds out a MinIO Tenant. The following list describes the basic configuration sections.
- Name - Specify the Name, Namespace, and Storage Class for the new Tenant.
  The Storage Class must correspond to a Storage Class backed by Local Persistent Volumes that can support the MinIO Tenant.
  The Namespace must correspond to an existing Namespace that does not contain any other MinIO Tenant.
  Enable Advanced Mode to access additional advanced configuration options.
- Tenant Size - Specify the Number of Servers, Number of Drives per Server, and Total Size of the Tenant.
  The Resource Allocation section summarizes the Tenant configuration based on the inputs above.
  Additional configuration inputs may be visible if Advanced Mode was enabled in the previous step.
- Preview Configuration - summarizes the details of the new Tenant.
After configuring the Tenant to your requirements, click Create to create the new tenant.
The Operator Console displays credentials for connecting to the MinIO Tenant. You must download and secure these credentials at this stage. You cannot trivially retrieve these credentials later.
You can monitor Tenant creation from the Operator Console.
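You can also watch the deployment from the command line; this sketch assumes the Tenant was created in the minio-tenant-1 namespace from earlier in this procedure:

kubectl get tenants -n minio-tenant-1    # inspect the Tenant custom resource
kubectl get pods -n minio-tenant-1 -w    # watch the Tenant pods come up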
Use the following command to list the services created by the MinIO Operator:
kubectl get svc -n NAMESPACE
Replace NAMESPACE with the namespace for the MinIO Tenant. The output resembles the following:
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)
minio                       LoadBalancer   10.104.10.9      <pending>     443:31834/TCP
myminio-console             LoadBalancer   10.104.216.5     <pending>     9443:31425/TCP
myminio-hl                  ClusterIP      None             <none>        9000/TCP
myminio-log-hl-svc          ClusterIP      None             <none>        5432/TCP
myminio-log-search-api      ClusterIP      10.102.151.239   <none>        8080/TCP
myminio-prometheus-hl-svc   ClusterIP      None             <none>        9090/TCP
Applications internal to the Kubernetes cluster should use the minio service for performing object storage operations on the Tenant.
Administrators of the Tenant should use the myminio-console service to access the MinIO Console and manage the Tenant, such as provisioning users, groups, and policies for the Tenant.
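As a sketch of in-cluster access, a pod with the MinIO Client (mc) could reach the Tenant through the minio service. The alias name myminio and the minio-tenant-1 namespace are examples; use the credentials downloaded during tenant creation:

mc alias set myminio https://minio.minio-tenant-1.svc.cluster.local <ACCESS_KEY> <SECRET_KEY>
mc admin info myminio    # confirm connectivity and show the Tenant topology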
MinIO Tenants deploy with TLS enabled by default, where the MinIO Operator uses the Kubernetes certificates.k8s.io API to generate the required x.509 certificates. Each certificate is signed using the Kubernetes Certificate Authority (CA) configured during cluster deployment. While Kubernetes mounts this CA on Pods in the cluster, Pods do not trust that CA by default. You must copy the CA to a directory such that the update-ca-certificates utility can find and add it to the system trust store to enable validation of MinIO TLS certificates:
cp /var/run/secrets/kubernetes.io/serviceaccount/ca.crt /usr/local/share/ca-certificates/
update-ca-certificates
For applications external to the Kubernetes cluster, you must configure Ingress or a Load Balancer to expose the MinIO Tenant services. Alternatively, you can use the kubectl port-forward command to temporarily forward traffic from the local host to the MinIO Tenant.
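For example, the following commands, which assume the Tenant services listed above in the minio-tenant-1 namespace, forward the S3 API and Tenant Console to the local host for as long as each command runs:

kubectl port-forward -n minio-tenant-1 svc/minio 9000:443             # S3 API on https://localhost:9000
kubectl port-forward -n minio-tenant-1 svc/myminio-console 9443:9443  # Tenant Console on https://localhost:9443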
Use of MinIO Operator is governed by the GNU AGPLv3 or later, found in the LICENSE file.