# MinIO and Koperator
How MinIO integrates with Koperator in Kubernetes (k8s).
Documentation based on:

- https://kafka.apache.org/quickstart
- https://banzaicloud.com/docs/supertubes/kafka-operator/install-kafka-operator/
- https://github.com/banzaicloud/koperator
- https://banzaicloud.com/docs/supertubes/kafka-operator/test/
- https://docs.min.io/docs/minio-bucket-notification-guide.html
- Start a cluster, making sure enough resources are allocated (minimum 6 vCPUs and 10 GB RAM):

  ```shell
  minikube stop
  minikube delete
  minikube start --cpus=6 --memory=10240
  ```
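  A quick sanity check that the cluster came up (optional):

  ```shell
  # The node should report Ready before you continue
  minikube status
  kubectl get nodes
  ```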
- Install Koperator following the documentation at https://banzaicloud.com/docs/supertubes/kafka-operator/install-kafka-operator/
- Test your Kafka cluster following the documentation at https://banzaicloud.com/docs/supertubes/kafka-operator/test/
- Install the MinIO Operator and a MinIO Tenant:

  ```shell
  kubectl apply -k ~/operator/resources
  kubectl apply -k ~/operator/examples/kustomization/tenant-lite
  ```
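  The tenant pods can take a minute or two to come up; you can watch them (the pod and namespace names come from the tenant-lite example):

  ```shell
  # Wait until the storage-lite-pool-0-* pods are Running and Ready
  kubectl get pods -n tenant-lite -w
  ```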
- Forward the MinIO port (wait for the MinIO pods to be ready first):

  ```shell
  kubectl port-forward storage-lite-pool-0-0 -n tenant-lite 9000
  ```
- Add a MinIO alias:

  ```shell
  mc alias set myminio https://localhost:9000 minio minio123 --insecure
  ```

  You should see:

  ```
  $ mc alias set myminio https://localhost:9000 minio minio123 --insecure
  Added `myminio` successfully.
  ```
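  To confirm the alias and credentials work before wiring up notifications, you can run a quick sanity check:

  ```shell
  # Should print the tenant's server status if the alias is good
  mc admin info myminio --insecure
  ```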
- Add the Kafka endpoint to MinIO:

  ```shell
  mc admin config set myminio notify_kafka:1 tls_skip_verify="off" \
    queue_dir="" queue_limit="0" sasl="off" sasl_password="" sasl_username="" \
    tls_client_auth="0" tls="off" client_tls_cert="" client_tls_key="" \
    brokers="172.17.0.11:29093" topic="my-topic" version="" --insecure
  ```

  You should see:

  ```
  Successfully applied new settings.
  Please restart your server 'mc admin service restart myminio'.
  ```
- Restart MinIO:

  ```shell
  mc admin service restart myminio --insecure
  ```
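  After the restart, you can confirm the Kafka target was persisted (output formatting varies across MinIO releases):

  ```shell
  # Print the notify_kafka target configuration back
  mc admin config get myminio notify_kafka --insecure
  ```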
- Enable Kafka bucket notifications using the MinIO client:

  ```shell
  mc mb myminio/images --insecure
  mc event add myminio/images arn:minio:sqs::1:kafka --suffix .jpg --insecure
  mc event list myminio/images --insecure
  ```

  You should see:

  ```
  $ mc mb myminio/images --insecure
  Bucket created successfully `myminio/images`.
  $ mc event add myminio/images arn:minio:sqs::1:kafka --suffix .jpg --insecure
  Successfully added arn:minio:sqs::1:kafka
  $ mc event list myminio/images --insecure
  arn:minio:sqs::1:kafka   s3:ObjectCreated:*,s3:ObjectRemoved:*,s3:ObjectAccessed:*   Filter: suffix=".jpg"
  ```
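  If you no longer have a consumer running from the Kafka testing step, you can start a throwaway one to watch `my-topic`. This is a sketch that assumes the `kafka-headless` bootstrap service in the `kafka` namespace used by the Koperator docs, and the public `edenhill/kcat` image:

  ```shell
  # Run a disposable kcat consumer against the Kafka cluster's headless service
  kubectl run kcat -n kafka -it --rm --restart=Never \
    --image=edenhill/kcat:1.7.1 -- \
    -b kafka-headless.kafka:29092 -t my-topic -C
  ```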
- Upload or copy an image to the bucket you created:

  ```shell
  mc cp rose.jpg myminio/images --insecure
  ```

- As a result, the Kafka consumer you have running (from the testing step, or the kcat sketch above) will print the event:
  ```json
  {
    "EventName": "s3:ObjectCreated:Put",
    "Key": "images/rose.jpg",
    "Records": [
      {
        "eventVersion": "2.0",
        "eventSource": "minio:s3",
        "awsRegion": "",
        "eventTime": "2022-04-05T01:43:53.018Z",
        "eventName": "s3:ObjectCreated:Put",
        "userIdentity": {
          "principalId": "minio"
        },
        "requestParameters": {
          "principalId": "minio",
          "region": "",
          "sourceIPAddress": "127.0.0.1"
        },
        "responseElements": {
          "content-length": "0",
          "x-amz-request-id": "16E2DCA93D259AE4",
          "x-minio-deployment-id": "968b47d7-4857-4cf0-8906-58ce4716e1e6",
          "x-minio-origin-endpoint": "https://minio.tenant-lite.svc.cluster.local"
        },
        "s3": {
          "s3SchemaVersion": "1.0",
          "configurationId": "Config",
          "bucket": {
            "name": "images",
            "ownerIdentity": {
              "principalId": "minio"
            },
            "arn": "arn:aws:s3:::images"
          },
          "object": {
            "key": "rose.jpg",
            "size": 165352,
            "eTag": "c8e032a8b653aebb6b6141d50b5f3cd3",
            "contentType": "image/jpeg",
            "userMetadata": {
              "content-type": "image/jpeg"
            },
            "sequencer": "16E2DCA93DFB0DCC"
          }
        },
        "source": {
          "host": "127.0.0.1",
          "port": "",
          "userAgent": "MinIO (darwin; arm64) minio-go/v7.0.23 mc/RELEASE.2022-02-23T03-15-59Z"
        }
      }
    ]
  }
  ```
## MinIO and Strimzi on OpenShift

Please make sure to use x86 architecture and follow the steps from the Red Hat video below, or similar:

https://www.youtube.com/watch?v=fYVioaEx2HY

The Kafka operator comes from https://strimzi.io/quickstarts/

Steps:
- Start a cluster with CodeReady Containers:

  ```shell
  crc delete
  crc cleanup
  crc setup
  crc start
  ```

  You should see:

  ```
  INFO Checking if running as non-root
  INFO Checking if crc-admin-helper executable is cached
  INFO Checking for obsolete admin-helper executable
  INFO Checking if running on a supported CPU architecture
  INFO Checking minimum RAM requirements
  INFO Checking if running emulated on a M1 CPU
  INFO Checking if HyperKit is installed
  INFO Checking if qcow-tool is installed
  INFO Checking if crc-driver-hyperkit is installed
  INFO Starting CodeReady Containers VM for OpenShift 4.7.18...
  INFO CodeReady Containers instance is running with IP 127.0.0.1
  INFO CodeReady Containers VM is running
  INFO Check internal and public DNS query...
  INFO Check DNS query from host...
  INFO Verifying validity of the kubelet certificates...
  INFO Starting OpenShift kubelet service
  INFO Waiting for kube-apiserver availability... [takes around 2min]
  INFO Starting OpenShift cluster... [waiting for the cluster to stabilize]
  INFO 2 operators are progressing: authentication, operator-lifecycle-manager-packageserver
  INFO 4 operators are progressing: authentication, marketplace, network, operator-lifecycle-manager-packageserver
  INFO Operator authentication is progressing
  INFO 2 operators are progressing: authentication, network
  INFO Operator console is progressing
  INFO 2 operators are progressing: kube-scheduler, network
  INFO 2 operators are progressing: kube-scheduler, network
  INFO Operator kube-scheduler is progressing
  INFO Operator kube-scheduler is progressing
  INFO All operators are available. Ensuring stability...
  INFO Operators are stable (2/3)...
  INFO Operators are stable (3/3)...
  INFO Adding crc-admin and crc-developer contexts to kubeconfig...
  Started the OpenShift cluster.

  The server is accessible via web console at:
    https://console-openshift-console.apps-crc.testing

  Log in as administrator:
    Username: kubeadmin
    Password: 2yajb-6Kz5j-5ijRu-cSiKd

  Log in as user:
    Username: developer
    Password: developer

  Use the 'oc' command line interface:
    $ eval $(crc oc-env)
    $ oc login -u developer https://api.crc.testing:6443
  ```
- Install cert-manager:

  ```shell
  kubectl create -f https://github.com/jetstack/cert-manager/releases/download/v1.6.2/cert-manager.yaml
  ```
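  Before moving on, confirm the cert-manager deployments are up (a simple check; the namespace is the one the manifest creates):

  ```shell
  # All three cert-manager deployments should become Available
  kubectl -n cert-manager wait deployment --all --for=condition=Available --timeout=300s
  ```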
- Install ZooKeeper:

  ```shell
  helm repo add pravega https://charts.pravega.io
  helm repo update
  helm install zookeeper-operator --namespace=zookeeper --create-namespace pravega/zookeeper-operator
  kubectl create --namespace zookeeper -f - <<EOF
  apiVersion: zookeeper.pravega.io/v1beta1
  kind: ZookeeperCluster
  metadata:
    name: zookeeper
    namespace: zookeeper
  spec:
    replicas: 1
  EOF
  ```
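  The operator then brings up a `zookeeper-0` pod; watch it before installing Kafka (a sanity check for the single-replica cluster above):

  ```shell
  # zookeeper-0 should reach Running/Ready before you continue
  kubectl get pods -n zookeeper -w
  ```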
- Create the kafka namespace:

  ```shell
  kubectl create namespace kafka
  ```
- Install the needed CRDs:

  ```shell
  kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
  ```
- Follow the deployment:

  ```shell
  kubectl get pod -n kafka --watch
  ```
- Follow the operator's logs:

  ```shell
  kubectl logs deployment/strimzi-cluster-operator -n kafka -f
  ```
- Apply the Kafka cluster CR file:

  ```shell
  kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
  ```
- Wait for the Kafka cluster to become Ready:

  ```shell
  kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
  ```
- Run a producer to send messages:

  ```shell
  kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9092 --topic my-topic
  ```

  You can send messages, one message per line:

  ```
  If you don't see a command prompt, try pressing enter.
  >message1
  >message2
  >message3
  >
  ```
- Run a consumer to read messages:

  ```shell
  kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
  ```

  You should receive those messages (the initial `LEADER_NOT_AVAILABLE` warnings are expected while topic metadata propagates):

  ```
  If you don't see a command prompt, try pressing enter.
  [2022-04-07 23:39:13,028] WARN [Consumer clientId=console-consumer, groupId=console-consumer-6036] Error while fetching metadata with correlation id 2 : {my-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
  [2022-04-07 23:39:13,185] WARN [Consumer clientId=console-consumer, groupId=console-consumer-6036] Error while fetching metadata with correlation id 4 : {my-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
  [2022-04-07 23:39:13,371] WARN [Consumer clientId=console-consumer, groupId=console-consumer-6036] Error while fetching metadata with correlation id 6 : {my-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
  [2022-04-07 23:39:13,679] WARN [Consumer clientId=console-consumer, groupId=console-consumer-6036] Error while fetching metadata with correlation id 8 : {my-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
  message1
  message2
  message3
  ```
- Install the MinIO Operator as noted in "Installing from OperatorHub", then wait until the installation completes before continuing.
- Create the Tenant:

  ```shell
  oc minio tenant create minio-tenant-1 \
    --servers 1 \
    --volumes 4 \
    --capacity 1Gi \
    --namespace minio-tenant-1 \
    --storage-class local-storage
  ```

  You should see:

  ```
  Tenant 'minio-tenant-1' created in 'minio-tenant-1' Namespace

    Username: admin
    Password: 7ca988c1-2963-4729-9084-4230723fb51b
    Note: Copy the credentials to a secure location. MinIO will not display these again.

  +-------------+------------------------+----------------+--------------+--------------+
  | APPLICATION | SERVICE NAME           | NAMESPACE      | SERVICE TYPE | SERVICE PORT |
  +-------------+------------------------+----------------+--------------+--------------+
  | MinIO       | minio                  | minio-tenant-1 | ClusterIP    | 443          |
  | Console     | minio-tenant-1-console | minio-tenant-1 | ClusterIP    | 9443         |
  +-------------+------------------------+----------------+--------------+--------------+
  ```
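  To reach the Console from your workstation, you can port-forward the service from the table above and log in with the generated credentials (a sketch; an OpenShift route would work equally well):

  ```shell
  # Forward the tenant console locally, then browse to https://localhost:9443
  kubectl port-forward svc/minio-tenant-1-console -n minio-tenant-1 9443:9443
  ```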
- Make sure the PVCs are bound and that each PV is local to your node, to avoid any NodeAffinity issues:
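  A quick way to check both (assuming the tenant namespace used above):

  ```shell
  # All four PVCs should show STATUS=Bound
  kubectl get pvc -n minio-tenant-1
  # Inspect each PV's node affinity and confirm it matches your node
  kubectl describe pv | grep -A 4 "Node Affinity"
  ```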