Failed to create single node cluster per getting started instructions #5858
Replies: 8 comments 5 replies
-
The error suggests that the pod is unable to use the storage. Your storage might require you to configure a specific fsGroup or UID in the security context. You can do that in the Kafka custom resource: https://strimzi.io/docs/operators/latest/full/using.html#type-PodTemplate-reference. But what the right values are depends on your environment.
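For example, the security context goes into the pod template of the Kafka custom resource. The snippet below is only a sketch; the cluster name and the fsGroup value 0 are illustrative placeholders, and the right value depends on your storage:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ... rest of the broker configuration ...
    template:
      pod:
        securityContext:
          fsGroup: 0   # illustrative; match the group that owns your volumes
  zookeeper:
    # ... rest of the zookeeper configuration ...
    template:
      pod:
        securityContext:
          fsGroup: 0
```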
-
Hi.
Thanks for the fast reply.
I’ve not done anything special in setting up my storage, except of course creating my machines with VirtualBox.
These are vanilla Fedora VMs.
I’ve already been successful at getting nginx up and running in my k8s cluster, and I didn’t do anything special with storage there.
How might I isolate the specific issue? What experiments would tell me what the "environment issue" actually is?
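(A typical way to narrow this down, assuming kubectl access to the cluster; the pod and namespace names below are placeholders, not from this thread:)

```shell
# Inspect the pod's events -- they usually name the volume/permission error
kubectl describe pod my-cluster-kafka-0 -n kafka

# Logs from the last failed container run
kubectl logs my-cluster-kafka-0 -n kafka --previous

# Check whether the PVC was actually bound, and by which storage class
kubectl get pvc -n kafka
```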
-
Thanks Jakub.
I’m pretty sure I used local storage. Since this is dev, ephemeral works for now.
Awesome response. I’ll check this morning and follow up.
On Nov 9, 2021, at 4:44 AM, Jakub Scholz ***@***.***> wrote:
Well, Kubernetes does not offer persistent storage out of the box, so you had to install something to handle storage provisioning. That is probably what you need to look at to see what it requires.
Another way might be to find out what the permissions are on the volume that was created; once you know that, you might be able to set the UID / fsGroup to the right values.
Setting the UID to 0 (root) might also help you:
securityContext:
  runAsUser: 0
But it is obviously less secure.
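In the Strimzi Kafka resource, that security context would sit in the pod template; a sketch with the surrounding spec abbreviated:

```yaml
spec:
  kafka:
    template:
      pod:
        securityContext:
          runAsUser: 0   # run the broker as root; insecure, dev-only workaround
```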
-
Btw, typically when I set up an app to run on K8s I do take an explicit action to configure storage.
I didn’t do that with Strimzi only because I was following the getting-started instructions, and I must have missed the storage setup guidance.
I’m actually fine with running as user 0 for now; I’ll add that and see what happens.
I’m looking for the truly fastest start for this small dev cluster.
-
So this is the YAML I used. I see its reference to the persistent-claim. I assumed that the operator would create the PV and the associated PVC with default values. To get started, I’m pretty sure all I want is an ephemeral local PV, along with running as root. I only need a single-broker Kafka server.
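A minimal single-broker setup with ephemeral storage avoids PVs and PVCs entirely; a sketch, modeled on the Strimzi ephemeral example (the cluster name is a placeholder):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral   # emptyDir-backed; data is lost when the pod restarts
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```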
-
Ok, I am recalling more about how I set up my "vanilla cluster". It’s been a couple of months, and I’m drawing on a 61-year-old memory. ;) I used Rook/NFS to set up my storage for nginx. (I’d have used Rook/Ceph, but didn’t have the spare drives.) I’ve got an auto-provisioner in place, so as long as I can point you to the rook-nfs-share1 storage class, the PV should get auto-provisioned.
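If the auto-provisioner works, pointing Strimzi at that storage class should just be a matter of setting `class` on the persistent-claim storage; a sketch (the size is illustrative):

```yaml
spec:
  kafka:
    storage:
      type: persistent-claim
      size: 10Gi
      class: rook-nfs-share1   # storage class served by the Rook/NFS provisioner
```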
-
If need be, I’ll get Rook/Ceph up and running. But I’m just trying to get Kafka running in its simplest, insecure, unreliable form. :) To your point, I do see some verbiage on the internet.
-
Thanks Jakub. Your feedback and responsiveness are much appreciated. For now I’ve been running Kafka on "bare metal" (a VM), and I’ll likely live with that. Based on your feedback, what might make sense is to wait until I upgrade my hardware (right now my entire cluster runs on a single laptop with 16 GB RAM and 32 vCPUs) and get a k8s cluster with Rook/Ceph before I start running Kafka on my dev k8s cluster.
-
I'm following these instructions:
When I ran the command to wait, I found that it timed out (as the instructions said it might). However, I also found that the following pod was in CrashLoopBackOff.
I’ve got a vanilla Kubernetes cluster at version 1.21.