openstack-scs-1-29-v1 / openstack-scs-1-28-v2 not deployable (cilium issues) #143
Comments
This could be the reason (
It seems that the helm chart should normally check the Kubernetes version using
These should be all relevant parts of the helm chart with checks for Kubernetes
CSO does
Yes, it does work for
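For anyone reproducing this: one way to see what the chart's Kubernetes version checks actually do is to render it locally against different `--kube-version` values and diff the output. This is only a sketch; the chart repo URL is the official cilium one, but the chart/Kubernetes versions and output paths are illustrative:

```bash
# Add the official cilium chart repo (versions below are illustrative)
helm repo add cilium https://helm.cilium.io/
helm repo update

# Render the same chart version against two Kubernetes versions and diff the
# results to see whether the templates' version checks change anything
helm template cilium cilium/cilium --version 1.15.5 --namespace kube-system \
  --kube-version 1.28.11 > /tmp/cilium-k8s-1.28.yaml
helm template cilium cilium/cilium --version 1.15.5 --namespace kube-system \
  --kube-version 1.29.3 > /tmp/cilium-k8s-1.29.yaml
diff /tmp/cilium-k8s-1.28.yaml /tmp/cilium-k8s-1.29.yaml
```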
By the way: it seems that the older Kubernetes 1.28/1.29 openstack-scs releases also do not work, because of a missing security group „0“ according to CSPO. But I guess as soon as the new versions work, the old ones are obsolete anyway.
AFAIK CSPO only cares about node images. What do you mean by security group „0“?
Hi @Nils98Ar, I just tested the creation of the cluster using the main branch of the cluster-stacks repo, built it via csctl, and did not encounter your error. The Kubernetes version is 1.28.11.
@michal-gubricky, what is the state of the ClusterAddon object?
Here are all pods in the kube-system namespace and also the state of the cluster-addon resource:
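For reference, this state can be collected with commands along these lines (a sketch; the ClusterAddon object name and namespace depend on the cluster, so the placeholders are illustrative):

```bash
# Pods of the workload cluster's kube-system namespace
kubectl get pods -n kube-system -o wide
# ClusterAddon is the CSO-managed custom resource; list it to find the right object
kubectl get clusteraddons -A
kubectl describe clusteraddon <cluster-addon-name> -n <cluster-namespace>
```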
As @Nils98Ar wrote, the breaking change was introduced in cilium chart version 1.15.5. The main branch installs version 1.15.2, which is why it works for you, @michal-gubricky. I checked
Yeah, I was just looking at the version in
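To double-check which cilium chart actually ended up on the workload cluster, something like this works (assuming cilium is installed as a Helm release in kube-system):

```bash
# With KUBECONFIG pointing at the workload cluster:
# the CHART column shows the exact cilium chart version that was installed
helm list -n kube-system
# The cilium CLI (if installed) also reports the running agent/operator versions
cilium version
```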
Hi @janiskemper, can you please take a look? IMO we have three options here:
/kind bug
What steps did you take and what happened:
Create an `openstack-scs-1-29-v1` or `openstack-scs-1-28-v2` cluster. The cluster deployment gets stuck at 3/3 worker nodes and 1/3 control plane nodes. All nodes remain in the status `NotReady`.
The nodes do not get an internal IP:
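The symptom can be confirmed with commands like these (the node name is a placeholder):

```bash
kubectl get nodes -o wide              # INTERNAL-IP column stays empty for the affected nodes
kubectl describe node <node-name>      # node conditions show why the kubelet reports NotReady
```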
Different pods have the following line in their logs:
One of the first errors in the nodes' `/var/log/syslog` is:
The directory `/etc/cni/net.d` is empty on the nodes.
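A sketch of how this was checked (the DaemonSet name and label assume the standard cilium chart deployed to kube-system):

```bash
# On an affected node: without a CNI config the kubelet keeps the node NotReady
ls -la /etc/cni/net.d/
# From a workstation: verify whether the cilium DaemonSet/pods were created at all
kubectl get daemonset cilium -n kube-system
kubectl get pods -n kube-system -l k8s-app=cilium -o wide
```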
What did you expect to happen:
The cluster is created successfully and is usable.