[WFLY-19359] define kafka yaml file for OpenShift and update the README. #929
Conversation
Hi @kstekovi. Thanks for your PR. I'm waiting for a wildfly member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The CI should also be updated to use this file.
I see that in the CI code for both, there is a 'pause' between creating the cluster and creating the topic. But I don't know if that is actually needed or not.
For OpenShift, I am pretty sure the break between creating the subscription and the cluster is needed. Unless it handles things differently for some reason when using one file.
The Kubernetes part is not in the README yet, it is part of #919 (@emmartins )
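The 'pause' the CI uses could be made deterministic with a retry loop that re-runs the apply until it succeeds. A minimal sketch — the `retry` helper is mine, not part of the quickstart's scripts, and the file name shown in the usage comment is the one discussed in this PR:

```shell
#!/bin/sh
# Retry a command until it succeeds or the attempt budget is exhausted.
retry() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep "${RETRY_DELAY:-5}"
  done
}

# Usage against a cluster: keep re-applying the chart until the operator
# has registered its CRDs and the apply finally succeeds, e.g.:
#   retry 60 oc apply -f charts/kafka-on-openshift.yaml
```

This trades the fixed sleep for a bounded wait that exits as soon as the apply goes through.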
Force-pushed from 59a223d to 781a68f
Thanks for the review @kabir. Could you please take a look again? The CI is updated.
Hi,
The changes look good, but it doesn't run :-)
If you look at the original OpenShift setup test script, it used to install the operator and then wait for the AMQ Streams operator to start and make the CRDs available. That was done here https://github.com/wildfly/quickstart/pull/929/files#diff-3002e707de1a81994b7bfc2d03e271afd1e186c1d8ee4c8ae7c70992bbddd5aaL62-L65. Later there is a check of the exit value from the oc calls to create the cluster; once the oc call to create the cluster exits with 0, we can proceed.
As it is now, if I try to apply the file it fails when trying to install the cluster:
% oc apply -f ./charts/kafka-on-openshift.yaml --wait --timeout=10m0s
subscription.operators.coreos.com/amq-streams unchanged
resource mapping not found for name: "my-cluster" namespace: "" from "./charts/kafka-on-openshift.yaml": no matches for kind "Kafka" in version "kafka.strimzi.io/v1beta2"
ensure CRDs are installed first
resource mapping not found for name: "testing" namespace: "" from "./charts/kafka-on-openshift.yaml": no matches for kind "KafkaTopic" in version "kafka.strimzi.io/v1beta2"
ensure CRDs are installed first
So I think this needs to be two yamls: one for the subscription, and one for the cluster/topic. Once I wait a bit after the first attempt, the whole script applies cleanly:
% oc apply -f ./charts/kafka-on-openshift.yaml --wait --timeout=10m0s
subscription.operators.coreos.com/amq-streams unchanged
kafka.kafka.strimzi.io/my-cluster created
kafkatopic.kafka.strimzi.io/testing created
It could possibly be an option to keep it as one yaml, but I think we will need to point out to the user that failures are expected until the operator CRDs are available, and to keep retrying.
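With the file split in two, the wait between them can also be bounded by blocking on the Kafka CRD becoming established instead of sleeping for an arbitrary time. A sketch, assuming hypothetical split file names `charts/kafka-subscription.yaml` and `charts/kafka-cluster.yaml` (not the PR's actual names) and Strimzi's `kafkas.kafka.strimzi.io` CRD:

```shell
#!/bin/sh
# Install the operator subscription, wait for its CRDs, then create the
# cluster and topic. File and CRD names are assumptions for illustration.
install_kafka() {
  oc apply -f charts/kafka-subscription.yaml || return 1
  # The CRD object may not exist immediately after the subscription is
  # created, so poll for it before asking oc to wait on its condition.
  until oc get crd kafkas.kafka.strimzi.io >/dev/null 2>&1; do
    sleep "${POLL_DELAY:-5}"
  done
  oc wait --for=condition=Established crd/kafkas.kafka.strimzi.io --timeout=10m || return 1
  oc apply -f charts/kafka-cluster.yaml --wait --timeout=10m0s
}
```

Calling `install_kafka` from the setup script would replace the fixed pause with a wait that ends as soon as the operator is ready.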
Hi, this didn't happen on my cluster. I also tried a different one and hit this problem, so I split the resource definitions into two yaml files which are processed separately, as you suggested. Thanks.
Hi @kstekovi A few small changes needed, but apart from that I've verified that both the README instructions and running the tests work against a Clusterbot instance.
So I think once you make the fixes I suggested, this can be merged.
@@ -1,67 +1,46 @@
=== Install AMQ Streams on OpenShift

The functionality of this quickstart depends on a running instance of the
-https://access.redhat.com/products/red-hat-amq#streams[AMQ Streams] Operator. AMQ Streams is a Red Hat project based on Apache Kafka. To deploy AMQ Streams in the Openshift environment:
+https://access.redhat.com/products/red-hat-amq#streams[AMQ Streams] Operator. AMQ Streams is a Red Hat project based
+on Apache Kafka. To deploy AMQ Streams in the Openshift environment:
I think with Asciidoc, it is better not to add line breaks (your IDE should have an option to soft-wrap long lines).
line break removed.
----

-Although the above commands will return pretty immediately, your AMQ Streams instance will not be available until its entity operator is up and running. The name of the pod will be of the format `my-cluster-entity-operator-xxxxxxxxx-yyyyy`.
+Although the above commands will return pretty immediately, your AMQ Streams instance will not be available until its
+entity operator is up and running. The name of the pod will be of the format `my-cluster-entity-operator-xxxxxxxxx-yyyyy`.
Remove the line break
line break removed.
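The README passage in this hunk says the instance is unusable until the `my-cluster-entity-operator-xxxxxxxxx-yyyyy` pod is running. Rather than watching generated pod names, a script can block on the deployment behind them; a sketch, assuming the deployment follows the `<cluster>-entity-operator` naming convention (the helper name is mine, not part of the quickstart):

```shell
#!/bin/sh
# Block until the AMQ Streams entity operator for the given cluster is
# available; defaults to the README's "my-cluster" cluster name.
wait_for_entity_operator() {
  cluster=${1:-my-cluster}
  oc wait "deployment/${cluster}-entity-operator" --for=condition=Available --timeout=10m
}
```

`wait_for_entity_operator` (or `wait_for_entity_operator other-cluster`) then returns as soon as the deployment reports Available, or fails after the timeout.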
https://issues.redhat.com/browse/WFLY-19408
Define the AMQ Streams operator, instance, and topic in a yaml file for OpenShift and update the README.