Freshness tracker should fail a cluster iteration if all partitions for all consumers fail #51
jyates added a commit that referenced this issue on Jun 29, 2022:
Across all the partitions for all topics for all consumers for a given cluster, if we succeed at reading the freshness for at least one of the consumers, then the cluster freshness check succeeds. Without this, if none of the consumers could be evaluated successfully, the cluster would still be marked successful, but that often indicated an incorrectly configured cluster. However, if we are able to read even a single partition, then we can reach the cluster; the failures may be transient, so we should be allowed another round to try to get more successes. Addresses #51
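For illustration, here is a minimal sketch of that aggregation rule. It uses hypothetical ConsumerResult and clusterSucceeded names rather than the actual ConsumerFreshness classes: if any consumer reports at least one successfully measured partition, the iteration passes; otherwise it is marked failed.

```java
import java.util.List;

// Hypothetical result holder for one consumer; the real ConsumerFreshness types may differ.
class ConsumerResult {
  final String consumer;
  final int partitionsMeasured; // partitions whose freshness was read successfully

  ConsumerResult(String consumer, int partitionsMeasured) {
    this.consumer = consumer;
    this.partitionsMeasured = partitionsMeasured;
  }

  boolean succeeded() {
    return partitionsMeasured > 0;
  }
}

class ClusterIteration {
  /**
   * Succeed if at least one partition of at least one consumer was measured.
   * If every consumer failed, the cluster is very likely misconfigured and
   * the whole iteration should be reported as failed.
   */
  static boolean clusterSucceeded(List<ConsumerResult> results) {
    return results.stream().anyMatch(ConsumerResult::succeeded);
  }
}
```

The rule is deliberately lenient: a single readable partition proves the cluster is reachable, so sporadic per-consumer errors are left for the next round rather than failing the whole iteration.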
Currently, we are very generous with the failure constraints for a cluster; see ConsumerFreshness, lines 281-293.
However, SSL connection issues (i.e. a misconfiguration) only show up when querying the consumers. So you can have a valid Burrow lookup for the cluster (because Burrow is configured correctly) while freshness fails for every consumer because the tracker is misconfigured. You would never know, though, from the kafka_consumer_freshness_last_success_run_timestamp metric, since it does not register those failures.
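To make the monitoring consequence concrete, here is a sketch of how the last-success gauge behaves. It assumes the Prometheus Java simpleclient and a hypothetical recordRun() hook, not the tracker's actual wiring:

```java
import io.prometheus.client.Gauge;

class FreshnessRunMetrics {
  // Hypothetical registration of the metric named above.
  static final Gauge LAST_SUCCESS = Gauge.build()
      .name("kafka_consumer_freshness_last_success_run_timestamp")
      .help("Timestamp of the last successful freshness run")
      .register();

  /**
   * Only an iteration that counts as successful bumps the gauge. Because
   * consumer-level failures (e.g. an SSL misconfiguration) do not currently
   * fail the iteration, they never show up in this metric; failing the
   * iteration when every consumer fails would make the problem visible.
   */
  static void recordRun(boolean clusterSucceeded) {
    if (clusterSucceeded) {
      LAST_SUCCESS.setToCurrentTime();
    }
  }
}
```

In other words, this metric can only surface a fully broken cluster once the iteration itself is allowed to fail, which is what the commit above changes.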