Kibana 8.x monitoring detect duplicate uuids from different instances #146955
Pinging @elastic/kibana-core (Team:Core)
Pinging @elastic/infra-monitoring-ui (Team:Infra Monitoring UI)
Hi @wasserman, to evaluate the options we have for detecting this misconfiguration, could you share the monitoring documents produced by these cloned instances, if you still have them handy?
This sounds very similar to an issue we recently had in SDH where, due to some file cloning, a bunch of Beats instances ended up with the same UUIDs as well, causing them to overlap.
@klacabane I no longer have access; I had the user regenerate the uuids for the Kibana instances to resolve it. @miltonhultgren I agree. From my observation, the monitoring UI includes the uuid in the URL, so I can see how only one instance will be returned at a given time. This is actually why I think having some way to detect this would be useful, since it is reasonable to expect the user to fix the duplication and avoid cloning uuids. My ask is just to make the problem obvious to the user via the monitoring UI (and possibly an associated alert).
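One way a server-side check could work is a terms aggregation over the monitoring indices, grouping by instance UUID and counting distinct transport addresses: more than one address per UUID suggests cloned instances. Below is a minimal sketch of such a query body; the field names (`kibana_stats.kibana.uuid`, `kibana_stats.kibana.transport_address`) are assumptions based on the `.monitoring-kibana-*` document layout and should be verified against your stack version.

```python
# Sketch of an Elasticsearch aggregation body that flags Kibana UUIDs
# reported by more than one distinct transport address in the last 15 minutes.
# Field names are assumptions; verify them against your monitoring indices.
duplicate_uuid_query = {
    "size": 0,
    "query": {"range": {"timestamp": {"gte": "now-15m"}}},
    "aggs": {
        "by_uuid": {
            "terms": {"field": "kibana_stats.kibana.uuid", "size": 1000},
            "aggs": {
                "distinct_addresses": {
                    "cardinality": {
                        "field": "kibana_stats.kibana.transport_address"
                    }
                },
                # Keep only buckets where one UUID maps to >1 address.
                "duplicates_only": {
                    "bucket_selector": {
                        "buckets_path": {"addresses": "distinct_addresses"},
                        "script": "params.addresses > 1",
                    }
                },
            },
        }
    },
}
```

Any non-empty `by_uuid` bucket list returned by this query would indicate a duplicated uuid, which the monitoring UI (or an alert rule) could surface.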
There was an RFC a while ago that proposed adding a mechanism for detecting incompatible configurations in Kibana deployments, which could address situations like this. But I believe it was put on hold as priorities shifted (cc @azasypkin @rudolf).
Describe the feature:
If multiple Kibana instances have the same uuid, this is not easy to detect. It would be helpful if the monitoring application could make the user aware of this misconfiguration.
Describe a specific use case for the feature:
I was troubleshooting an environment where Kibana monitoring was only showing one Kibana node at a time, seemingly rotating randomly between Kibana hosts. I learned that the Kibana hosts were built by setting up one node and then using a VM cloning technique to roll out the rest. Of course this is not a recommended approach, but it took a lot of time to dig deep enough to understand that all the nodes shared the same uuid. It would be handy if Kibana monitoring had some visual way of showing that multiple hosts share the same uuid during the filtered time frame, providing some visibility. Maybe a hazard symbol, or Kibana health going yellow, stating that this is a misconfiguration, with a recommendation on how to fix it. All I did was remove the uuid file on each node and restart Kibana.
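The detection described above can also be sketched client-side: given the monitoring documents for the filtered time frame, group them by uuid and flag any uuid reported by more than one distinct host. A minimal, self-contained sketch (the document shape here is hypothetical, not the exact monitoring schema):

```python
from collections import defaultdict


def find_duplicate_uuids(docs):
    """Return {uuid: sorted host list} for uuids reported by >1 distinct host.

    Each doc is assumed to carry the instance uuid and the reporting
    host/transport address; this shape is an illustration only.
    """
    hosts_by_uuid = defaultdict(set)
    for doc in docs:
        hosts_by_uuid[doc["uuid"]].add(doc["host"])
    return {u: sorted(h) for u, h in hosts_by_uuid.items() if len(h) > 1}


# Example: three hosts cloned from one image share a uuid.
docs = [
    {"uuid": "aaa", "host": "kb-1:5601"},
    {"uuid": "aaa", "host": "kb-2:5601"},
    {"uuid": "aaa", "host": "kb-3:5601"},
    {"uuid": "bbb", "host": "kb-4:5601"},
]
# find_duplicate_uuids(docs) -> {"aaa": ["kb-1:5601", "kb-2:5601", "kb-3:5601"]}
```

A non-empty result is exactly the condition the proposed UI warning or alert would fire on.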