Global HTTP Agent for Kibana to ES communication #99736
Pinging @elastic/kibana-core (Team:Core) |
I suppose the migration from the legacy client addresses this problem? Maybe we should focus on #83910? The migration also addresses performance and o11y concerns. |
It might; however, it'd be great to fix this issue before 8.0... To be honest, I've forgotten the details. I recall that either the legacy or the new client also didn't reuse the same HTTP Agent in all scenarios. |
We are planning to remove the legacy ES client by v7.16. Let's keep the issue open to investigate whether the new ES client reuses the same HTTP agent. |
@kobelb can we close the issue, as there is no legacy client in the Kibana codebase anymore?
@mshustov I'm still seeing multiple HTTP Agents being constructed whenever we instantiate a new instance of `ClusterClient`:
kibana/src/core/server/elasticsearch/client/cluster_client.ts, lines 69 to 74 @ ec0f582
We're definitely better off than we were before, but the problem isn't entirely resolved. |
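(A minimal sketch of the behavior described above, assuming the v7-era `@elastic/elasticsearch` client, where each `new Client()` builds its own connection pool; the node URL is illustrative, not Kibana code:)

```ts
import { Client } from '@elastic/elasticsearch';

// Each `new Client()` constructs its own connection pool, and each
// connection creates its own HTTP Agent, so these two clients never
// share sockets even though they target the same node.
const clientA = new Client({ node: 'http://localhost:9200' });
const clientB = new Client({ node: 'http://localhost:9200' });
```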
Is this really an issue, given that these agents share the same connection pool? |
They are? It's my understanding that each HTTP Agent is a connection pool. So, if we have multiple HTTP Agents, we have multiple connection pools. Granted, I only saw 3 instances of the HTTP Agent being created for communication with Elasticsearch, so the situation isn't abysmal, but it could be improved. |
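(To illustrate the point with plain Node.js rather than Kibana code: two keep-alive agents pointed at the same origin each maintain their own socket pool.)

```ts
import https from 'https';

// Each Agent instance is an independent socket pool: the request made
// through agentB cannot reuse the warm TLS socket that agentA opened.
const agentA = new https.Agent({ keepAlive: true });
const agentB = new https.Agent({ keepAlive: true });

https.get('https://example.com', { agent: agentA }, (res) => res.resume());
https.get('https://example.com', { agent: agentB }, (res) => res.resume());
// agentA.sockets and agentB.sockets are tracked independently.
```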
There are distinct … cc @delvedor, could you help us here?
If the second point is confirmed, what would be the easiest option, in your opinion, to have all of our clients share the same connection pool? A custom … |
From the docs: […]
They don't. https://github.com/elastic/elasticsearch-js/blob/main/src/client.ts#L220
Should we? Different clients can connect to different hosts, thus sockets won't be reused.
We can share a connection pool for the Core clients, though (by using the `.child` API, for example; see the sketch below):
kibana/src/core/server/elasticsearch/client/cluster_client.ts, lines 68 to 69 @ 34bfbbf
|
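(A minimal sketch of the `.child` approach mentioned above; the auth header value is a placeholder. A client returned by `client.child()` shares its parent's connection pool instead of creating a new one:)

```ts
import { Client } from '@elastic/elasticsearch';

const rootClient = new Client({ node: 'http://localhost:9200' });

// child() inherits the parent's configuration and, crucially here,
// reuses the parent's connection pool, so scoped clients share sockets.
const scopedClient = rootClient.child({
  headers: { authorization: 'ApiKey <redacted>' },
});
```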
Everything that @mshustov said. As a result, you will see an agent instance per connection created. It's also worth mentioning that the internal http agent uses a … |
It's a good question. By default, all outbound HTTP requests that are made using the … However, if the user has configured the monitoring ES hosts to be different from the normal ES hosts, it doesn't make much sense to reuse the same sockets. |
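(One way to express the "same hosts, same sockets" idea: hand both clients a single keep-alive agent through the v7 client's `agent` option, which accepts a function returning an `http.Agent`. A sketch under those assumptions, with illustrative hosts:)

```ts
import https from 'https';
import { Client } from '@elastic/elasticsearch';

// One shared keep-alive agent: both clients talk to the same ES hosts,
// so their requests can reuse the same warm sockets.
const sharedAgent = new https.Agent({ keepAlive: true, maxSockets: 256 });

const esClient = new Client({ node: 'https://es:9200', agent: () => sharedAgent });
const monitoringClient = new Client({ node: 'https://es:9200', agent: () => sharedAgent });
```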
@gsoldevila, I think you already implemented this, right? Can we close this issue? |
Yes, even from monitoring we are calling … AFAIK each Agent has one socket pool per origin; that's why it accepts a `maxSockets` (per origin) and also a `maxTotalSockets` (global, all origins), so I think we're good to close this one.
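(The per-origin vs. global caps mentioned above, in plain Node.js; `maxTotalSockets` exists since Node 14.5:)

```ts
import https from 'https';

const agent = new https.Agent({
  keepAlive: true,
  maxSockets: 100,      // cap per origin (host:port)
  maxTotalSockets: 400, // cap across all origins this agent serves
});
```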
When investigating Kibana's performance for Fleet, I noticed that the legacy and the new Elasticsearch clients each handled the HTTP agent a bit differently and they each had their own HTTP agent.
As a result, we ended up with at least two separate pools of available sockets: one pool for the legacy client and one for the new client. When Kibana's communication with Elasticsearch occurs over TLS, there's rather significant overhead to establishing these sockets, so the more often we can reuse an already established socket instead of establishing a new one, the better.
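(A rough way to observe the handshake overhead described above, using plain Node.js rather than Kibana code; `request.reusedSocket` reports whether a keep-alive socket was reused:)

```ts
import https from 'https';

const agent = new https.Agent({ keepAlive: true });

function timedGet(label: string): Promise<void> {
  return new Promise((resolve) => {
    const start = Date.now();
    const req = https.get('https://example.com', { agent }, (res) => {
      res.resume();
      res.on('end', () => {
        // The second call skips the TCP + TLS handshake entirely.
        console.log(`${label}: ${Date.now() - start}ms (reused socket: ${req.reusedSocket})`);
        resolve();
      });
    });
  });
}

timedGet('cold').then(() => timedGet('warm'));
```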
I started on #79667 to address this issue, but I haven't found the time to push it across the line...