High latency. #24
Hi @axot, can you dump the request/server latency from the stats? It makes no sense that this has higher latency than twitter/twemproxy, since we didn't modify the core flow of proxy forwarding.
Hello, we did not import the twemproxy metrics exporter at that time; this is what I can share with you. The graph shows the number of PHP-FPM processes in each container: when twemproxy hit high latency, the process count increased sharply. The config is the same as the original's, and we did not reload it.
Hi @axot, the twemproxy metrics were a counter and a histogram; you can also find clues about why twemproxy got high latency from them, if you haven't restarted it. The request histogram is the latency measured from the client side (user latency), and the server histogram is the latency from the Redis server (the latency between twemproxy and Redis). The buckets were
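The fork's actual exporter code and bucket boundaries are not shown in this thread, so the boundaries, names, and helper in the sketch below are illustrative assumptions only. It just shows the request/server split described above: one histogram for the latency the client sees through the proxy, one for the latency between the proxy and Redis.

```python
import bisect

# Hypothetical bucket boundaries in milliseconds; the fork's real buckets
# were not preserved in this thread.
BUCKETS_MS = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0, 500.0, 1000.0]

class LatencyHistogram:
    """Counts observations per bucket; the last slot collects everything larger."""
    def __init__(self, buckets=BUCKETS_MS):
        self.buckets = list(buckets)
        self.counts = [0] * (len(self.buckets) + 1)

    def observe(self, latency_ms):
        # bisect_left puts a value equal to a bound into that bound's bucket.
        self.counts[bisect.bisect_left(self.buckets, latency_ms)] += 1

# Two histograms, mirroring the split described above:
#   request -> latency seen by the client through the proxy
#   server  -> latency between the proxy and the Redis backend
request_hist = LatencyHistogram()
server_hist = LatencyHistogram()

def record(t_client_recv, t_sent_to_redis, t_redis_reply, t_client_reply):
    """Record one proxied request from four timestamps (in seconds)."""
    server_hist.observe((t_redis_reply - t_sent_to_redis) * 1000.0)
    request_hist.observe((t_client_reply - t_client_recv) * 1000.0)
```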
I'd like to ask how you calculate these latencies. I've always wanted to get latency data for Memcache. Can I get it through twemproxy statistics?
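For context: upstream twemproxy exposes only counters and gauges (requests, responses, timeouts, and so on) as a JSON snapshot on its stats port (22222 by default); it has no latency histogram, so Memcache latency would have to be measured on the client side unless you run a build that adds an exporter like the one discussed here. If the raw stats are enough, a minimal sketch of reading that snapshot might look like the following; the host, port, and exact field names depend on your deployment and twemproxy version.

```python
import json
import socket

def fetch_twemproxy_stats(host="127.0.0.1", port=22222, timeout=2.0):
    """Read the JSON stats snapshot that twemproxy serves on its stats port."""
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        try:
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass  # tolerate builds that keep the connection open after sending
    return json.loads(b"".join(chunks).decode("utf-8"))

if __name__ == "__main__":
    stats = fetch_twemproxy_stats()
    # Per-pool and per-server entries are nested under the pool name; the
    # exact counter names vary by version.
    print(json.dumps(stats, indent=2))
```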
I tested this version in our large-scale load test. Compared to the original twitter/twemproxy, it showed unstable, high latency and caused many timeouts.
The config is the same as when we were using the original one, except for the global section.