Update: latencies.kong description #7145

Merged 1 commit into from Apr 4, 2024
12 changes: 10 additions & 2 deletions app/_includes/md/plugins-hub/json-object-log.md

@@ -5,15 +5,23 @@
* `request`: Properties about the request sent by the client.
* `response`: Properties about the response sent to the client.
* `latencies`: Latency data.
{% if_plugin_version gte:3.7.x %}
* `kong`: The internal {{site.base_gateway}} latency, in milliseconds, spent processing the request.
* For requests that are proxied to an upstream, it is equivalent to the `X-Kong-Proxy-Latency` [response header](/gateway/latest/reference/configuration/#headers).
* For requests that generate a response within {{ site.base_gateway }} (typically the result of an error or a plugin-generated response), it is equivalent to the `X-Kong-Response-Latency` [response header](/gateway/latest/reference/configuration/#headers).
* `request`: The time in milliseconds that has elapsed between when the first bytes were read from the client and the last byte was sent to the client. This is useful for detecting slow clients.
* `proxy`: The time in milliseconds that it took for the upstream to process the request. In other words, it's the time elapsed between transferring the request to the final service and when {{site.base_gateway}} starts receiving the response.
* `receive`: The time in milliseconds that it took to receive and process the response (headers and body) from the upstream.
{% endif_plugin_version %}
{% if_plugin_version lte:3.6.x %}
* `kong`: The internal {{site.base_gateway}} latency, in milliseconds, spent processing the request. It varies based on the actual processing flow. Generally, it consists of three parts:
* The time it took to find the right upstream.
* The time it took to receive the whole response from upstream.
* The time it took to run all plugins executed before the log phase.
* `request`: The time in milliseconds that has elapsed between when the first bytes were read from the client and the last byte was sent to the client. This is useful for detecting slow clients.
* `proxy`: The time in milliseconds that it took for the upstream to process the request. In other words, it's the time elapsed between transferring the request to the final service and when {{site.base_gateway}} starts receiving the response.
{% if_plugin_version gte:3.7.x %}
* `receive`: The time in milliseconds that it took to receive and process the response (headers and body) from the upstream.
{% endif_plugin_version %}
* `tries`: A list of iterations made by the load balancer for this request.
* `balancer_start`: A Unix timestamp for when the balancer started.
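To make the field descriptions above concrete, here is a minimal sketch of consuming the `latencies` object from a log entry shaped like the 3.7.x description. The field values below are illustrative, not taken from a real {{site.base_gateway}} log.

```python
import json

# Illustrative log fragment; values are made up, structure follows
# the 3.7.x field descriptions above.
sample_log = json.loads("""
{
  "latencies": {
    "kong": 4,
    "proxy": 112,
    "request": 123,
    "receive": 7
  }
}
""")

latencies = sample_log["latencies"]
# kong:    internal Kong Gateway processing time, in milliseconds
# proxy:   time the upstream took to process the request
# request: first byte read from the client to last byte sent back
# receive: time to receive and process the upstream response
print(latencies["kong"], latencies["proxy"],
      latencies["request"], latencies["receive"])
```

Note that `request` is measured from the client's perspective, so it is not simply the sum of the other fields; a slow client can inflate it independently of `kong`, `proxy`, and `receive`.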