Vector sink invalid value for `component_sent_bytes_total` metric #20610
Comments
Hi @nmiculinic! Thanks for opening this! I think there is some understandable confusion here. We would like to implement a new metric that indicates the network bytes (described here), but this is not yet implemented. Does that help clear things up?
@jszwedko yes it does, since I observe the same for the Kafka sink as well. I'm reading the spec, and: given that the Vector sink uses gRPC, which is an HTTP-based protocol, shouldn't the metric be measured after compression, the same as for HTTP? Similarly, Kafka is a TCP-based protocol, and I'd assume it'd be the same. For the ClickHouse sink (a TCP-based protocol) I do see this behave as expected.
Ah, good catch. The spec contradicts itself there. I believe the bit about matching
Closing this since the spec has been updated. Let me know if you have any other questions!
Problem
I am analysing network throughput for the Vector pods. When querying Prometheus for the rate of `component_sent_bytes_total`, I get rates of around 8 MB/s.
However, when I look at the Kubernetes pod network transmit bandwidth, the rates are much smaller, around ~150 kB/s.
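For comparison, assuming Vector's internal metrics and cAdvisor pod metrics are both scraped by the same Prometheus, the two rates can be placed side by side with queries along these lines (metric names and label selectors are illustrative and depend on scrape configuration):

```promql
# Bytes the sink reports having sent (currently pre-compression)
sum(rate(vector_component_sent_bytes_total{component_kind="sink"}[5m]))

# Bytes actually leaving the pod's network interface (cAdvisor)
sum(rate(container_network_transmit_bytes_total{pod=~"vector-.*"}[5m]))
```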
I'm using https://vector.dev/docs/reference/configuration/sinks/vector with compression enabled, and I'd assume these are the raw bytes sent on the network interface. That appears to be true (or at least plausible) for the HTTP sink https://vector.dev/docs/reference/configuration/sinks/http/, but not for the Vector one. It seems the values are pre-compression rather than post-compression.
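A pre- vs post-compression discrepancy of this size is plausible for repetitive log data. The sketch below (a standalone illustration, not Vector code; the event shape is hypothetical) shows how much a gzip-compressed batch can shrink relative to its serialized size, which is the gap a pre-compression byte counter would not reflect:

```python
import gzip
import json

# Hypothetical batch of log events, similar in shape to what a sink
# might serialize before sending.
events = [{"message": f"event {i}", "level": "info"} for i in range(1000)]
payload = json.dumps(events).encode("utf-8")

compressed = gzip.compress(payload)

# A metric counting pre-compression bytes grows by len(payload) per
# batch, while the network interface only carries roughly
# len(compressed) bytes.
print(f"raw: {len(payload)} bytes, compressed: {len(compressed)} bytes")
print(f"ratio: {len(payload) / len(compressed):.1f}x")
```

Highly repetitive JSON compresses well, so a large ratio between the metric and the observed network bandwidth is consistent with the counter being incremented before compression.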
Configuration
Version
0.38.0
Debug Output
No response
Example Data
No response
Additional Context
No response
References
No response