inbound fees: collect and display stats #113
Conversation
Name: inboundFeeBaseName,
Help: inboundFeeBaseHelp,
Maybe there's a better way to avoid the constants and extract the name and help text from the GraphCollector instance?
Ah as attributes? I think the constants fit better since they bind the name, label, etc.
I was more looking at the pattern that we use for other values:

ch <- prometheus.MustNewConstMetric(
    g.minTimelockDeltaDesc, prometheus.GaugeValue,
    timelockReport.min,
)

where we take the description, which is defined in a single place in NewGraphCollector, but in this PR the constants are referenced in two places. We could use prometheus.MustNewConstHistogram to follow that pattern, but that seems harder to use since we would need to keep track of the buckets ourselves.
This lets us collect inbound fee metrics from the graph.
Force-pushed from 127cd89 to b6c4352.
Looking pretty good!
Just one question about the initial range of one of the histogram buckets.
collectors/graph_collector.go (outdated)
Name: inboundFeeRateName,
Help: inboundFeeRateHelp,
Buckets: prometheus.ExponentialBuckets(
    1, 2, 13, // 1 to 8192 PPM
),
Should we increase this upper range? IIUC this goes up to 0.80% or 80 bps?
👍 I put some random number here initially, but I just checked that the 99th percentile at the moment is 10000 PPM. I bumped it up to an exponent of 15, giving 32768 PPM (~3%).
We collect non-zero inbound base fees and fee rates in terms of histograms to track the adoption and values set. The histograms are separated into positive and negative portions.
Force-pushed from b6c4352 to f7938f7.
Thank you for taking a look! Changed that initial range and also cross-checked that the number of negative inbound fee rate policies matches a manual evaluation.
@Roasbeef: review reminder
LGTM 🥀
Adds support for tracking the adoption of inbound fees. This is done by collecting non-zero inbound fees from advertised gossip channel updates (via the custom records field) in the form of histograms, with separate positive and negative portions for base fees and fee rates. This lets us analyze negative/positive inbound fee percentiles as well as the overall number of policies that set inbound fees.
Fixes #107. (I have not added a gauge for the total number of channel updates, since I think this can be read off the highest histogram bucket; see the dashboard.)
Todo: