[SVLS-3102] Send logs and metrics from the Lambda Extension to Vector/OPW #20640
Conversation
Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 496a4b1f-289b-4cba-bdc0-daacb7787821

Explanation

A regression test is an integrated performance test. Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval. We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:
The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
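The decision rule described above can be sketched as follows. This is an illustrative reimplementation of the two thresholds (±5.00% change in mean optimization goal, coefficient of variation above 0.1), not the Regression Detector's actual code; the confidence-interval estimation is omitted, and the function names and sample data are made up for the example.

```python
import statistics

def delta_mean_pct(baseline, comparison):
    """Percentage change of the comparison mean relative to the baseline mean ("Δ mean %")."""
    b = statistics.mean(baseline)
    c = statistics.mean(comparison)
    return (c - b) / b * 100.0

def is_erratic(samples):
    """An experiment is erratic if its coefficient of variation exceeds 0.1."""
    return statistics.stdev(samples) / statistics.mean(samples) > 0.1

def is_interesting(baseline, comparison):
    """Flag a result only if |Δ mean %| exceeds 5.00% or the comparison run is erratic."""
    return abs(delta_mean_pct(baseline, comparison)) > 5.0 or is_erratic(comparison)
```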
Thanks @DylanLovesCoffee for working on this PR. Out of curiosity, I can see this PR will handle logs and metrics, but is it planned to handle traces as well? If yes, in this PR or another one?
…/OPW (#20640) use OPW config when building logs and metrics endpoints in serverless
What does this PR do?
Allows the Lambda Extension to send logs and metrics to Vector when configured. Works with either `DD_VECTOR_*_URL` or `DD_OBSERVABILITY_PIPELINES_WORKER_*_URL`.

🚨 Note: logs forwarding was tested successfully with a custom build of Vector. To perform as expected, a new release of Vector with the appropriate changes is required.
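As a sketch, resolving the endpoint from these variables could look like the snippet below. This is a hypothetical illustration, not the Agent's actual Go code: the expanded variable names (`*` replaced with `LOGS`), the OPW-over-Vector precedence, and the placeholder default are all assumptions for the example.

```python
import os

def resolve_logs_endpoint(default="<default Datadog intake>"):
    """Return the logs endpoint URL, preferring the OPW variable, then the
    Vector variable, then a placeholder default. Precedence is assumed, not
    confirmed by the PR."""
    for var in ("DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL",
                "DD_VECTOR_LOGS_URL"):
        url = os.environ.get(var)
        if url:
            return url
    return default
```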
Motivation
Allow the Extension to work with observability pipeline workers. Prior to this change, DD OPW and Vector configs were not recognized at all.
DataDog/datadog-lambda-extension#174
Additional Notes
Example of logs and metrics from the Lambda Extension -> Vector -> DD, marked with `sender:vector` by way of the Vector transform config.

Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Tested manually + will be deployed as an RC to self-monitoring.
Reviewer's Checklist
- Triage milestone is set.
- `major_change` label if your change either has a major impact on the code base, impacts multiple teams, or changes important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a release note.
- `changelog/no-changelog` label has been applied.
- `qa/skip-qa` label is not applied.
- `team/..` label has been applied, indicating the team(s) that should QA this change.
- `need-change/operator` and `need-change/helm` labels have been applied.
- `k8s/<min-version>` label, indicating the lowest Kubernetes version compatible with this feature.