feat: add support for pointing lambda metrics to generic downstream o… #26

Merged · 2 commits · May 28, 2024
Changes from 1 commit
78 changes: 77 additions & 1 deletion src/goodmetrics/metricsSetups.ts
@@ -49,6 +49,41 @@
onSendUnary?: (metrics: Metrics[]) => void;
}

interface RawNativeLambdaOtlpForLambdaProps {
/**
* Programmatic access token for the OTLP metric backend
*/
accessToken: string;
/**
* Name of the header to use for authentication. Ex. `api-token`
*/
authHeaderName: string;
/**
* Resource dimensions included on the OTLP Resource. Ex. AWS_REGION, ACCOUNT_ID, etc.
*/
resourceDimensions: Map<string, Dimension>;
/**
* Dimensions included on each metric instead of on the Resource. You'd use these for
* downstreams that either do not support Resource dimensions or do something undesirable with them.
*/
sharedDimensions: Map<string, Dimension>;
/**
* Example: `ingest.lightstep.com`
*/
ingestUrl: string;
/**
* Defaults to 443
*/
ingestPort?: number;
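/**
 * Called when sending metrics to the backend fails.
 */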
logError: (message: string, error: unknown) => void;
/**
* Mostly for debugging: when true, logs after metrics are successfully sent to the backend.
* Useful for telling whether the emit promise fully resolved.
*/
doLogSuccess?: boolean;
onSendUnary?: (metrics: Metrics[]) => void;
}
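To illustrate how the two dimension maps differ in practice (a sketch only; the keys and values below are invented for illustration, and plain strings are assumed to be valid `Dimension` values):

// Sketch, not part of this diff: resource dimensions describe the emitting process
// and ride on the OTLP Resource; shared dimensions are stamped onto every metric.
const resourceDimensions = new Map<string, Dimension>([
  ['AWS_REGION', 'us-west-2'],
  ['ACCOUNT_ID', '123456789012'],
]);
const sharedDimensions = new Map<string, Dimension>([
  ['service', 'my-lambda'], // repeated per metric, for Resource-unaware downstreams
]);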

interface ConfigureBatchedUnaryLightstepSinkProps {
batchSize: number;
batchMaxAgeSeconds: number;
@@ -165,7 +200,7 @@
}

/**
- * Configures a unary metric factory which will send and record metrics upon lambda
+ * Configures a unary metric factory pointing to lightstep downstream, which will send and record metrics upon lambda
* completion
* @param props
*/
@@ -205,6 +240,47 @@
});
}

/**
* Configures a unary metric factory pointing to an arbitrary OTLP metrics backend, which will send and record metrics upon lambda
* completion
* @param props
*/
static rawNativeOtlpButItSendsMetricsUponRecordingForLambda(
props: RawNativeLambdaOtlpForLambdaProps
): MetricsFactory {
const headers = [

[CI annotation · GitHub Actions / build · Check failure on line 251 in src/goodmetrics/metricsSetups.ts: Replace `⏎······new·Header(props.authHeaderName,·props.accessToken),⏎····` with `new·Header(props.authHeaderName,·props.accessToken)`]
new Header(props.authHeaderName, props.accessToken),
];
const client = OpenTelemetryClient.connect({
sillyOtlpHostname: props.ingestUrl,
port: props.ingestPort ?? 443,
metricDimensions: props.sharedDimensions,
resourceDimensions: props.resourceDimensions,
interceptors: [
new HeaderInterceptorProvider(headers).createHeadersInterceptor(),
],
});
const unarySink: MetricsSink = {
close(): void {
client.close();
},
async emit(metrics: _Metrics): Promise<void> {
props.onSendUnary?.([metrics]);
try {
await client.sendMetricsBatch([metrics]);
props.doLogSuccess && console.log('metrics sent to backend');
} catch (e) {
props.logError('error while sending blocking metrics', e);
}
},
};

return new MetricsFactory({
metricsSink: unarySink,
totalTimeType: TotaltimeType.DistributionMilliseconds,
});
}
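A minimal usage sketch for the new factory (the enclosing class name `MetricsSetups`, the endpoint, the env var, and all values below are assumptions for illustration, not part of this change):

// Hypothetical wiring in a lambda's initialization code.
const metricsFactory =
  MetricsSetups.rawNativeOtlpButItSendsMetricsUponRecordingForLambda({
    accessToken: process.env.OTLP_TOKEN ?? '', // assumed env var
    authHeaderName: 'api-token',
    ingestUrl: 'otlp.example.com', // any OTLP-compatible backend, not just Lightstep
    ingestPort: 443,
    resourceDimensions: new Map(),
    sharedDimensions: new Map(),
    logError: (message, error) => console.error(message, error),
    doLogSuccess: true, // log once the emit promise resolves
  });

Because `emit` awaits `client.sendMetricsBatch` directly, each recorded batch is flushed before the lambda can freeze, which is the "sends metrics upon recording" behavior the method name describes.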

private static configureBatchedUnaryLightstepSink(
props: ConfigureBatchedUnaryLightstepSinkProps
): SynchronizingBuffer {