Add telemetry support? #153
Comments
I'm open to telemetry support. I'm not very experienced in what makes a good telemetry event and would love feedback, PRs, etc. |
I was just wondering why there is no support for it yet :D I added this on top of the getting started livebook and it's enough to see what is sent, but it would be better to have dedicated langchain events:

```elixir
frame = Kino.Frame.new()

defmodule LiveTelemetryHandler do
  # The Kino frame is passed in through the handler config (the last
  # argument to :telemetry.attach_many/4); a module attribute can't
  # capture the outer `frame` variable.
  def handle_event([:finch, :request, :start] = event, measurements, metadata, config) do
    content = [
      Kino.Markdown.new("### Event: `#{inspect(event)}`"),
      Kino.Markdown.new("#### Measurements:"),
      Kino.Text.new(pretty_print(measurements)),
      Kino.Markdown.new("#### Metadata:"),
      Kino.Text.new(pretty_print(IO.iodata_to_binary(metadata.request.body))),
      Kino.Markdown.new("---")
    ]

    Enum.each(content, &Kino.Frame.append(config.frame, &1))
  end

  def handle_event([:finch, :request, :stop] = event, measurements, metadata, config) do
    content = [
      Kino.Markdown.new("### Event: `#{inspect(event)}`"),
      Kino.Markdown.new("#### Measurements:"),
      Kino.Text.new(pretty_print(measurements)),
      Kino.Markdown.new("#### Metadata:"),
      # Kino.Text.new(elem(metadata.result, 1).body),
      Kino.Markdown.new("---")
    ]

    Enum.each(content, &Kino.Frame.append(config.frame, &1))
  end

  defp pretty_print(data) do
    data
    |> inspect(pretty: true, width: 60)
    |> String.split("\n")
    |> Enum.map_join("\n", &(" " <> &1))
  end
end

:telemetry.attach_many(
  "live-telemetry",
  [
    [:finch, :request, :start],
    [:finch, :request, :stop]
    # Add more event names as needed
  ],
  &LiveTelemetryHandler.handle_event/4,
  %{frame: frame}
)

frame
```
|
I guess the callbacks don't return timestamps for the calls to OpenAI or other LLMs for now? I'm looking for that so I can send it to Langfuse, Langsmith, or other observability tools. |
Hi @georgeguimaraes! The APIs themselves don't return a server-created timestamp, so there is nothing to return. The callbacks fire as they happen, so if you want or need a timestamp, just generate it at that time. |
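As a minimal sketch of that idea (the module and function name here are hypothetical, not part of the LangChain API): stamp each response locally at the moment the callback fires, then export the pair.

```elixir
# Hypothetical callback: the LLM APIs don't return a server-side
# timestamp, so we generate one locally when the callback fires.
defmodule ObservedMessage do
  def on_message(message) do
    observed_at = DateTime.utc_now()

    # Ship this map to Langfuse, Langsmith, or another
    # observability tool from here.
    %{message: message, observed_at: observed_at}
  end
end
```

The timestamp reflects when the response was observed by your application, not when the provider produced it, which is usually close enough for tracing.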
@brainlid Do you think it would be useful to add telemetry at this point?
I imagine emitting telemetry events with the duration of the response cycle, token usage, and errors.
If you think it is a good idea, I could work on a PR.
Thanks for the work!
Originally posted by @tubedude in #103 (reply in thread)