
DOC: <Issue related to /evaluation/how_to_guides/evaluation/fetch_perf_metrics_experiment> #510

Open
majorgilles opened this issue Nov 6, 2024 · 1 comment

Comments

@majorgilles

Hello, in practice, when we do

results = evaluate(
  lambda inputs: "Hello " + inputs["input"],
  data=dataset_name,
  evaluators=[foo_label],
  experiment_prefix="Hello",
)

resp = client.read_project(project_name=results.experiment_name, include_stats=True)

we get a "not found" error. If we instead use the evaluator's project name (a project name we can actually find under Tracing projects in the LangSmith UI), we do get results. What is happening here? Is the documentation outdated?
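For reference, the call that does return results looks roughly like this; the project name below is a placeholder for whatever we see under Tracing projects, not an actual value:

# Workaround we tried: read the tracing project by the name shown in the UI
# ("<name under Tracing projects>" is a placeholder, not a real project name)
resp = client.read_project(
    project_name="<name under Tracing projects>",
    include_stats=True,
)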

@hinthornw
Collaborator

Thank you for reporting!

Running that code gives a "not found" error on the read_project call?

I just ran it again and it worked as expected:

import langsmith as ls

client = ls.Client()

dataset_name = "Evaluate Examples"


# Trivial evaluator that scores every run as 1
def foo_label(run, example):
    return {"score": 1}


results = ls.evaluate(
    lambda inputs: "Hello " + str(inputs),
    data=dataset_name,
    evaluators=[foo_label],
    experiment_prefix="Hello",
)

# Read the experiment back by its auto-generated name to get aggregate stats
resp = client.read_project(project_name=results.experiment_name, include_stats=True)

How are you initializing your client?
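In case it helps, here is a minimal sketch of explicit initialization; the API key and URL below are placeholders, and the client can also pick these up from environment variables such as LANGCHAIN_API_KEY and LANGCHAIN_ENDPOINT:

import langsmith as ls

# Explicit configuration; both values below are placeholders
client = ls.Client(
    api_url="https://api.smith.langchain.com",
    api_key="<your-api-key>",
)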
