Bug Description
I am using the ground truth agreement feedback function, and when I print the result for that feedback, it is None. All other feedbacks work fine.
To Reproduce
from trulens.providers.litellm import LiteLLM
from trulens.core import TruSession, Feedback
from trulens.feedback import GroundTruthAgreement
from trulens.apps.llamaindex import TruLlama

trulens_provider = LiteLLM(model)

qa_dataset = [{"query": "Who are you?", "response": "I am an ai Chatbot"}]

f_groundtruth = Feedback(
    GroundTruthAgreement(
        ground_truth=qa_dataset,
        provider=trulens_provider,
    ).agreement_measure,
    name="Answer Correctness",
).on_input_output()

tru_recorder = TruLlama(
    query_engine,
    app_name="Naive RAG Eval Llama",
    app_version="1.0",
    feedbacks=[f_groundtruth],
)

with tru_recorder as recording:
    query_engine.query(qa_dataset[0]["query"])

rec = recording.get()

for feedback, feedback_result in rec.wait_for_feedback_results().items():
    print(f"{feedback.name}: {feedback_result.result}")
Expected behavior
The ground truth feedback should provide the answer correctness score in the result.
Environment:
OS: macOS
Python Version: 3.12
TruLens version: 1.2.7