Add Eunomia Integration Tests #389

Open · wants to merge 3 commits into base: develop

Conversation

LeonWehrhahn (Contributor)

Motivation and Context

Evaluating the quality of LLM-generated feedback for modeling exercises is challenging due to the variability and subjectivity of natural language. Traditional evaluation methods struggle with the nuances of feedback expression, and human evaluation, while possible, is time-consuming, expensive, and inconsistent. This PR introduces the Eunomia integration test to address these challenges by providing an automated and consistent framework for evaluating LLM feedback generation. The integration test leverages predefined Structured Grading Instructions (SGIs) to transform the evaluation into a comparison of instruction IDs, allowing for scalable and objective assessment.
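To illustrate the idea, here is a minimal sketch of how evaluation reduces to an instruction-ID comparison: each generated feedback item cites the ID of the Structured Grading Instruction it applied, so its correctness can be scored by comparing the cited IDs against a reference set. This is not the PR's actual test code (see the integration test's README for that); all names and data shapes below are hypothetical.

```python
# Hypothetical sketch: scoring LLM feedback by comparing the Structured
# Grading Instruction (SGI) IDs it cites against an expected reference set.
# Field and function names are illustrative, not the PR's API.

def evaluate_feedback(generated_feedback: list[dict], expected_ids: set[int]) -> dict:
    """Compare the SGI IDs cited by generated feedback with the expected IDs."""
    generated_ids = {f["structured_grading_instruction_id"] for f in generated_feedback}

    true_positives = generated_ids & expected_ids
    false_positives = generated_ids - expected_ids   # cited but not expected
    false_negatives = expected_ids - generated_ids   # expected but not cited

    precision = len(true_positives) / len(generated_ids) if generated_ids else 0.0
    recall = len(true_positives) / len(expected_ids) if expected_ids else 0.0

    return {
        "precision": precision,
        "recall": recall,
        "missed_ids": sorted(false_negatives),
        "spurious_ids": sorted(false_positives),
    }

# Example: the LLM applied instructions 3 and 5; the reference expects 3 and 7.
feedback = [
    {"structured_grading_instruction_id": 3, "text": "Missing association."},
    {"structured_grading_instruction_id": 5, "text": "Wrong multiplicity."},
]
print(evaluate_feedback(feedback, expected_ids={3, 7}))
# -> {'precision': 0.5, 'recall': 0.5, 'missed_ids': [7], 'spurious_ids': [5]}
```

Because the comparison operates on IDs rather than free-form text, the result is deterministic and independent of how the feedback is phrased, which is what makes the evaluation scalable and objective.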

Description

The implementation is outlined in detail in the README of the integration test.

Steps for Testing

Testserver States

Note

These badges show the state of the test servers.
Green = currently available, red = currently locked.
Click on a badge to go to the corresponding test server.


Screenshots
