I am using the ARES evaluator with the data given in the docs to evaluate using prediction-powered inference (PPI).

Tried models: OpenAI models (gpt-4, gpt-4o-mini) and multiple Together models.

Error:

```
File "python3.10/site-packages/ares/RAG_Automatic_Evaluation/LLMJudge_RAG_Compared_Scoring.py", line 729, in evaluate_model
    if total_references.nelement() > 0:
AttributeError: 'numpy.ndarray' object has no attribute 'nelement'
```

If I use the default model checkpoint provided with the config, it works successfully.
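For context, `.nelement()` is a `torch.Tensor` method, so the error suggests `total_references` arrives as a `numpy.ndarray` in this code path; the NumPy equivalent of that emptiness check is `.size`. A minimal sketch of the difference (variable name taken from the traceback, the rest is illustrative):

```python
import numpy as np

# total_references reaches line 729 as a numpy.ndarray, which has no
# .nelement() (that is a torch.Tensor method); numpy's equivalent is .size.
total_references = np.array([[1, 2], [3, 4]])

# The numpy counterpart of `if total_references.nelement() > 0:`
non_empty = total_references.size > 0

print(total_references.size)  # 4
print(non_empty)              # True
```

This points at an upstream type mismatch: with custom OpenAI/Together judge models, the references appear to stay as NumPy arrays instead of being converted to tensors as they are for the default checkpoint.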
Do I need to provide the checkpoints every time?