Hi, @Padarn. Thanks for raising the question! That part is for merging checking results when different segments are used as references. Ideally, there should be no difference between giving "Entailment" or "Contradiction" precedence, because we make the simplifying assumption that the whole reference is self-consistent. However, conflicts within the reference do happen in real-world applications, and how to handle them is an open research question. If you have any ideas, you're welcome to discuss them in this thread.

As a temporary workaround, we might consider exposing the option so that users can choose which label takes precedence when conflicts happen. Thanks again!
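To make the workaround concrete, here is a minimal sketch of exposing the precedence (the `merge_labels` helper, the label strings, and the `precedence` parameter are illustrative assumptions, not RefChecker's actual API):

```python
# Sketch only: merge per-segment labels for one claim, with the label
# precedence exposed as a user-controlled option.
DEFAULT_PRECEDENCE = ("Entailment", "Contradiction", "Neutral")

def merge_labels(labels, precedence=DEFAULT_PRECEDENCE):
    """Collapse the labels from all reference segments into a single label.

    `precedence` decides which label wins when segments disagree, e.g.
    ("Contradiction", "Entailment", "Neutral") makes any contradicting
    segment flag the claim as problematic.
    """
    for label in precedence:
        if label in labels:
            return label
    return "Neutral"  # nothing matched; treat the claim as unverified

# One segment supports the claim, another contradicts it:
labels = ["Neutral", "Entailment", "Contradiction"]
print(merge_labels(labels))                                              # Entailment
print(merge_labels(labels, ("Contradiction", "Entailment", "Neutral")))  # Contradiction
```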
What about providing a more nuanced 'agreement' score rather than a binary classification? It would probably be better to have the LLM classify each segment and then compute the score during aggregation (to avoid calibration problems with raw LLM scores).

Perhaps just adding an 'Inconsistent' class when the results are mixed would be easier to understand?
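Roughly what I have in mind (purely a sketch; the `aggregate` helper, the label strings, and the agreement definition are my own assumptions, not anything in RefChecker):

```python
# Sketch: the LLM still emits a hard label per reference segment; the soft
# agreement score and the "Inconsistent" label are computed during aggregation.
from collections import Counter

def aggregate(labels):
    counts = Counter(labels)
    total = len(labels) or 1
    # The score comes from counting the LLM's discrete decisions, so it does
    # not rely on calibrated probabilities from the model itself.
    agreement = counts["Entailment"] / total
    if counts["Entailment"] and counts["Contradiction"]:
        label = "Inconsistent"   # the reference disagrees with itself
    elif counts["Contradiction"]:
        label = "Contradiction"
    elif counts["Entailment"]:
        label = "Entailment"
    else:
        label = "Neutral"
    return label, agreement

print(aggregate(["Entailment", "Contradiction", "Entailment"]))  # ('Inconsistent', 0.666...)
```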
Hi there,
I see that when you're merging results in your CheckerBase you give 'Entailment' precedence: https://github.com/amazon-science/RefChecker/blob/64e7c34b5fd4f6af7a5227473458619a3d92ad5b/refchecker/checker/checker_base.py#L6C1-L23C21
I see there is a TODO there, but I'd have thought the default would be that any contradiction indicates a problem, something like the sketch below.
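Just to illustrate what I mean (a hypothetical helper, not the code linked above):

```python
# Sketch of the default I expected: any contradicting segment wins.
def merge_labels(segment_labels):
    if "Contradiction" in segment_labels:
        return "Contradiction"
    if "Entailment" in segment_labels:
        return "Entailment"
    return "Neutral"
```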
I'm just curious about the thought process, to make sure I understand your approach.
Thanks!