diff --git a/nemoguardrails/eval/README.md b/nemoguardrails/eval/README.md
index f02fef5bd..e8b53936a 100644
--- a/nemoguardrails/eval/README.md
+++ b/nemoguardrails/eval/README.md
@@ -121,7 +121,7 @@ Results on _banking_ dataset, metric used is accuracy.
 In the Guardrails library, we provide two approaches out of the box for the fact-checking rail, these are colloquially referred to as AskLLM and AlignScore in the rest of this documentation. For more details read the [library guide](./../../docs/user_guides/guardrails-library.md).

 #### AskLLM
-In this approach, the fact-checking rail is implemented as an entailment prediction problem. Given an evidence passage and the predicted answer, we prompt an LLM to predict yes/no to whether the answer grounded in the evidence or not. This is the default approach.
+In this approach, the fact-checking rail is implemented as an entailment prediction problem. Given an evidence passage and the predicted answer, we prompt an LLM to predict yes/no whether the answer is grounded in the evidence or not. This is the default approach.

 #### AlignScore
 This approach is based on the AlignScore model [Zha et al. 2023](https://aclanthology.org/2023.acl-long.634.pdf). Given an evidence passage and the predicted answer, the model is finetuned to predict that they are aligned when:
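
The AskLLM approach touched by this diff can be sketched as a prompt-and-parse step. This is a minimal, hypothetical illustration only: the function names, prompt wording, and parsing logic below are assumptions for clarity, not the actual NeMo Guardrails implementation.

```python
# Hypothetical sketch of the AskLLM fact-checking rail: frame the check as
# entailment prediction and ask an LLM for a yes/no verdict on whether the
# answer is grounded in the evidence. Names and prompt text are illustrative.

def build_fact_check_prompt(evidence: str, answer: str) -> str:
    """Build an entailment-style prompt from an evidence passage and an answer."""
    return (
        "You are given an evidence passage and a predicted answer.\n"
        f"Evidence: {evidence}\n"
        f"Answer: {answer}\n"
        "Is the answer grounded in the evidence? Reply with 'yes' or 'no'."
    )


def parse_verdict(llm_output: str) -> bool:
    """Interpret the LLM reply: True means the answer is entailed by the evidence."""
    return llm_output.strip().lower().startswith("yes")


# Example usage with a stubbed LLM reply (no real model call is made here):
prompt = build_fact_check_prompt(
    evidence="The bank charges a $5 monthly fee for basic checking accounts.",
    answer="Basic checking accounts cost $5 per month.",
)
stubbed_reply = "Yes, the answer is supported by the evidence."
is_grounded = parse_verdict(stubbed_reply)
```

In practice the prompt would be sent to the configured LLM and the parsed verdict would decide whether the response passes the rail.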