
Fix javadoc for HTML elements
ilayaperumalg committed Nov 6, 2024
1 parent c93c6fd commit cbdb578
Showing 3 changed files with 8 additions and 8 deletions.
@@ -23,19 +23,19 @@
/**
* Implementation of {@link Evaluator} used to evaluate the factual accuracy of Large
* Language Model (LLM) responses against provided context.
- * <p/>
+ * <p>
* This evaluator addresses a specific type of potential error in LLM outputs known as
* "hallucination" in the context of grounded factuality. It verifies whether a given
* statement (the "claim") is logically supported by a provided context (the "document").
- * <p/>
+ * <p>
* Key concepts: - Document: The context or grounding information against which the claim
* is checked. - Claim: The statement to be verified against the document.
- * <p/>
+ * <p>
* The evaluator uses a prompt-based approach with a separate, typically smaller and more
* efficient LLM to perform the fact-checking. This design choice allows for
* cost-effective and rapid verification, which is crucial when evaluating longer LLM
* outputs that may require multiple verification steps.
- * <p/>
+ * <p>
* Implementation note: For efficient and accurate fact-checking, consider using
* specialized models like Bespoke-Minicheck, a grounded factuality checking model
* developed by Bespoke Labs and available in Ollama. Such models are specifically
@@ -45,12 +45,12 @@
* Hallucinations with Bespoke-Minicheck</a> and the research paper:
* <a href="https://arxiv.org/pdf/2404.10774v1">MiniCheck: An Efficient Method for LLM
* Hallucination Detection</a>
- * <p/>
+ * <p>
* Note: This evaluator is specifically designed to fact-check statements against given
* information. It's not meant for other types of accuracy tests, like quizzing an AI on
* obscure facts without giving it any reference material to work with (so-called 'closed
* book' scenarios).
- * <p/>
+ * <p>
* The evaluation process aims to determine if the claim is supported by the document,
* returning a boolean result indicating whether the fact-check passed or failed.
*
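The hunk above replaces every self-closing `<p/>` with a bare `<p>`. The reason: javadoc's DocLint (enabled by default in the javadoc tool since JDK 8) rejects `<p/>` with "self-closing element not allowed", while a bare `<p>` paragraph separator is the accepted style. A minimal sketch of the corrected style, with a hypothetical class name:

```java
/**
 * Sketch of the corrected javadoc style: bare {@code <p>} paragraph
 * separators instead of the self-closing {@code <p/>}, which DocLint
 * reports as "self-closing element not allowed".
 * <p>
 * First paragraph of detail.
 * <p>
 * Second paragraph of detail.
 */
public class JavadocParagraphExample {

    /** Returns the separator form that DocLint accepts. */
    public static String acceptedSeparator() {
        return "<p>";
    }

    public static void main(String[] args) {
        System.out.println(acceptedSeparator());
    }
}
```

Running `javadoc` over a class written this way produces no DocLint errors; restoring any `<p/>` would fail the build when `-Xdoclint` is active.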
@@ -53,7 +53,7 @@ static FunctionCallingOptionsBuilder builder() {
void setFunctionCallbacks(List<FunctionCallback> functionCallbacks);

/**
- * @return <@link Set> of function names from the ChatModel registry to be used in the
+ * @return {@link Set} of function names from the ChatModel registry to be used in the
* next chat completion requests.
*/
Set<String> getFunctions();
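The fix above swaps the typo `<@link Set>` for the proper inline tag `{@link Set}`, which the standard doclet renders as a hyperlink to `java.util.Set`; the angle-bracket form is not a javadoc tag, and DocLint flags the stray `<` as malformed HTML. A minimal sketch mirroring the corrected signature (class name and example values are hypothetical):

```java
import java.util.Set;

/** Sketch mirroring the corrected javadoc on getFunctions(). */
public class FunctionNamesExample {

    /**
     * @return {@link Set} of function names to be used in the next chat
     * completion requests. Note the curly-brace {@code {@link ...}} form;
     * the angle-bracket typo is not an inline tag and fails DocLint.
     */
    public static Set<String> getFunctions() {
        return Set.of("currentWeather", "stockPrice");
    }

    public static void main(String[] args) {
        System.out.println(getFunctions().size());
    }
}
```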
@@ -18,7 +18,7 @@

/**
* Enumeration of metric names used in AI observations.
- * <p/>
+ * <p>
* Based on OpenTelemetry's Semantic Conventions for AI systems.
*
* @author Thomas Vitale
