diff --git a/_posts/2024-08-29-llms-dont-hallucinate.md b/_posts/2024-08-29-llms-dont-hallucinate.md
index ef6a002b27..c6459c188d 100644
--- a/_posts/2024-08-29-llms-dont-hallucinate.md
+++ b/_posts/2024-08-29-llms-dont-hallucinate.md
@@ -14,17 +14,16 @@ LLMs can confidently tell you all about the winners of non-existent sporting
events[^7]. They can invent legal cases[^8] and fabricate fake science[^9].
These kinds of outputs are often called **hallucinations**.
A desert mirage. Photo credit: Ashabot, Wikimedia, CC BY-SA 4.0
@@ -46,17 +45,16 @@ But is it so simple? A growing chorus of academics, engineers and journalists
are calling this narrative into question.[^3] **Maybe hallucination isn't a
solvable problem at all.**
Taken at the AI Safety Summit hosted at Bletchley Park, Nov 2023. Photo credit: Marcel Grabowski, CC BY-NC-ND 2.0
@@ -132,16 +130,15 @@ These points are pretty subtle, so let's explore a couple of examples.
Think about dice. I could roll six sixes in a row, and that would be really
unusual behaviour for the die.
Photo credit: barefootliam-stock, CC BY 3.0
@@ -159,18 +156,17 @@ sixes in a row.
In contrast, if I secure a picture to the wall with a picture hook, and it
falls off, I can reasonably expect there to be an explanation.
A picture hook behaving abnormally. Mind you, the chap on the ladder appears to have bigger problems right now. Photo credit: lookandlearn.com, CC BY 4.0
@@ -220,17 +216,16 @@ faithfulness. They are not trained to target truth. They are trained to guess
the most plausible next word in the given sequence. So the case where the LLM
'hallucinates' and the case where it doesn't are causally indistinguishable.
The solution given by LLMs is new. The problem of predictive text is not. Photo credit: Richard Masoner, CC BY-SA 2.0
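The training objective described above can be sketched with a toy bigram model. This is an illustrative simplification of "guess the most plausible next word" (the corpus and function names here are mine, and a real LLM is a neural network trained on vastly more data), but it shows why the objective never consults truth, only frequency:

```python
from collections import Counter, defaultdict

# A tiny stand-in for the training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_plausible_next(word):
    # Return the most frequent continuation seen in training.
    # Nothing here checks whether the continuation is *true*.
    return following[word].most_common(1)[0][0]

print(most_plausible_next("the"))  # "cat": the commonest word after "the"
```

Whether the output happens to be accurate or a 'hallucination', the mechanism that produced it is identical, which is the causal indistinguishability the paragraph above describes.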
@@ -307,19 +302,18 @@ humans who were really responsible. And where there's a lack of accountability,
organisations are unable to make good decisions or learn from their
mistakes[^2].
Just as crumple zones in a car are designed to absorb the impact of a collision to protect its driver, so misplaced and misunderstood LLMs could act to absorb responsibility from decision-makers. Photo credit: Karrmann, Wikimedia, CC BY-SA 3.0
@@ -385,17 +379,16 @@ can convince my crush that I'm a sophisticated gentleman that knows his Islays
from his Arrans, she just might consider going on a date with me. Actually
saying something true or meaningful about whisky is not the task at hand.
I like the long notes of smoke and spices in this one. Mmm, gingery. Photo credit: Pixabay, CC0 1.0