diff --git a/doc/examples.md b/doc/examples.md
index 50ad65cc..ea8abf22 100644
--- a/doc/examples.md
+++ b/doc/examples.md
@@ -1,3 +1,4 @@
# Examples
-- [ ] [[share-services]]#
+- [[share-services]]#
+- [[llm]]#
diff --git a/doc/llm.md b/doc/llm.md
new file mode 100644
index 00000000..da7bc6dd
--- /dev/null
+++ b/doc/llm.md
@@ -0,0 +1,62 @@
+---
+page:
+ image: llm.png
+template:
+ toc:
+ enable: false
+---
+
+# Local AI chatbot
+
+The [`llm` example][source] allows you to run advanced AI chatbots and services on your own computer with just one command. Once you've downloaded the model, you can use it without needing a constant internet connection.
+
+![[llm.png]]
+
+> [!tip] On dev vs app mode
+>
+> **services-flake** can be used in two main ways:
+>
+> 1. Running services in a *development* project, typically alongside a source-code checkout.
+> 1. Building end-user *apps* that run a group of services.
+>
+> This example demonstrates the second use case: such *apps* can be launched with `nix run` or installed with `nix profile install` (a sketch of such a flake follows).
+
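+For orientation, here is a simplified sketch of what such an app-style flake looks like. It is not the verbatim contents of `example/llm/flake.nix` (instance names and the systems list are illustrative); the key idea is that process-compose-flake turns the `process-compose` definition into a package that bundles the services, which `nix run` then executes:
+
+```nix
+{
+  inputs = {
+    nixpkgs.url = "github:nixos/nixpkgs/nixpkgs-unstable";
+    flake-parts.url = "github:hercules-ci/flake-parts";
+    process-compose-flake.url = "github:Platonic-Systems/process-compose-flake";
+    services-flake.url = "github:juspay/services-flake";
+  };
+  outputs = inputs:
+    inputs.flake-parts.lib.mkFlake { inherit inputs; } {
+      systems = [ "x86_64-linux" "aarch64-darwin" ];
+      imports = [ inputs.process-compose-flake.flakeModule ];
+      perSystem = { self', ... }: {
+        # process-compose-flake builds this into a package named "services-flake-llm".
+        process-compose."services-flake-llm" = {
+          imports = [ inputs.services-flake.processComposeModules.default ];
+          # Instance names ("ollama1", "open-webui1") are illustrative.
+          services.ollama."ollama1".enable = true;
+          services.open-webui."open-webui1".enable = true;
+        };
+        # Pointing `packages.default` at it lets a plain `nix run` pick it up.
+        packages.default = self'.packages."services-flake-llm";
+      };
+    };
+}
+```
+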
+{#run}
+## Running the app
+
+To run the local AI chatbot and launch the Web UI:
+
+```sh
+# You can also use `nix profile install` on this URL, and run `services-flake-llm`
+nix run "github:juspay/services-flake?dir=example/llm"
+```
+
+Before launching the Web UI, this command first downloads the [`phi3`] model (about 2.4 GB). To reduce or avoid this delay, you can:
+
+1. Choose a different model, or
+2. Use no model at all
+
+See the *Default configuration & models* section below for more options.
+
+### Demo
+
+
+
+
+
+{#default-config}
+## Default configuration & models
+
+The [example][source] runs two processes: [[ollama]] and [[open-webui]].
+
+Key points:
+
+1. **Data storage:**
+ - Ollama data is stored in `$HOME/.services-flake/llm/ollama`
+ - To change this location, edit the `dataDir` option in `flake.nix`
+2. **Model management**:
+ - By default, the [`phi3`] model is automatically downloaded
+ - To change or add models (see the sketch below), either:
+   - edit the `models` option in `flake.nix`, or
+   - download additional models from the open-webui interface
+
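+Both of these are ordinary options on the Ollama service instance in `flake.nix`. A hedged sketch of the relevant snippet (the instance name `ollama1` is illustrative):
+
+```nix
+services.ollama."ollama1" = {
+  enable = true;
+  # Where Ollama keeps its models and state:
+  dataDir = "$HOME/.services-flake/llm/ollama";
+  # Models pulled on first start; use [ ] to skip the download entirely:
+  models = [ "phi3" ];
+};
+```
+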
+[`phi3`]: https://ollama.com/library/phi3
+[source]: https://github.com/juspay/services-flake/tree/main/example/llm
\ No newline at end of file
diff --git a/doc/llm.png b/doc/llm.png
new file mode 100644
index 00000000..76856e0b
Binary files /dev/null and b/doc/llm.png differ
diff --git a/example/llm/README.md b/example/llm/README.md
index 85d34827..0f30d3c1 100644
--- a/example/llm/README.md
+++ b/example/llm/README.md
@@ -1,18 +1 @@
-# Running local LLM using ollama and open-webui
-
-While `services-flake` is generally used for running services in a *development* project, typically under a source code checkout, you can also write flakes to derive an end-user app which runs a group of services, which then can be run using `nix run` (or installed using `nix profile install`):
-
-```sh
-# You can also use `nix profile install` on this URL, and run `services-flake-llm`
-nix run "github:juspay/services-flake?dir=example/llm"
-```
-
->[!NOTE]
->This will download about 9GB of data before launching the Web UI. You can choose a different model or no model (see below) to minimize or avoid this delay.
-
-## Default configuration & models
-
-`example/llm` runs two processes ollama and open-webui
-
-- The ollama data is stored under `$HOME/.services-flake/llm/ollama`. You can change this path in `flake.nix` by setting the `dataDir` option.
-- A single model ([`deepseek-coder-v2`](https://ollama.com/library/deepseek-coder-v2)) is automatically downloaded. You can modify this in `flake.nix` as well by setting the `models` option. You can also download models in the open-webui UI.
+See https://community.flake.parts/services-flake/llm
diff --git a/example/llm/flake.nix b/example/llm/flake.nix
index 4d7d2045..c7ce5582 100644
--- a/example/llm/flake.nix
+++ b/example/llm/flake.nix
@@ -36,7 +36,7 @@
# models manually in the UI.
#
# Search for the models here: https://ollama.com/library
- models = [ "deepseek-coder-v2" ];
+ models = [ "phi3" ];
};
# Get ChatGPT like UI, but open-source, with Open WebUI