docs(example/llm): init #254

Merged (6 commits, Jul 3, 2024)
doc/examples.md: 3 changes (2 additions, 1 deletion)
@@ -1,3 +1,4 @@
# Examples

-- [ ] [[share-services]]#
+- [[share-services]]#
+- [[llm]]#
doc/llm.md: 27 changes (27 additions, 0 deletions)
@@ -0,0 +1,27 @@
---
page:
  image: llm.png
---

# Running a local LLM using ollama and open-webui

While `services-flake` is generally used for running services in a *development* project (typically under a source-code checkout), you can also write flakes that derive an end-user app running a group of services, which can then be launched with `nix run` (or installed with `nix profile install`):

```sh
# You can also use `nix profile install` on this URL, and run `services-flake-llm`
nix run "github:juspay/services-flake?dir=example/llm"
```
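
For reference, an end-user flake like this typically follows the standard `process-compose-flake` + `services-flake` layout. The sketch below is illustrative rather than a copy of `example/llm`: the input pins, the `services-flake-llm` attribute name, and the instance names `ollama1` / `open-webui1` are assumptions, and any extra wiring the real example does (such as pointing open-webui at ollama's address) is omitted.

```nix
# flake.nix (sketch): an end-user app bundling ollama and open-webui.
# Input pins, attribute names and instance names are illustrative.
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixpkgs-unstable";
    flake-parts.url = "github:hercules-ci/flake-parts";
    process-compose-flake.url = "github:Platonic-Systems/process-compose-flake";
    services-flake.url = "github:juspay/services-flake";
  };
  outputs = inputs:
    inputs.flake-parts.lib.mkFlake { inherit inputs; } {
      systems = [ "x86_64-linux" "aarch64-linux" "x86_64-darwin" "aarch64-darwin" ];
      imports = [ inputs.process-compose-flake.flakeModule ];
      perSystem = { ... }: {
        # process-compose-flake turns this module into a runnable package,
        # named after the attribute key, that supervises both processes.
        process-compose."services-flake-llm" = {
          imports = [ inputs.services-flake.processComposeModules.default ];
          services.ollama."ollama1".enable = true;
          services.open-webui."open-webui1".enable = true;
        };
      };
    };
}
```

With a flake along these lines, `nix run .#services-flake-llm` starts both services under process-compose; the plain `nix run` invocation above additionally relies on the example exposing a default package (for instance by naming the `process-compose` module `default`).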

<center>
<blockquote class="twitter-tweet" data-media-max-width="560"><p lang="en" dir="ltr">Want to run your own AI chatbot (like ChatGPT) locally? You can do that with Nix.<br><br>Powered by services-flake (<a href="https://twitter.com/hashtag/NixOS?src=hash&amp;ref_src=twsrc%5Etfw">#NixOS</a>), using <a href="https://twitter.com/OpenWebUI?ref_src=twsrc%5Etfw">@OpenWebUI</a> and <a href="https://twitter.com/ollama?ref_src=twsrc%5Etfw">@ollama</a>. <br><br>See example: <a href="https://t.co/dyItC93Pya">https://t.co/dyItC93Pya</a> <a href="https://t.co/DeDow8bEPw">pic.twitter.com/DeDow8bEPw</a></p>&mdash; NixOS Asia (@nixos_asia) <a href="https://twitter.com/nixos_asia/status/1803065244568244578?ref_src=twsrc%5Etfw">June 18, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</center>

>[!note]
>This will download about 9GB of data before launching the Web UI. You can choose a different model or no model (see below) to minimize or avoid this delay.

## Default configuration & models

`example/llm` runs two processes: [[ollama]] and [[open-webui]].

- The ollama data is stored under `$HOME/.services-flake/llm/ollama`. You can change this path in `flake.nix` by setting the `dataDir` option.
- A single model ([`deepseek-coder-v2`](https://ollama.com/library/deepseek-coder-v2)) is downloaded automatically. You can change this in `flake.nix` as well by setting the `models` option (see the sketch after this list). You can also download models from the open-webui UI.
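
For illustration, here is roughly how those two options sit on the ollama service definition inside the `process-compose` module of `flake.nix`; the instance name is hypothetical and the values simply restate the defaults described above.

```nix
{
  # Sketch of the ollama service block; the instance name "ollama1" is hypothetical.
  services.ollama."ollama1" = {
    enable = true;
    # Where ollama keeps downloaded models and state (the default path noted above);
    # change `dataDir` to relocate it.
    dataDir = "$HOME/.services-flake/llm/ollama";
    # Models pulled automatically on first start; an empty list ([ ]) avoids the ~9GB download.
    models = [ "deepseek-coder-v2" ];
  };
}
```
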
Binary file added doc/llm.png