From 17187c0c40ebc08a7ade7c154dc436861a4e9b38 Mon Sep 17 00:00:00 2001
From: Marcel Klehr
Date: Thu, 10 Oct 2024 10:23:27 +0200
Subject: [PATCH] enh(AI): More additions to AI admin docs

Signed-off-by: Marcel Klehr
---
 admin_manual/ai/ai_as_a_service.rst | 10 +++++++++-
 admin_manual/ai/app_assistant.rst   | 15 +++++++++++++++
 admin_manual/ai/app_llm2.rst        | 11 ++++++++---
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/admin_manual/ai/ai_as_a_service.rst b/admin_manual/ai/ai_as_a_service.rst
index 14c75acfb86..6b98be31f28 100644
--- a/admin_manual/ai/ai_as_a_service.rst
+++ b/admin_manual/ai/ai_as_a_service.rst
@@ -11,8 +11,16 @@ Installation
 
 In order to use these providers you will need to install the respective app from the app store:
 
-* ``integration_openai`` (With this application, you can also connect to a self-hosted LocalAI instance or to any service that implements an API similar to OpenAI, for example Plusserver or MistralAI.)
+* ``integration_openai``
 * ``integration_replicate``
 
 You can then add your API token and rate limits in the administration settings and set the providers live in the "Artificial intelligence" section of the admin settings.
+
+
+OpenAI integration
+------------------
+
+With this application, you can also connect to a self-hosted LocalAI or Ollama instance, or to any service that implements an API similar enough to the OpenAI API, for example Plusserver or MistralAI.
+
+Do note, however, that we test the Assistant tasks this app implements only with OpenAI models and only against the OpenAI API; we thus cannot guarantee that other models and APIs will work.
\ No newline at end of file
diff --git a/admin_manual/ai/app_assistant.rst b/admin_manual/ai/app_assistant.rst
index 24e93134e0f..9082053d0aa 100644
--- a/admin_manual/ai/app_assistant.rst
+++ b/admin_manual/ai/app_assistant.rst
@@ -63,6 +63,20 @@ In order to make use of text processing features in the assistant, you will need
 
 * :ref:`llm2` - Runs open source AI language models locally on your own server hardware (Customer support available upon request)
 * *integration_openai* - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service`)
 
+These apps currently implement the following Assistant tasks:
+
+* *Generate text* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+* *Summarize* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+* *Generate headline* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+* *Extract topics* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+
+Additionally, *integration_openai* implements the following Assistant tasks:
+
+* *Context write* (Tested with OpenAI GPT-3.5)
+* *Reformulate text* (Tested with OpenAI GPT-3.5)
+
+These tasks may work with other models, but we cannot give any guarantees.
+
 Text-To-Image
 ~~~~~~~~~~~~~
@@ -79,6 +93,7 @@ In order to make use of our special Context Chat feature, offering in-context in
 
 * :ref:`context_chat + context_chat_backend` - (Customer support available upon request)
+You will also need a text processing provider as specified above (i.e. *llm2* or *integration_openai*).
 Configuration
 -------------
diff --git a/admin_manual/ai/app_llm2.rst b/admin_manual/ai/app_llm2.rst
index a78d42ab494..3fbf049c506 100644
--- a/admin_manual/ai/app_llm2.rst
+++ b/admin_manual/ai/app_llm2.rst
@@ -6,10 +6,13 @@ App: Local large language model (llm2)
 
 The *llm2* app is one of the apps that provide text processing functionality using Large language models in Nextcloud and act as a text processing backend for the :ref:`Nextcloud Assistant app`, the *mail* app and :ref:`other apps making use of the core Text Processing API`. The *llm2* app specifically runs only open source models and does so entirely on-premises. Nextcloud can provide customer support upon request, please talk to your account manager for the possibilities.
 
-This app uses `ctransformers `_ under the hood and is thus compatible with any model in *gguf* format. Output quality will differ depending on which model you use, we recommend the following models:
+This app uses `llama.cpp `_ under the hood and is thus compatible with any model in *gguf* format.
 
-* `Llama3 8b Instruct `_ (reasonable quality; fast; good acclaim; multilingual output may not be optimal)
-* `Llama3 70B Instruct `_ (good quality; good acclaim; good multilingual output)
+However, we only test with Llama 3.1. Output quality will differ depending on which model you use, and downstream tasks such as summarization or Context Chat may not work with other models.
+We thus recommend the following models:
+
+* `Llama 3.1 8B Instruct `_ (reasonable quality; fast; good acclaim; comes shipped with the app)
+* `Llama 3.1 70B Instruct `_ (good quality; good acclaim)
 
 Multilinguality
 ---------------
@@ -27,6 +30,8 @@ Llama 3.1 `supports the following languages:
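The `ai_as_a_service.rst` hunk above says that `integration_openai` can talk to any service implementing "an API similar enough to the OpenAI API". Concretely, that means accepting the OpenAI-style chat-completions request shape. A minimal sketch of that shape follows; the endpoint path is the standard OpenAI one, while the model name and message contents are illustrative placeholders, not defaults of the app:

```python
import json

# Shape of an OpenAI-style chat completions request, as accepted by OpenAI
# itself and by compatible services such as LocalAI or Ollama.
# The model name and messages below are illustrative placeholders.
endpoint = "/v1/chat/completions"
payload = {
    "model": "some-hosted-model",  # whatever model the service actually hosts
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this text: ..."},
    ],
    "temperature": 0.7,
}

# A compatible service must accept this JSON body POSTed to <base URL> + endpoint.
body = json.dumps(payload)
```

A service that rejects this request shape, or returns a differently structured response, is what the hunk's "we cannot guarantee other models and APIs will work" caveat is about.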
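The Assistant task lists added to `app_assistant.rst` above can be summarized as a small lookup table. This is purely illustrative bookkeeping over the documented bullet lists, not an interface exposed by either app:

```python
# Which provider apps implement which Assistant tasks, per the lists
# added to app_assistant.rst in this patch. Illustrative only.
TASKS = {
    "Generate text":     {"llm2", "integration_openai"},
    "Summarize":         {"llm2", "integration_openai"},
    "Generate headline": {"llm2", "integration_openai"},
    "Extract topics":    {"llm2", "integration_openai"},
    "Context write":     {"integration_openai"},
    "Reformulate text":  {"integration_openai"},
}

def providers_for(task: str) -> set:
    """Return the set of apps documented as implementing the given task."""
    return TASKS.get(task, set())
```

Note that "Context write" and "Reformulate text" are only available with `integration_openai`; installing `llm2` alone does not provide them.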
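The `app_llm2.rst` hunk notes that the switch to llama.cpp makes the app compatible with any model in *gguf* format. GGUF files are identifiable by the four ASCII magic bytes `GGUF` at the start of the file, which gives a cheap sanity check for a downloaded model before pointing a llama.cpp-based app at it. A minimal sketch (the helper name and demo file are hypothetical, not part of the app):

```python
from pathlib import Path

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF-format model file

def looks_like_gguf(path: str) -> bool:
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

# Demo with a throwaway file standing in for a real model download:
demo = Path("demo.gguf")
demo.write_bytes(GGUF_MAGIC + b"\x00" * 16)
ok = looks_like_gguf(str(demo))
demo.unlink()
```

This only checks the container format; as the hunk says, output quality and downstream-task behavior still depend on which model the file actually contains.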