[stable30] enh(AI): More additions to AI admin docs #12272

Open, wants to merge 1 commit into base: stable30
10 changes: 9 additions & 1 deletion admin_manual/ai/ai_as_a_service.rst
@@ -11,8 +11,16 @@ Installation

In order to use these providers you will need to install the respective app from the app store:

* ``integration_openai`` (With this application, you can also connect to a self-hosted LocalAI instance or to any service that implements an API similar to OpenAI, for example Plusserver or MistralAI.)
* ``integration_openai``

* ``integration_replicate``

You can then add your API token and rate limits in the administration settings and set the providers live in the "Artificial intelligence" section of the administration settings.
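As a sketch, the apps can also be installed from the command line via ``occ``, assuming a typical setup where the web server user is ``www-data`` and the command is run from the Nextcloud installation directory (the API token itself is still entered in the administration settings as described above):

```shell
# Install the provider apps from the app store via occ
# (run as the web server user from the Nextcloud root directory)
sudo -u www-data php occ app:install integration_openai
sudo -u www-data php occ app:install integration_replicate

# Verify that the apps are installed and enabled
sudo -u www-data php occ app:list | grep -E 'integration_(openai|replicate)'
```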


OpenAI integration
------------------

With this application, you can also connect to a self-hosted LocalAI or Ollama instance or to any service that implements an API similar enough to the OpenAI API, for example Plusserver or MistralAI.

Do note, however, that we test the Assistant tasks this app implements only with OpenAI models and only against the OpenAI API; we thus cannot guarantee that other models and APIs will work.
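Before wiring *integration_openai* to a self-hosted service, it can help to verify that the endpoint actually speaks the OpenAI wire format. A minimal sketch, assuming an Ollama instance on its default port 11434 (Ollama exposes its OpenAI-compatible API under ``/v1``; for LocalAI, adjust the host and port, e.g. its default port 8080):

```shell
# List the models the endpoint advertises in the OpenAI format
curl -s http://localhost:11434/v1/models

# Send a minimal chat completion request in the OpenAI wire format
# (the model name is an example; use one you have pulled locally)
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1:8b", "messages": [{"role": "user", "content": "Say hello"}]}'
```

If both requests return well-formed JSON rather than errors, the endpoint is a reasonable candidate for the integration.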
15 changes: 15 additions & 0 deletions admin_manual/ai/app_assistant.rst
@@ -63,6 +63,20 @@ In order to make use of text processing features in the assistant, you will need
* :ref:`llm2<ai-app-llm2>` - Runs open source AI language models locally on your own server hardware (Customer support available upon request)
* *integration_openai* - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)

These apps currently implement the following Assistant Tasks:

* *Generate text* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
* *Summarize* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
* *Generate headline* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
* *Extract topics* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)

Additionally, *integration_openai* implements the following Assistant Tasks:

* *Context write* (Tested with OpenAI GPT-3.5)
* *Reformulate text* (Tested with OpenAI GPT-3.5)

These tasks may work with other models, but we can give no guarantees.
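For illustration, the task types actually available on an instance can be queried through the core Text Processing OCS API. The endpoint paths and parameter names below are assumptions based on recent Nextcloud versions; consult the client API documentation for your version before relying on them:

```shell
# List the task types currently provided (endpoint path assumed)
curl -s -u admin:password -H "OCS-APIRequest: true" \
  "https://cloud.example.com/ocs/v2.php/textprocessing/tasktypes?format=json"

# Schedule a "Generate text" (free prompt) task (parameter names assumed)
curl -s -u admin:password -H "OCS-APIRequest: true" \
  "https://cloud.example.com/ocs/v2.php/textprocessing/schedule?format=json" \
  -d "input=Write two sentences about winter" \
  -d "type=OCP\\TextProcessing\\FreePromptTaskType" \
  -d "appId=docs_example" -d "identifier=example-1"
```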

Text-To-Image
~~~~~~~~~~~~~

@@ -79,6 +93,7 @@ In order to make use of our special Context Chat feature, offering in-context in

* :ref:`context_chat + context_chat_backend<ai-app-context_chat>` - (Customer support available upon request)

You will also need a text processing provider as specified above (i.e. *llm2* or *integration_openai*).

Configuration
-------------
11 changes: 8 additions & 3 deletions admin_manual/ai/app_llm2.rst
@@ -6,10 +6,13 @@ App: Local large language model (llm2)

The *llm2* app is one of the apps that provide text processing functionality using large language models in Nextcloud and act as a text processing backend for the :ref:`Nextcloud Assistant app<ai-app-assistant>`, the *mail* app and :ref:`other apps making use of the core Text Processing API<tp-consumer-apps>`. The *llm2* app specifically runs only open source models and does so entirely on-premises. Nextcloud can provide customer support upon request; please talk to your account manager about the possibilities.

This app uses `ctransformers <https://github.com/marella/ctransformers>`_ under the hood and is thus compatible with any model in *gguf* format. Output quality will differ depending on which model you use, we recommend the following models:
This app uses `llama.cpp <https://github.com/abetlen/llama-cpp-python>`_ under the hood and is thus compatible with any model in *gguf* format.

* `Llama3 8b Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF>`_ (reasonable quality; fast; good acclaim; multilingual output may not be optimal)
* `Llama3 70B Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3-70B-Instruct-GGUF>`_ (good quality; good acclaim; good multilingual output)
However, we only test with Llama 3.1. Output quality will differ depending on which model you use, and downstream tasks like summarization or Context Chat may not work with other models.
We thus recommend the following models:

* `Llama3.1 8b Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3.1-8B-Instruct-GGUF>`_ (reasonable quality; fast; good acclaim; comes shipped with the app)
* `Llama3.1 70B Instruct <https://huggingface.co/bartowski/Meta-Llama-3.1-70B-Instruct-GGUF>`_ (good quality; good acclaim)
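To try the larger model, it has to be downloaded in *gguf* format from Hugging Face and placed where your llm2 deployment looks for models (check the llm2 documentation for the exact path). A sketch; the quantization file name below is an assumption, so pick one actually listed in the linked repository:

```shell
# Download a quantized gguf build of Llama 3.1 70B Instruct
# (file name is an assumption; large quantizations may be split into parts)
curl -L -O "https://huggingface.co/bartowski/Meta-Llama-3.1-70B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-70B-Instruct-Q4_K_M.gguf"
```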

Multilinguality
---------------
@@ -27,6 +30,8 @@ Llama 3.1 `supports the following languages: <https://huggingface.co/meta-llama/
* Hindi
* Thai

Note that other languages may work as well, but only the languages listed above are guaranteed to work.

Requirements
------------
