From 543a418472d3b0a65310b026661f88b7d367d439 Mon Sep 17 00:00:00 2001
From: Cory Zue
Date: Tue, 14 May 2024 09:12:30 +0200
Subject: [PATCH 1/7] add 2024.5.1 release notes

---
 release-notes.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/release-notes.md b/release-notes.md
index 61cdb57..f681cd6 100644
--- a/release-notes.md
+++ b/release-notes.md
@@ -3,6 +3,16 @@ Version History and Release Notes
 
 Releases of [SaaS Pegasus: The Django SaaS Boilerplate](https://www.saaspegasus.com/) are documented here.
 
+## Version 2024.5.1
+
+This is a hotfix release that fixes issues running the [experimental React frontend](./experimental/react-front-end.md)
+in Docker. Thanks Mohamed for reporting this!
+
+- Fix `api-client` path in the frontend docker container.
+- Mount `node_modules` as an anonymous volume in the frontend docker container, so it is not overwritten.
+
+*May 14, 2024*
+
 ## Version 2024.5
 
 This is a major release with several big updates.

From 0cf0d858920a7bd2476aaa464ef30a3a38cf5f6f Mon Sep 17 00:00:00 2001
From: Cory Zue
Date: Tue, 14 May 2024 09:47:21 +0200
Subject: [PATCH 2/7] update release notes

---
 release-notes.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/release-notes.md b/release-notes.md
index f681cd6..904ba1f 100644
--- a/release-notes.md
+++ b/release-notes.md
@@ -8,8 +8,9 @@ Releases of [SaaS Pegasus: The Django SaaS Boilerplate](https://www.saaspegasus.
 This is a hotfix release that fixes issues running the [experimental React frontend](./experimental/react-front-end.md)
 in Docker. Thanks Mohamed for reporting this!
 
-- Fix `api-client` path in the frontend docker container.
+- Fix `api-client` path in the frontend docker container and add to `optimizeDeps` in vite config.
 - Mount `node_modules` as an anonymous volume in the frontend docker container, so it is not overwritten.
+- Automatically create `./frontend/.env` when running `make init` if it doesn't exist.
 
 *May 14, 2024*

From 56c36b122ab3cb7f3e8b10d4ba97a9eaa3039ecc Mon Sep 17 00:00:00 2001
From: Cory Zue
Date: Thu, 16 May 2024 13:28:59 +0200
Subject: [PATCH 3/7] 2024.5.2 release notes

---
 release-notes.md | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/release-notes.md b/release-notes.md
index 904ba1f..234dc6f 100644
--- a/release-notes.md
+++ b/release-notes.md
@@ -3,6 +3,17 @@ Version History and Release Notes
 
 Releases of [SaaS Pegasus: The Django SaaS Boilerplate](https://www.saaspegasus.com/) are documented here.
 
+## Version 2024.5.2
+
+This is a hotfix release that fixes a bug that prevented the team management page
+from loading in certain browsers if you built with a React front end and with translations enabled.
+Thanks Finbar for reporting!
+
+- Added `defer` keyword to various bundle scripts so they are loaded after the JavaScript translation catalog.
+- Updated references to `SiteJS` to run on the `DOMContentLoaded` event to allow for usage of the `defer` tag.
+
+*May 16, 2024*
+
 ## Version 2024.5.1
 
 This is a hotfix release that fixes issues running the [experimental React frontend](./experimental/react-front-end.md)
From 30b76b0e1e1450fd0898f10078324c253124c9e9 Mon Sep 17 00:00:00 2001
From: Cory Zue
Date: Sat, 18 May 2024 14:41:15 +0200
Subject: [PATCH 4/7] fix command name

---
 subscriptions.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/subscriptions.md b/subscriptions.md
index 501cae6..cabc0a8 100644
--- a/subscriptions.md
+++ b/subscriptions.md
@@ -354,7 +354,7 @@ an officially supported workflow.
 
 When changes are made that impact a user's pricing, you will need to notify Stripe of the change.
 This should happen automatically every 24 hours as long as you have enabled celery and celerybeat.
-You can also trigger it manually via a management command `./manage.py sync_subscriptions`.
+You can also trigger it manually via a management command `./manage.py djstripe_sync_models subscription`.
 
 To ensure this command works properly, you must implement two pieces of business logic:

From 34bf25f30d1269fc5aab4691b480054a49a9b70f Mon Sep 17 00:00:00 2001
From: Cory Zue
Date: Sat, 18 May 2024 14:41:56 +0200
Subject: [PATCH 5/7] revert

---
 subscriptions.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/subscriptions.md b/subscriptions.md
index cabc0a8..501cae6 100644
--- a/subscriptions.md
+++ b/subscriptions.md
@@ -354,7 +354,7 @@ an officially supported workflow.
 
 When changes are made that impact a user's pricing, you will need to notify Stripe of the change.
 This should happen automatically every 24 hours as long as you have enabled celery and celerybeat.
-You can also trigger it manually via a management command `./manage.py djstripe_sync_models subscription`.
+You can also trigger it manually via a management command `./manage.py sync_subscriptions`.
 
 To ensure this command works properly, you must implement two pieces of business logic:
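The `sync_subscriptions` command that patches 4 and 5 settle on is meant to run on a schedule. As a point of reference, here is a minimal sketch of how a daily sync could be wired up with Celery and Celery Beat; the task name, module path, and schedule entry are illustrative assumptions, not Pegasus's actual code.

```python
# Illustrative sketch only: the task name and module path below are
# assumptions for demonstration, not the actual Pegasus source.
from celery import shared_task
from django.core.management import call_command


@shared_task
def sync_subscriptions_task():
    # Does the same work as running `./manage.py sync_subscriptions` by hand.
    call_command("sync_subscriptions")


# In settings.py (assuming Celery is configured to read Django settings
# under the CELERY_ namespace, as in a typical Django + Celery setup):
CELERY_BEAT_SCHEDULE = {
    "sync-subscriptions-daily": {
        "task": "apps.subscriptions.tasks.sync_subscriptions_task",
        "schedule": 60 * 60 * 24,  # seconds, i.e. every 24 hours
    },
}
```

With celerybeat running, the schedule entry triggers the task every 24 hours, matching the automatic behavior described in the documentation text above.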
From 3009a9e1f26cb11c28e62e5d46bbf9b1a9117f4c Mon Sep 17 00:00:00 2001
From: Simon Kelly
Date: Mon, 20 May 2024 10:17:28 +0200
Subject: [PATCH 6/7] update docs to reference litellm

---
 ai.md | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/ai.md b/ai.md
index 41600a6..6640b74 100644
--- a/ai.md
+++ b/ai.md
@@ -12,15 +12,15 @@ This section covers how it works and the various supported options.
 You can choose between two options for your LLM chat: OpenAI and LLM (generic).
 The OpenAI option limits you to OpenAI models, but supports streaming and asynchronous API access.
-The generic "LLM" option uses the [llm library](https://github.com/simonw/llm) and can be used with many different
-models---including local ones. However, it does not yet support streaming responses.
+The generic "LLM" option uses the [litellm library](https://docs.litellm.ai/docs/) and can be used with many different
+models---including local ones.
 
 We recommend choosing "OpenAI" unless you know you want to use a different model.
 
 ### Configuring OpenAI
 
 If you're using OpenAI, you need to set `OPENAI_API_KEY` in your environment or settings file (`.env` in development).
-You can also change the model used by setting `OPENAI_MODEL`, which defualts to `"gpt-3.5-turbo"`.
+You can also change the model used by setting `OPENAI_MODEL`, which defaults to `"gpt-3.5-turbo"`.
 See [this page](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key) for help finding your OpenAI API key.
@@ -31,20 +31,19 @@ values in your `settings.py`.
 For example:
 
 ```python
 LLM_MODELS = {
-    "gpt4": {"key": env("OPENAI_API_KEY", default="")},
-    "claude-3-opus": {"key": env("ANTHROPIC_API_KEY", default="")},  # requires llm-claude-3
-    "Meta-Llama-3-8B-Instruct": {},  # requires llm-gpt4all
+    "gpt-3.5-turbo": {"api_key": env("OPENAI_API_KEY", default="")},
+    "gpt4": {"api_key": env("OPENAI_API_KEY", default="")},
+    "claude-3-opus-20240229": {"api_key": env("ANTHROPIC_API_KEY", default="")},
+    "ollama_chat/llama3": {"api_base": env("OLLAMA_API_BASE", default="http://localhost:11434")},  # requires a running ollama instance
 }
-DEFAULT_LLM_MODEL = "gpt4"
+DEFAULT_LLM_MODEL = env("DEFAULT_LLM_MODEL", default="gpt4")
 ```
 
 The chat UI will use whatever is set in `DEFAULT_LLM_MODEL` out-of-the-box, but you can quickly change it
 to another model to try different options.
 
-Any models that you add will need to be installed as [llm plugins](https://llm.datasette.io/en/stable/plugins/index.html).
-You can do this by putting them in your requirements files, [as outlined here](./python.md#adding-or-removing-a-package).
-For example, to use Claude 3 you need to add the [`llm-claude-3` plugin](https://github.com/simonw/llm-claude-3),
-and to use local models like Llama 3, you need [`llm-gpt4all`](https://github.com/simonw/llm-gpt4all).
+For further reading, see the documentation of the [litellm Python API](https://docs.litellm.ai/docs/completion),
+and [litellm providers](https://docs.litellm.ai/docs/providers).
 For further reading, see the documentation of the [llm Python API](https://llm.datasette.io/en/stable/python-api.html),
 and [llm generally](https://llm.datasette.io/en/stable/index.html).

From 507f5c49bec5c081e6c8baf1f060a1121349fcd5 Mon Sep 17 00:00:00 2001
From: Simon Kelly
Date: Mon, 20 May 2024 10:17:37 +0200
Subject: [PATCH 7/7] add docs for setting up ollama

---
 ai.md | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/ai.md b/ai.md
index 6640b74..2ef0b8d 100644
--- a/ai.md
+++ b/ai.md
@@ -46,8 +46,25 @@ to another model to try different options.
 
 For further reading, see the documentation of the [litellm Python API](https://docs.litellm.ai/docs/completion),
 and [litellm providers](https://docs.litellm.ai/docs/providers).
-For further reading, see the documentation of the [llm Python API](https://llm.datasette.io/en/stable/python-api.html),
-and [llm generally](https://llm.datasette.io/en/stable/index.html).
+### Running open source LLMs
+To run models like Mixtral or Llama3, you will need to run an [Ollama](https://ollama.com/) server in a separate process.
+
+1. [Download](https://ollama.com/download) and run Ollama, or use the Docker [image](https://hub.docker.com/r/ollama/ollama).
+2. Download the model you want to run:
+   ```shell
+   ollama pull llama3
+   # or with docker
+   docker exec -it ollama ollama pull llama3
+   ```
+   See the [documentation](https://docs.litellm.ai/docs/providers/ollama) for the list of supported models.
+3. Update your Django settings to point to the Ollama server. For example:
+   ```python
+   LLM_MODELS = {
+       "ollama_chat/llama3": {"api_base": "http://localhost:11434"},
+   }
+   DEFAULT_LLM_MODEL = "ollama_chat/llama3"
+   ```
+4. Restart your Django server.
 
 ### The Chat UI
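As background for the litellm configuration shown in the two patches above: each entry in `LLM_MODELS` corresponds to keyword arguments of litellm's `completion` function. Here is a minimal, self-contained sketch of calling a locally served model through litellm; the prompt text is arbitrary, and the exact glue code Pegasus generates may differ.

```python
# Minimal litellm sketch. This only demonstrates the litellm completion API
# referenced in the docs above; Pegasus's own chat views may wire it differently.
from litellm import completion

response = completion(
    model="ollama_chat/llama3",  # a key from LLM_MODELS
    api_base="http://localhost:11434",  # the api_base configured for that model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

Swapping `model` (along with the matching credentials, e.g. `api_key` for hosted providers) is all it takes to try a different backend, which is what the `DEFAULT_LLM_MODEL` setting controls.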