Replies: 3 comments 3 replies
-
For some reason, the pipeline installer in the UI (install from URL) does not install the pipeline's dependencies. You should use the PIPELINES_URLS environment variable instead:
```yaml
pipelines:
  image: ghcr.io/open-webui/pipelines:main
  deploy:
    resources:
      limits:
        memory: 4g
        cpus: '2.0'
      reservations:
        devices:
          - capabilities: [gpu]
  environment:
    - NVIDIA_VISIBLE_DEVICES=all
    - NVIDIA_DRIVER_CAPABILITIES=compute,utility
    # Pipelines listed here are downloaded and set up when the container
    # starts. Note: no quotes around the value; in list-style compose
    # environment entries the quotes would become part of the value.
    - PIPELINES_URLS=https://raw.githubusercontent.com/open-webui/pipelines/main/examples/pipelines/rag/llamaindex_ollama_github_pipeline.py
  volumes:
    - pipelines:/app/pipelines
  ports:
    - "9099:9099"
  extra_hosts:
    - "host.docker.internal:host-gateway"
  restart: always
  runtime: nvidia
```
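The reason the env var route fixes the missing module: when the pipelines server fetches a file from PIPELINES_URLS at startup, it parses the file's docstring frontmatter and pip-installs whatever the `requirements:` line lists before importing the module. A rough sketch of that frontmatter (the exact fields and package list in the real example file are an assumption here):

```python
"""
title: Llama Index Ollama Github Pipeline
author: open-webui
description: Example RAG pipeline using llama-index with Ollama.
requirements: llama-index, llama-index-llms-ollama, llama-index-embeddings-ollama
"""
# The pipelines server reads the `requirements:` line from this frontmatter
# and pip-installs each package before importing the pipeline, which is why
# loading the file via PIPELINES_URLS avoids "No module named 'llama_index'".
```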
-
It might also be necessary to delete your
-
I changed my Dockerfile to download the dependencies that weren't loading, right at the end of the file, and that fixed all of those small issues. I also added the ./data directory at the end of my Dockerfile (sketched below).
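Roughly what I mean, as a minimal sketch (the package list and the /app/data destination are assumptions based on the llama_index error and the example pipeline, not my exact file):

```dockerfile
# Sketch: extend the upstream pipelines image and bake in the packages that
# were failing to install at startup. The package names are assumptions based
# on the "No module named 'llama_index'" error and the example llama-index
# pipeline; adjust them to whatever your pipeline actually imports.
FROM ghcr.io/open-webui/pipelines:main

RUN pip install --no-cache-dir \
    llama-index \
    llama-index-llms-ollama \
    llama-index-embeddings-ollama

# Copy the local ./data directory (the documents the RAG pipeline indexes)
# into the image; the destination path here is an assumption.
COPY ./data /app/data
```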
-
When I try to install a pipeline in open-webui from a GitHub URL, I get the error
`No module named 'llama_index'`.
It's interesting to note that when I add the pipelines Docker image to a docker compose file, I receive the same error during the build and the container will not start. Here is the log output.
If I run the docker compose file with just the ollama and open-webui images, it starts fine. I can then run pipelines as an individual container and it runs. I can add the pipelines container's address as the OpenAI API URL and open-webui sees it.
Any thoughts?
Here is the compose file that does not work.