[Feature] Create a stash-cuda container with 26.0 HW Transcoding #4914
The docker compose stuff can probably be added into the existing docker-compose file, commented out so that the user can adjust as needed. I agree that additional images are necessary. I'm not super confident with the Docker side of things, so it's probably not something I can do myself, but I can assist with integrating it into the builds.
Related: #4300
I'm not sure how many users on consumer-grade cards would need the patch for extra concurrent transcodes pointed out to them, but it's here if we want to add it as part of the Dockerfile (if it isn't already). Note: the `version` key in compose files has been deprecated as well. I made this in case you all want to use it.
Usually, the way we handle GPUs in compose is using profiles (https://docs.docker.com/compose/profiles/). Let me see if I can get my hands on the hardware needed to test this.
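For reference, a minimal sketch of what the profiles approach could look like (the service name and image tag below are placeholders, not the project's actual config):

```yaml
services:
  stash-cuda:
    image: stashapp/stash:latest   # placeholder tag
    profiles: ["cuda"]             # service only starts when the profile is enabled
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

The GPU service is then opt-in: `docker compose --profile cuda up` starts it, while a plain `docker compose up` skips it entirely.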
Interesting way to go about that; I'd never thought of it. I basically just copied the same config I'd use for Torch/Plex.
I'm working towards "universal" hwaccel images in #4300, as mentioned previously. CUDA is a whopping 10 GB image last I checked, and we don't need the CUDA runtime and development libraries just for hwaccel.
Ahoy there! The great thing about a single binary is that it's super easy to run, so it'll be easy to just use a container. I'm using the nvidia/cuda base image, and I like this approach: it's the "official" nVidia solution, so people can go argue with nVidia, and it's just running a binary. 🤷 Maintaining a custom container is tedious and a time vampire, IMHO. Here's what I'm doing right now to run this in Docker with hwaccel (I was hoping to expand a little to get it to work elsewhere, like generate, but I digress):
```dockerfile
FROM nvidia/cuda:12.4.1-base-ubuntu22.04
RUN apt update && apt upgrade -y && apt install -y ca-certificates libvips-tools ffmpeg wget
RUN rm -rf /var/lib/apt/lists/*
COPY ./stash-linux /usr/bin/stash
RUN mkdir -p /usr/local/bin /patched-lib
# keylase's nvidia-patch lifts the driver's concurrent-session limit on consumer cards
RUN wget https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh -O /usr/local/bin/patch.sh
RUN wget https://raw.githubusercontent.com/keylase/nvidia-patch/master/docker-entrypoint.sh -O /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/patch.sh /usr/local/bin/docker-entrypoint.sh /usr/bin/stash
ENV LANG=C.UTF-8
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=video,utility
ENV STASH_CONFIG_FILE=/root/.stash/config.yml
EXPOSE 9999
ENTRYPOINT ["docker-entrypoint.sh", "stash"]
```

And my compose service:

```yaml
services:
  stash:
    build:
      context: .
      dockerfile: Dockerfile
    # this tells Docker you have a GPU; without this, it doesn't work
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
```

Also, a tip around the jankiness of nVidia, Docker, and getting the juggling act to work: you can test that Docker is correctly working with nVidia by using this service:

```yaml
services:
  nvidiatest:
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Then when you run `docker-compose up`:

```
[+] Running 1/0
 ✔ Container nvidia-test-1  Created  0.0s
Attaching to test-1
test-1  | Mon Jun  3 10:54:46 2024
test-1  | +-----------------------------------------------------------------------------------------+
test-1  | | NVIDIA-SMI 555.42.02              Driver Version: 555.42.02      CUDA Version: 12.5     |
test-1  | |-----------------------------------------+------------------------+----------------------+
```
I did attempt this once; anyone willing can try again :P
For now, CUDA users seemingly need to clone the repo, run the make commands, and use a locally built Docker image. That's totally expected in a normal production environment for a sane sysadmin.
However, I propose a second, CUDA-specific Docker image/tag so it can be pulled directly by the lazy/Watchtower users.
Something like stash-cuda:development, stash:v0.26.0-cuda, or stash:development-cuda. I would also propose a docker-compose.yml for the CUDA build accordingly. Something like:
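A rough sketch of what that CUDA compose file could look like, assuming one of the tags proposed above existed (the tag, port, and volume paths here are illustrative placeholders, not a published image):

```yaml
services:
  stash:
    image: stashapp/stash:development-cuda   # hypothetical tag, not yet published
    restart: unless-stopped
    ports:
      - "9999:9999"
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=video,utility
    volumes:
      - ./config:/root/.stash   # placeholder host path for the stash config
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```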
It may also be worth linking to the NVIDIA Container Toolkit.
PS: if I'm just missing something that already provides this, I'd love to be pointed that way. I'm also not sure how much of the nv stuff is needed in the compose file.