Based on https://github.com/opencv/opencv-python.
- For now supports only:
  - x64
  - Linux
  - QT when using the `highgui` module

  > [!NOTE]
  > It seems that OpenCV does not support OpenGL when building `highgui` for GTK3.

- Requires cuDNN 8 (OpenCV 4.9.0 does not support cuDNN 9)
Download Nvidia Video Codec SDK 12.1 and unpack it to `infra/image/build/deps/nvidia-codec-sdk`. Move everything from inside the unpacked directory up into the `nvidia-codec-sdk` dir itself.
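For example, a minimal sketch, assuming the SDK archive was downloaded as `Video_Codec_SDK_12.1.14.zip` (the actual file name may differ):

```bash
# Hypothetical archive name; adjust to the file you actually downloaded.
SDK_ZIP="Video_Codec_SDK_12.1.14.zip"

mkdir -p infra/image/build/deps/nvidia-codec-sdk
unzip "${SDK_ZIP}" -d infra/image/build/deps/nvidia-codec-sdk

# The zip unpacks into a versioned subdirectory; flatten it so headers
# and libs sit directly under nvidia-codec-sdk/.
mv infra/image/build/deps/nvidia-codec-sdk/"${SDK_ZIP%.zip}"/* \
   infra/image/build/deps/nvidia-codec-sdk/
rmdir infra/image/build/deps/nvidia-codec-sdk/"${SDK_ZIP%.zip}"
```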
Since the build might take some time and there is a possibility that it will fail, it is more reliable to run it interactively in a container. When it is finished, the build artifacts can be copied to an attached volume to be accessed outside of the build container.
Create a directory to share OpenCV build artifacts between the build container and the host:

```bash
mkdir -p ./infra/opencv-build
```
Run the build container using the `compose.build.yaml` file:

```bash
sudo nerdctl compose -f="./infra/compose.build.yaml" up -d
```
> [!WARNING]
> With this command, the container may not be able to see the NVIDIA GPU for some reason. Be sure to check `sudo nerdctl logs opencv-python-build`.
If it says:

```
WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
   Use the NVIDIA Container Toolkit to start this container with GPU support; see
   https://docs.nvidia.com/datacenter/cloud-native/ .
```
then the GPU has not been detected. In this case, bring the container down:

```bash
sudo nerdctl compose -f="./infra/compose.build.yaml" down
```
and run the container manually (thankfully, the image has already been built, so it won't be rebuilt):

```bash
sudo nerdctl run -dt --gpus="all" --name="opencv-python-build" \
  --volume="${PWD}/infra/image/deps/opencv-build:/home/opencv/build" \
  --volume="${PWD}/infra/image/build/scripts:/home/opencv/scripts" \
  opencv-python-build:latest
```
Add `--build` after `up` if you changed the `Containerfile` and want to rebuild the image.
This will start a build container. The container has 2 attached volumes:

- `./opencv-build:/home/opencv/build` - after the build is finished, you can copy the outputs to `/home/opencv/build` inside the container and they will appear in the `infra/opencv-build` directory. This directory is also excluded from git.
- `./build/scripts:/home/opencv/scripts` - contains scripts that facilitate build activities. E.g. `build-opencv-python.sh` sets build flags and starts the build. You can change this script if you want to compile OpenCV with different flags (see the sketch after this list).
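For orientation only: the `opencv-python` build reads extra CMake options from the `CMAKE_ARGS` environment variable, so a flag-setting script like `build-opencv-python.sh` typically boils down to something like the following. The specific flags shown here are illustrative assumptions, not this repo's actual configuration:

```bash
# Illustrative sketch of a flag-setting build script; the real
# build-opencv-python.sh in this repo may differ.
export ENABLE_CONTRIB=1   # include opencv-contrib modules
export CMAKE_ARGS="-DWITH_CUDA=ON -DWITH_CUDNN=ON -DOPENCV_DNN_CUDA=ON -DWITH_QT=ON"
pip wheel . --verbose     # produces opencv_*.whl in the current directory
```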
After the build container has started, you can log in to it:

```bash
sudo nerdctl exec -it opencv-python-build /bin/bash
```
Inside the build container, run the build with:

```bash
~/scripts/build-opencv-python.sh
```
Copy the built wheel:

```bash
cp opencv_*.whl ~/build/
```
Or you can copy the whole build directory (it can reach 10 GB, so it's better to copy only the wheel, because that is what will actually be used to install OpenCV):

```bash
cp -R ./ ~/build/
```
By default, the Jupyter image uses the Iosevka Nerd, Iosevka Aile, and Iosevka Etoile fonts, so they need to be installed if you want the default config to work well out of the box. Alternatively, you can customize `infra/image/jupyter/settings/overrides/overrides.json5` to use the font of your preference.
When you have built OpenCV with CUDA, build the Jupyter container with OpenCV installed:

```bash
sudo nerdctl compose up --build
```
When the Jupyter container is built, the OpenCV CUDA build will be installed into the conda environment from that wheel. That is, you will be able to run `import cv2`.
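As a quick sanity check (a sketch; it assumes the image is tagged `opencv-projects:latest` as in the run command below), you can confirm that the CUDA-enabled build is the one installed:

```bash
# Run a throwaway container from the built image and ask OpenCV how many
# CUDA devices it can see; a CUDA build with a visible GPU prints >= 1.
sudo nerdctl run --rm --gpus="all" opencv-projects:latest \
  python -c "import cv2; print(cv2.__version__, cv2.cuda.getCudaEnabledDeviceCount())"
```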
After you have built a Jupyter image with CUDA OpenCV, you can start a new container with the following command:
```bash
sudo nerdctl run -d \
  --gpus="all" \
  --name="opencv-projects" \
  \
  --user=root \
  \
  --env NB_USER="opencv" \
  --env CHOWN_HOME="yes" \
  --env TERM="linux" \
  --env DISPLAY="${DISPLAY}" \
  \
  --workdir="/home/opencv" \
  \
  -p="8888:8888" \
  \
  --volume="${PWD}/infra/image/jupyter/settings/overrides:/opt/conda/share/jupyter/lab/settings" \
  --volume="${PWD}/infra/image/jupyter/settings/config:/usr/local/etc/jupyter" \
  --volume="${PWD}/infra/image/jupyter/scripts:/home/opencv/scripts" \
  --volume="${PWD}/src:/home/opencv/workdir" \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix" \
  \
  opencv-projects:latest \
  \
  start-notebook.py \
  --ServerApp.root_dir=/home/opencv/workdir \
  --IdentityProvider.token='opencv'
```
> [!NOTE]
> When running the container with `nerdctl run`, make sure that a container with the same name (`opencv-projects`) does not already exist. If it does, you can run `sudo nerdctl container rm -f opencv-projects` to forcefully remove it.
To pass a webcam, add `--device="/dev/video0"` to the command above.
> [!NOTE]
> Usually, on a Linux system, connected cameras are devices under the `/dev` directory named `videoN`, where `N` is the number of the video device. If you have one camera connected to your computer, the webcam will most probably be accessible as `/dev/video0`.
Additionally, the container user must have permission to access the device file. By default, the container is set up to add the notebook user to the `video` group, which should be sufficient. If the user does not have access to the device file, an error saying `Camera Index out of range` will occur.
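A quick way to check, assuming the device is `/dev/video0` and the notebook user is `opencv`:

```bash
# Inside the container: the device should exist and belong to the "video"
# group, and the notebook user should be listed as a member of that group.
ls -l /dev/video0
id opencv
```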
- TODO: Local X server forwarding:
  - The `DISPLAY` env is used by X clients (GUI programs that want to draw a window on a screen) to determine which X server should be used to draw the GUI window.
  - `/tmp/.X11-unix/Xn` is X's socket file, where `n` is the display number. X clients communicate with the X server via this file, so if a client program in a container needs to display something on an X server outside the container, this directory must be passed as a volume to the container.
  - The local X server must be configured to accept connections from the container user via `xhost +si:localgroup:opencv`?
    - Or should the GID be used: `xhost +si:localgroup:#GID`?
    - The access is checked by the local X server by user IDs.
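A possible way to try this out (an untested sketch following the first variant in the TODO above; it assumes the container user/group is `opencv`):

```bash
# On the host: allow local connections from the "opencv" group to the X server.
xhost +si:localgroup:opencv

# Inside the container: open a highgui window; if DISPLAY and /tmp/.X11-unix
# are forwarded correctly, it should appear on the host's screen.
python -c "import cv2, numpy as np; cv2.imshow('x11-test', np.zeros((240, 320, 3), np.uint8)); cv2.waitKey(0)"
```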