This project is used to encode behavior videos and, depending on the user's request, to compress them while ensuring they are formatted for correct display across many devices. This may include common image preprocessing steps, such as gamma encoding, that are necessary for correct display but must be applied post-hoc for our behavior videos.
It will attempt to compress videos so that the results:
- Retain the majority of the detail of the input video
- Take up as little space as possible for a target amount of visual detail
- Are in a format that can be widely viewed across devices, and streamed by browsers
- Have pixel data in a color space that most players can properly display
This video compression is often lossy, and the original videos are not kept, so this library will attempt to produce the highest-quality video for a target compression ratio. The speed of this compression is strictly secondary to its quality, as measured by the visual detail retained and the compression ratio. See the discussion of encoders and presets below for more details.
Additionally, this package should provide an easy-to-use interface that:
- Presents users with a curated set of compression settings, which have been rigorously tested for visual quality using perception-based metrics like VMAF
- Allows users to provide their own compression settings, if they have specific requirements (a sketch of both options follows this list)
- Does not sacrifice the visual fidelity of videos in order to decrease encoding time
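For illustration, here is a minimal sketch of both options, assuming the job-settings JSON shown in the Docker example below maps onto a compression-request model. The class and field names here are assumptions, so check the package source for the exact API:

```python
# A hedged sketch, not a confirmed API: CompressionRequest and the
# user_ffmpeg_* fields are assumed names inferred from the
# --job-settings JSON used in the Docker example below.
from aind_behavior_video_transformation import CompressionRequest

# Option 1: a curated, VMAF-tested preset, selected by its enum value.
curated = CompressionRequest(compression_enum="gamma fix colorspace")

# Option 2: user-supplied ffmpeg output arguments for specific requirements.
custom = CompressionRequest(
    compression_enum="user defined",  # assumed value for custom settings
    user_ffmpeg_output_options="-c:v libx264 -preset veryslow -crf 18",
)
```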
A surprising fact is that video encoders implementing the same algorithm, but written for different compute resources, do not have the same visual performance: for a given compression ratio, or similar settings, they do not retain the same amount of visual detail. This is also true for different presets of the same encoder and compute resource, even if the other settings are identical. For example, the presets -preset fast and -preset veryslow of the encoder libx264 produce videos with the same compression ratio, but differing visual quality.
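For instance, the two ffmpeg invocations below differ only in their preset; at the same target bitrate (and therefore roughly the same compression ratio), the veryslow output will retain more visual detail. The file names and bitrate are illustrative:
ffmpeg -i raw_behavior.avi -c:v libx264 -preset fast -b:v 2M fast.mp4
ffmpeg -i raw_behavior.avi -c:v libx264 -preset veryslow -b:v 2M veryslow.mp4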
This can be seen in the plot below, where the GPU and CPU encoders retain different amounts of visual detail, as assessed with the visual perception-based metric VMAF. Also note the difference between presets for the same encoder and compute resource: CPU Fast and CPU Slow.
This figure shows that for compression ratios greater than 100, it often makes sense to take your time and use a slow preset of a CPU-based encoder, to retain as much visual information as possible for a given amount of compression.
While it may be tempting to select a faster preset, or a faster compute resource like the GPU, for the dramatic speedups shown below, doing so will degrade the quality of the resulting video.
Because the outputs of this package are permanent video artifacts, the compression is lossy, and the intent is to delete the originals, taking the CPU time to produce the highest-quality video possible may well be worth it.
- The BehaviorVideoJob.run_job method in the transform_videos module should be the primary method to call for processing video files (a usage sketch follows this list).
- On a merge to main, this package will be published as a singularity container, which can easily be run on a SLURM cluster.
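As referenced above, a minimal sketch of calling BehaviorVideoJob.run_job directly, assuming a pydantic-style settings model whose fields mirror the --job-settings JSON used in the Docker example below; the settings class name BehaviorJobSettings is an assumption, so verify the import against the package source:

```python
# A hedged sketch: BehaviorJobSettings is an assumed name for the settings
# model; its fields mirror the --job-settings JSON in the Docker example.
from pathlib import Path

from aind_behavior_video_transformation import (
    BehaviorJobSettings,  # assumed name, verify against the package
    BehaviorVideoJob,
)

settings = BehaviorJobSettings(
    input_source=Path("/home/local_videos/input_source"),
    output_directory=Path("/home/local_videos/output_directory"),
    compression_requested={"compression_enum": "gamma fix colorspace"},
    parallel_compression=True,
)
job = BehaviorVideoJob(job_settings=settings)
job.run_job()
```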
In the same directory as the Dockerfile, run
docker build -t aind-behavior-video-transformation-local .
Now a docker container can be run. You may need to change the mount locations and file permissions:
docker run \
  -v /home/local_videos/input_source:/mnt/input_source \
  -v /home/local_videos/output_directory:/mnt/output_directory \
  aind-behavior-video-transformation-local \
  python -m aind_behavior_video_transformation.etl --job-settings '{"compression_requested": {"compression_enum": "gamma fix colorspace"}, "parallel_compression": true, "input_source": "/mnt/input_source", "output_directory": "/mnt/output_directory"}'
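For the SLURM use case mentioned above, an equivalent singularity invocation might look like the following sketch; docker://<registry>/aind-behavior-video-transformation:latest is a placeholder, so substitute the actual published image reference:
singularity exec \
  -B /home/local_videos/input_source:/mnt/input_source \
  -B /home/local_videos/output_directory:/mnt/output_directory \
  docker://<registry>/aind-behavior-video-transformation:latest \
  python -m aind_behavior_video_transformation.etl --job-settings '{"compression_requested": {"compression_enum": "gamma fix colorspace"}, "parallel_compression": true, "input_source": "/mnt/input_source", "output_directory": "/mnt/output_directory"}'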
To develop the code, run
pip install -e .[dev]
There are several libraries used to run linters, check documentation, and run tests.
- Please test your changes using the coverage library, which will run the tests and log a coverage report:
coverage run -m unittest discover && coverage report
- Use interrogate to check that modules, methods, etc. have been documented thoroughly:
interrogate .
- Use flake8 to check that code is up to standards (no unused imports, etc.):
flake8 .
- Use black to automatically format the code to PEP 8 standards:
black .
- Use isort to automatically sort import statements:
isort .
For internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily use Angular style for commit messages. Roughly, they should follow the pattern:
<type>(<scope>): <short summary>
where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:
- build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)
- ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)
- docs: Documentation only changes
- feat: A new feature
- fix: A bugfix
- perf: A code change that improves performance
- refactor: A code change that neither fixes a bug nor adds a feature
- test: Adding missing tests or correcting existing tests
The table below, from semantic-release, shows which commit message gets you which release type when semantic-release runs (using the default configuration):

Commit message | Release type |
---|---|
fix(pencil): stop graphite breaking when too much pressure applied | Patch Fix Release |
feat(pencil): add 'graphiteWidth' option | Minor Feature Release |
perf(pencil): remove graphiteWidth option BREAKING CHANGE: The graphiteWidth option has been removed. The default graphite width of 10mm is always used for performance reasons. | Major Breaking Release (Note that the BREAKING CHANGE: token must be in the footer of the commit) |
To generate the rst source files for documentation, run
sphinx-apidoc -o doc_template/source/ src
Then to create the documentation HTML files, run
sphinx-build -b html doc_template/source/ doc_template/build/html
More info on sphinx installation can be found here.