build: adds Dockerfile and updates README #18

Merged · 1 commit · Nov 28, 2024
build: adds Dockerfile and updates README
jtyoung84 committed Nov 12, 2024
commit f0b4edb245f05002b8a7cae16463257631bb3435
10 changes: 10 additions & 0 deletions Dockerfile
@@ -0,0 +1,10 @@
+FROM croncorp/python-ffmpeg:3.11.4-slim-bullseye
+
+WORKDIR /app
+
+# Pip install
+ADD src ./src
+ADD pyproject.toml .
+ADD setup.py .
+
+RUN pip install . --no-cache-dir
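
A quick way to sanity-check the image defined above is to build it and import the installed package (a sketch, not part of this PR; the import name is inferred from the `python -m aind_behavior_video_transformation.etl` entry point used in the README changes below):

```bash
# Build the image with the same tag the README uses, then confirm that
# `pip install .` inside the image actually made the package importable.
docker build -t aind-behavior-video-transformation-local .
docker run --rm aind-behavior-video-transformation-local \
  python -c "import aind_behavior_video_transformation; print('package installed')"
```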
22 changes: 16 additions & 6 deletions README.md
@@ -44,12 +44,6 @@ Additionally, this package should provide an easy to use interface that:

 * Sacrifice the visual fidelity of videos in order to decrease encoding time.

-## Usage
-- The BehaviorVideoJob.run_job method in the transform_videos should be the
-primary method to call for processing video files.
-- On a merge to main, this package will be published as a singularity
-container, which can easily be run on a SLURM cluster.
-
 ## Brief benchmarks on video compression with CPU-based encoders and GPU-based encoders

 A surprising fact is that video encoders implementing the same algorithm, but
@@ -84,6 +78,22 @@ Because the output of this package are permanent video artifacts, the
 compression is lossy, and the intent is to delete the original, taking the CPU
 time to produce the highest quality video possible might well be worth it.

+## Usage
+- The BehaviorVideoJob.run_job method in the transform_videos should be the
+primary method to call for processing video files.
+- On a merge to main, this package will be published as a singularity
+container, which can easily be run on a SLURM cluster.
+
+## Docker build for local testing
+In the same directory as the Dockerfile, run
+```bash
+docker build -t aind-behavior-video-transformation-local .
+```
+
+Now a docker container can be run. You may need to change the mount locations and file permissions:
+```bash
+docker run -v /home/local_videos/input_source:/mnt/input_source -v /home/local_videos/output_directory:/mnt/output_directory aind-behavior-video-transformation-local python -m aind_behavior_video_transformation.etl --job-settings '{"compression_requested": {"compression_enum": "gamma fix colorspace"}, "parallel_compression": true, "input_source": "/mnt/input_source", "output_directory": "/mnt/output_directory"}'
+```
+
 ## Development
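
The README changes above note that mount locations and file permissions may need adjusting. A sketch of one way to do that, not part of this PR: keep the job-settings JSON (copied verbatim from the README command) in a file so it is easier to edit, and pass `--user` so files written to the output mount are owned by the invoking host user rather than root. The host paths and the `--user` flag are illustrative assumptions.

```bash
# Same job settings as the README's docker run example, just written to a
# file for readability; the container still receives them as a JSON string.
cat > job_settings.json <<'EOF'
{
  "compression_requested": {"compression_enum": "gamma fix colorspace"},
  "parallel_compression": true,
  "input_source": "/mnt/input_source",
  "output_directory": "/mnt/output_directory"
}
EOF

# Run as the host user so files written to /home/local_videos/output_directory
# are not root-owned; adjust the -v mounts to your local paths.
docker run \
  --user "$(id -u):$(id -g)" \
  -v /home/local_videos/input_source:/mnt/input_source \
  -v /home/local_videos/output_directory:/mnt/output_directory \
  aind-behavior-video-transformation-local \
  python -m aind_behavior_video_transformation.etl \
  --job-settings "$(cat job_settings.json)"
```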