- August 2023 (image compression) - Released PyTorch implementation of MS-ILLM
- April 2023 (video compression) - Released PyTorch implementation of VCT
- November 2022 (image compression) - Released Bits-Back coding with diffusion models!
NeuralCompression is a Python repository dedicated to research of neural networks that compress data. The repository includes tools such as JAX-based entropy coders, image compression models, video compression models, and metrics for image and video evaluation.
NeuralCompression is alpha software. The project is under active development. The API will change as we make releases, potentially breaking backwards compatibility.
NeuralCompression is a project currently under development. You can install the repository in development mode.
First, install PyTorch according to the directions from the PyTorch website. Then, you should be able to run
pip install neuralcompression
to get the latest version from PyPI.
Alternatively, clone the repository, navigate to the NeuralCompression root directory, and install the package in development mode by running:
pip install --editable ".[tests]"
If you are not interested in matching the test environment, then you can just run:
pip install --editable .
We use a 2-tier repository structure. The `neuralcompression` package contains a core set of tools for doing neural compression research. Code committed to the core package requires stricter linting, high code quality, and rigorous review. The `projects` folder contains code for reproducing papers and training baselines. Code in this folder is not linted aggressively, we don't enforce type annotations, and it's okay to omit unit tests.

The 2-tier structure enables rapid iteration and reproduction via code in `projects` that is built on a backbone of high-quality code in `neuralcompression`.
- `neuralcompression` - base package
  - `data` - PyTorch data loaders for various data sets
  - `distributions` - extensions of probability models for compression
  - `functional` - methods for image warping, information cost, flop counting, etc.
  - `layers` - building blocks for compression models
  - `metrics` - `torchmetrics` classes for assessing model performance
  - `models` - complete compression models
  - `optim` - useful optimization utilities
- `projects` - recipes and code for reproducing papers
  - `bits_back_diffusion` - code for bits-back coding with diffusion models
  - `deep_video_compression` - DVC (Lu et al., 2019), deprecated
  - `illm` - MS-ILLM (Muckley et al., 2023)
  - `jax_entropy_coders` - implementations of arithmetic coding and ANS in JAX
  - `torch_vct` - VCT (Mentzer et al., 2022)
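Since `jax_entropy_coders` implements ANS, here is a brief illustration of the core range-ANS (rANS) encode/decode transform in plain Python. This is a hedged sketch of the general technique only; it is not the API of the JAX coders in this repository, and the constants (12-bit frequencies, 16-bit renormalization words) are choices made for the example.

```python
# Minimal rANS (range asymmetric numeral systems) coder sketch in plain
# Python, illustrating the technique behind ANS entropy coders in general.
# NOT the API of projects/jax_entropy_coders.

PRECISION = 12                 # symbol frequencies sum to 2**PRECISION
RANS_L = 1 << 16               # lower bound of the state interval
MASK = (1 << PRECISION) - 1

def cumulative(freqs):
    """Cumulative frequency table: cdf[i] = sum(freqs[:i])."""
    cdf = [0]
    for f in freqs:
        cdf.append(cdf[-1] + f)
    assert cdf[-1] == 1 << PRECISION, "frequencies must sum to 2**PRECISION"
    return cdf

def rans_encode(symbols, freqs):
    cdf = cumulative(freqs)
    state, stream = RANS_L, []
    for s in reversed(symbols):          # rANS encodes in reverse order
        f = freqs[s]
        # renormalize: flush 16-bit words so the next step cannot overflow
        while state >= ((RANS_L >> PRECISION) << 16) * f:
            stream.append(state & 0xFFFF)
            state >>= 16
        # push symbol s onto the state
        state = ((state // f) << PRECISION) + (state % f) + cdf[s]
    return state, stream

def rans_decode(state, stream, freqs, n):
    cdf = cumulative(freqs)
    stream, out = list(stream), []
    for _ in range(n):
        slot = state & MASK
        # locate the symbol whose cdf interval contains slot
        s = next(i for i in range(len(freqs)) if cdf[i] <= slot < cdf[i + 1])
        # pop the symbol back off the state
        state = freqs[s] * (state >> PRECISION) + slot - cdf[s]
        while state < RANS_L and stream:  # refill from the word stream
            state = (state << 16) | stream.pop()
        out.append(s)
    return out
```

With, say, `freqs = [2048, 1024, 512, 512]`, `rans_decode(*rans_encode(msg, freqs), freqs, len(msg))` recovers `msg` exactly; the reversed encoding order is what lets the decoder emit symbols first-to-last.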
This repository also features interactive notebooks detailing different parts of the package, which can be found in the `tutorials` directory. Existing tutorials are:

- Walkthrough of the `neuralcompression` flop counter (view on Colab).
- Using `neuralcompression.metrics` and `torchmetrics` to calculate rate-distortion curves (view on Colab).
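As background for the rate-distortion tutorial, the two axes of such a curve can be computed directly. The following is a plain-NumPy sketch of the concept, not the `neuralcompression.metrics` API: rate as bits per pixel from a model's latent likelihoods, and distortion as PSNR; the image and likelihood values are made up for illustration.

```python
import numpy as np

def bits_per_pixel(likelihoods, num_pixels):
    """Rate: total information content -sum(log2 p), averaged per pixel."""
    return float(-np.sum(np.log2(likelihoods)) / num_pixels)

def psnr(x, x_hat, max_val=1.0):
    """Distortion: peak signal-to-noise ratio in dB."""
    mse = float(np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2))
    return 10.0 * np.log10(max_val ** 2 / mse)

# One (rate, distortion) point for a pretend 8x8 grayscale image:
x = np.linspace(0.0, 1.0, 64).reshape(8, 8)
x_hat = x + 0.01               # pretend reconstruction with small error
p = np.full(64, 0.5)           # pretend latent likelihoods, 1 bit each
point = (bits_per_pixel(p, x.size), psnr(x, x_hat))
```

Sweeping a model's rate-distortion trade-off (e.g. its quality setting) and plotting these points yields the curve the tutorial computes with `torchmetrics` classes.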
Please read our CONTRIBUTING guide and our CODE_OF_CONDUCT prior to submitting a pull request.
We test all pull requests. We rely on this for reviews, so please make sure any new code is tested. Tests for `neuralcompression` go in the `tests` folder in the root of the repository. Tests for individual projects go in those projects' own `tests` folders.
We use `black` for formatting, `isort` for import sorting, `flake8` for linting, and `mypy` for type checking.
NeuralCompression is MIT licensed, as found in the LICENSE file.
Model weights released with NeuralCompression are CC-BY-NC 4.0 licensed, as found in the WEIGHTS_LICENSE file.
Some of the code may come from other repositories and include other licenses. Please read all code files carefully for details.
If you use code for a paper reimplementation, please cite the original paper. If you would like to also cite the repository, you can use:
@misc{muckley2021neuralcompression,
    author={Matthew Muckley and Jordan Juravsky and Daniel Severo and Mannat Singh and Quentin Duval and Karen Ullrich},
    title={NeuralCompression},
    howpublished={\url{https://github.com/facebookresearch/NeuralCompression}},
    year={2021}
}