This repo contains a PyTorch implementation of the paper Neural 3D Mesh Renderer by Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. It is a port of the original Chainer implementation released by the authors. Currently the API is the same as in the original implementation, with some small additions (e.g. rendering using a general 3x4 camera matrix, lens distortion coefficients, etc.). However, it is possible that it will change in the future.
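To illustrate what rendering with a general 3x4 camera matrix means, here is a minimal sketch in plain PyTorch of projecting homogeneous 3D vertices through such a matrix. The function name and the example matrix are illustrative, not part of this library's API:

```python
import torch

def project_points(vertices, P):
    """Project 3D vertices (N, 3) to 2D using a general 3x4 camera matrix P.
    Illustrative helper, not this library's API."""
    ones = torch.ones(vertices.shape[0], 1)
    homo = torch.cat([vertices, ones], dim=1)    # (N, 4) homogeneous coordinates
    projected = homo @ P.t()                     # (N, 3) in image homogeneous coords
    return projected[:, :2] / projected[:, 2:3]  # perspective divide

# Simple pinhole camera: identity rotation, translation of 2 along z
P = torch.tensor([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 2.0]])
verts = torch.tensor([[0.5, -0.5, 0.0]])
print(project_points(verts, P))  # the point lands at (0.25, -0.25)
```

Because the projection is built from differentiable tensor operations, gradients flow from image-space losses back to the vertices and the camera parameters, which is the property the renderer relies on.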
The library is fully functional and passes all the test cases supplied by the authors of the original library. Detailed documentation will be added in the near future.
Python 2.7+ and PyTorch 0.4.0.
The code has been tested only with PyTorch 0.4.0; there is no guarantee that it is compatible with older versions. The library currently supports both Python 2 and Python 3.
Note: In some newer PyTorch versions you might encounter compilation errors involving AT_ASSERT. In that case, use the version of the code in the at_assert_fix branch. These changes will be merged into master in the near future.
You can install the package by running
pip install neural_renderer_pytorch
Since running install.py requires PyTorch, make sure to install PyTorch before running the above command.
python ./examples/example1.py
python ./examples/example2.py
python ./examples/example3.py
python ./examples/example4.py
Transforming the silhouette of a teapot into a rectangle. The loss function is the difference between the rendered image and the reference image.
Reference image, optimization, and the result.
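The optimization loop behind this example can be sketched in plain PyTorch. The splat_silhouette function below is a hypothetical stand-in for the actual differentiable renderer (it draws each vertex as a Gaussian blob); only the overall structure, a pixel-wise loss between the rendered silhouette and a reference image driving gradient descent on the vertices, mirrors the example:

```python
import torch

def splat_silhouette(vertices, size=32, sigma=0.1):
    # Stand-in differentiable "renderer": one Gaussian splat per vertex
    # accumulated on a pixel grid, clamped to [0, 1].
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                            torch.linspace(-1, 1, size), indexing='ij')
    grid = torch.stack([xs, ys], dim=-1)                       # (H, W, 2)
    d2 = ((grid[None] - vertices[:, None, None, :2]) ** 2).sum(-1)
    return torch.exp(-d2 / sigma).sum(0).clamp(max=1.0)       # (H, W)

torch.manual_seed(0)
# Reference silhouette: a filled rectangle in the image centre
target = torch.zeros(32, 32)
target[8:24, 8:24] = 1.0

vertices = (torch.randn(40, 3) * 0.3).requires_grad_(True)
optimizer = torch.optim.Adam([vertices], lr=0.05)
losses = []
for _ in range(200):
    optimizer.zero_grad()
    loss = ((splat_silhouette(vertices) - target) ** 2).mean()
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

Swapping splat_silhouette for a real mesh renderer is what makes the same loop deform an actual teapot mesh toward the rectangle.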
Matching the color of a teapot with a reference image.
Reference image, result.
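The texture optimization follows the same pattern: render, compare to the reference image, and backpropagate into the texture. The "render" below is a deliberately crude stand-in (the mean colour of a texture tensor), used only to show the structure of the loop; none of these names come from the library:

```python
import torch

torch.manual_seed(0)
texture = torch.rand(1, 3, 16, 16, requires_grad=True)  # hypothetical texture image
reference = torch.tensor([0.2, 0.6, 0.9])               # target RGB colour

optimizer = torch.optim.Adam([texture], lr=0.1)
losses = []
for _ in range(100):
    optimizer.zero_grad()
    rendered = texture.mean(dim=(2, 3)).squeeze(0)      # crude "render": mean colour
    loss = ((rendered - reference) ** 2).sum()
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

With a real differentiable renderer in place of the mean, the gradient of the image loss reaches the individual texels, which is how the teapot's colours are matched to the reference.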
The derivative of images with respect to camera pose can be computed through this renderer. In this example the position of the camera is optimized by gradient descent.
From left to right: reference image, initial state, and optimization process.
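Because the rendered image is differentiable with respect to the camera, the camera position itself can be a parameter of the optimization. A minimal sketch, using a toy pinhole projection (rotation omitted) rather than the library's renderer, recovers a camera translation by gradient descent on a reprojection loss:

```python
import torch

def project(vertices, camera_t, f=1.0):
    # Toy pinhole projection: translate into the camera frame, then divide by depth.
    cam = vertices + camera_t
    return f * cam[:, :2] / cam[:, 2:3]

verts = torch.tensor([[0.0, 0.0, 1.0], [0.5, 0.0, 1.0], [0.0, 0.5, 1.0]])
target_t = torch.tensor([0.3, -0.2, 2.0])     # ground-truth camera translation
target = project(verts, target_t)             # "reference image" (2D projections)

camera_t = torch.tensor([0.0, 0.0, 3.0], requires_grad=True)
optimizer = torch.optim.Adam([camera_t], lr=0.05)
losses = []
for _ in range(300):
    optimizer.zero_grad()
    loss = ((project(verts, camera_t) - target) ** 2).mean()
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

In the actual example the loss is computed on full rendered images rather than projected points, but the gradient path from pixels back to the camera pose is the same idea.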
@InProceedings{kato2018renderer,
title={Neural 3D Mesh Renderer},
author={Kato, Hiroharu and Ushiku, Yoshitaka and Harada, Tatsuya},
booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2018}
}