
Neural Radiance Fields with Refractions

University of Pennsylvania, CIS 565: GPU Programming and Architecture, Final Project

  • Zhangkaiwen Chu
  • Tested on: Windows 10, R7-5800H @ 3.20GHz 16GB, RTX 3070 Laptop GPU 16310MB (Personal Laptop)

This project implements the light-bending neural radiance field (LB-NeRF), based on the paper "LB-NeRF: Light Bending Neural Radiance Fields for Transparent Medium". It models the refraction effect with an offset field learned by a neural network.

Background

A neural radiance field (NeRF) is a 5D volumetric representation of a scene (3D for position, 2D for view angle). By numerical integration along each light ray, the pixel color can be recovered. However, NeRF assumes light travels in a straight line, so it renders scenes with specular or refractive effects poorly. Since the bending of light in a field with a varying index of refraction is equivalent to applying an offset field to the positions queried from the NeRF, we can use a neural network to learn that offset field, modeling the bending of light without adding a recurrent architecture to the network.
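The numerical integration along a ray can be sketched as standard alpha compositing over sampled densities and colors. This is a minimal numpy sketch of that quadrature (the function name `render_ray` is hypothetical, not from this repository):

```python
import numpy as np

def render_ray(sigmas, rgbs, deltas):
    """Composite N samples along one ray into a pixel color.

    sigmas: (N,) densities, rgbs: (N, 3) colors, deltas: (N,) segment lengths.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance up to sample i: product of (1 - alpha_j) for j < i
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * rgbs).sum(axis=0)
```

An empty ray (all densities zero) composites to black, while a single opaque sample returns its own color, which matches the intuition that transmittance gates each sample's contribution.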

Model Architecture

We first pass the position and direction to the offset network, then add the offset to the position and pass the result to the encoder. The encoded position goes to the density network; its first output is used as the density, and the whole output is concatenated with the encoded direction and passed to the RGB network, whose output is the RGB value.
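The data flow above can be sketched as follows. This is a shape-level sketch only: the single-layer `mlp` stand-ins, layer widths, and function names are illustrative assumptions, not the depths or names used in the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, n_freqs=4):
    """Sinusoidal positional encoding, as in the original NeRF."""
    freqs = 2.0 ** np.arange(n_freqs)
    ang = x[..., None] * freqs                      # (..., 3, n_freqs)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1).reshape(*x.shape[:-1], -1)

def mlp(in_dim, out_dim):
    """Hypothetical single ReLU layer standing in for a deeper MLP."""
    W = rng.normal(scale=0.1, size=(in_dim, out_dim))
    return lambda v: np.maximum(v @ W, 0.0)

enc_dim = 3 * 2 * 4                  # 3 coords, sin+cos, 4 frequencies
offset_net  = mlp(3 + 3, 3)          # (position, direction) -> offset
density_net = mlp(enc_dim, 16)       # encoded bent position -> density + features
rgb_net     = mlp(16 + enc_dim, 3)   # features + encoded direction -> RGB

def lb_nerf(x, d):
    # Bend the ray: query the radiance field at the offset position
    x_bent = x + offset_net(np.concatenate([x, d], axis=-1))
    feat = density_net(encode(x_bent))
    sigma = feat[..., 0]             # first output is the density
    rgb = rgb_net(np.concatenate([feat, encode(d)], axis=-1))
    return sigma, rgb
```

The key design point is that only the input position is perturbed; the rest of the pipeline is a standard NeRF, so the offset network can be trained jointly with ordinary backpropagation.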

Image Quality

We use multi-scale structural similarity (MS-SSIM) as the metric to evaluate image quality.

| Reference | NeRF | LB-NeRF |
| --- | --- | --- |
| MS-SSIM = 1 | MS-SSIM = 0.2875 | MS-SSIM = 0.3001 |

Note that since there is a slight offset in the rendered images, the MS-SSIM is very low. We should compare cropped images to get reasonable results.

| Reference | NeRF | LB-NeRF |
| --- | --- | --- |
| MS-SSIM = 1 | MS-SSIM = 0.5163 | MS-SSIM = 0.5908 |

For the glass ball, the image generated by LB-NeRF has a higher MS-SSIM, which shows that adding the offset network to NeRF improves its ability to render transparent objects.

| Reference | NeRF | LB-NeRF |
| --- | --- | --- |
| MS-SSIM = 1 | MS-SSIM = 0.6298 | MS-SSIM = 0.5537 |

However, for regions that do not include transparent objects, LB-NeRF performs worse. Since the offset field depends strongly on the view angle, it is hard for the network to learn an exactly zero offset in areas unaffected by refraction.
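For intuition about the metric, a global (single-window, single-scale) SSIM can be computed in a few lines; the numbers above use the full windowed, multi-scale variant, so this simplified `ssim_global` is only a hypothetical sanity check, not the evaluation code:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Global SSIM over two images in [0, data_range].

    A simplification of MS-SSIM: one window covering the whole
    image, one scale. Identical images score exactly 1.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Cropping both images to the same region before comparison, as suggested above, simply means slicing the arrays before calling the metric.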

C++ Implementation

For the C++ implementation, I based my code on instant-ngp. I only modified the network architecture, which is located at instant-ngp/include/neural-graphics-primitives/nerf_network.h. However, due to a different choice of encoding, the model does not converge and cannot output reasonable images.

Usage

The PyTorch implementation is located in the code directory, in files ending with .ipynb. INSTANT-NeRF.ipynb is mainly built from scratch, while IB-NeRF.ipynb and NeRF.ipynb are mainly based on pytorch-nerf. The C++ implementation uses the architecture of instant-ngp; please refer to that project's page for details.

Reference

instant-ngp

tiny-cuda-nn

IB-NeRF

pytorch-nerf
