diff --git a/DOCUMENTATION.md b/DOCUMENTATION.md
index b8eb6273..857a6d79 100644
--- a/DOCUMENTATION.md
+++ b/DOCUMENTATION.md
@@ -1,8 +1,8 @@
 # JSON Configuration Documentation
 
-This documentation so far only contains the JSON parameters for configuring each component of __tiny-cuda-nn__.
+This document lists the JSON parameters of all components of __tiny-cuda-nn__.
 
-For each component, we provide a sample configuration with each parameter's default value.
+For each component, we provide a sample configuration that lists each parameter's default value.
 
 ## Networks
 
@@ -27,7 +27,7 @@ The following activation functions are supported:
 
 ### Fully Fused MLP
 
-Lightning fast implementation of small multi-layer perceptrons (MLPs). Restricted to hidden layers of size 32, 64, or 128 and outputs of 16 or fewer dimensions.
+Lightning fast implementation of small multi-layer perceptrons (MLPs). Restricted to hidden layers of size 32, 64, 128, or 256.
 
 ```json5
 {
@@ -236,7 +236,7 @@ Relative L2 loss normalized by the network prediction [[Lehtinen et al. 2018]](h
 
 ### Relative L2 Luminance
 
-Same as above, but normalized by the luminance of the network prediction. Only applicable when network prediction is RGB. Used in Neural Radiance Caching [Müller et al. 2021] (to appear).
+Same as above, but normalized by the luminance of the network prediction. Only applicable when network prediction is RGB. Used in Neural Radiance Caching [[Müller et al. 2021]](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.pdf).
 
 ```json5
 {
diff --git a/README.md b/README.md
index 40e4bc04..9f08b26e 100644
--- a/README.md
+++ b/README.md
@@ -37,7 +37,7 @@ This framework powers the following publications:
 > [ [Paper](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.pdf) ] [ [GTC talk](https://gtc21.event.nvidia.com/media/Fully%20Fused%20Neural%20Network%20for%20Radiance%20Caching%20in%20Real%20Time%20Rendering%20%5BE31307%5D/1_liqy6k1c) ] [ [Video](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.mp4) ] [ [Interactive Results Viewer](https://tom94.net/data/publications/mueller21realtime/interactive-viewer/) ] [ [BibTeX](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.bib) ]
 
 > __Extracting Triangular 3D Models, Materials, and Lighting From Images__
-> [Jakob Munkberg](https://research.nvidia.com/person/jacob-munkberg), [Jon Hasselgren](https://research.nvidia.com/person/jon-hasselgren), [Tianchang Shen](http://www.cs.toronto.edu/~shenti11/), [Jun Gao](http://www.cs.toronto.edu/~jungao/), [Wenzheng Chen](http://www.cs.toronto.edu/~wenzheng/), [Alex Evans](https://research.nvidia.com/person/alex-evans), [Thomas Müller](https://tom94.net), [Sanja Fidler](https://www.cs.toronto.edu/~fidler/)
+> [Jacob Munkberg](https://research.nvidia.com/person/jacob-munkberg), [Jon Hasselgren](https://research.nvidia.com/person/jon-hasselgren), [Tianchang Shen](http://www.cs.toronto.edu/~shenti11/), [Jun Gao](http://www.cs.toronto.edu/~jungao/), [Wenzheng Chen](http://www.cs.toronto.edu/~wenzheng/), [Alex Evans](https://research.nvidia.com/person/alex-evans), [Thomas Müller](https://tom94.net), [Sanja Fidler](https://www.cs.toronto.edu/~fidler/)
 > _[arXiv:2111.12503 [cs.CV]](https://arxiv.org/abs/2111.12503)_, Nov 2021
 >
 > [ [Website](https://nvlabs.github.io/nvdiffrec/) ] [ [Paper](https://nvlabs.github.io/nvdiffrec/assets/paper.pdf) ] [ [Video](https://nvlabs.github.io/nvdiffrec/assets/video.mp4) ] [ [BibTeX](https://nvlabs.github.io/nvdiffrec/assets/bib.txt) ]
@@ -125,26 +125,24 @@ producing an image every 1000 training steps. Each 1000 steps should take roughl
 Begin by cloning this repository and all its submodules using the following command:
 
 ```sh
-> git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
-> cd tiny-cuda-nn
-tiny-cuda-nn>
+$ git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
+$ cd tiny-cuda-nn
 ```
 
 Then, use CMake to generate build files:
 
 ```sh
-tiny-cuda-nn> mkdir build
-tiny-cuda-nn> cd build
-tiny-cuda-nn/build> cmake ..
+tiny-cuda-nn$ mkdir build
+tiny-cuda-nn$ cd build
+tiny-cuda-nn/build$ cmake ..
 ```
 
-Then, depending on your operating system
-
-On Windows, open `tiny-cuda-nn/build/tiny-cuda-nn.sln` in Visual Studio and click the "Build" button.
-On Linux you can compile with
-```sh
-tiny-cuda-nn/build> make -j
-```
+The last step differs by operating system.
+- Windows: open `tiny-cuda-nn/build/tiny-cuda-nn.sln` in Visual Studio and click the "Build" button.
+- Linux: run the command
+  ```sh
+  tiny-cuda-nn/build$ make -j
+  ```
 
 ## Components
 