
KiloNeRF CUDA Extension Documentation or Usage Quickstart #2

Open — cameronosmith opened this issue Jun 9, 2021 · 1 comment
Hi! You produced some awesome work, obviously. For my application I need to render many MLPs with variable-sized inputs, as you did here. Do you have any quickstart instructions or documentation for the kilonerf_cuda extension's usage?

creiser (Owner) commented Jun 16, 2021

Hi! We will work on extending the documentation and making the implementation less entangled with NeRF. In principle, only a couple of steps are required to transform your MLP into a MultiMLP:

(1) Replace the Linear layers with MultiLinear layers. You'll still have a lot of design flexibility and can use the usual layers for activation functions, etc. (compare MultiNetwork).
(2) Give the forward pass of your network the "batch_size_per_network" argument, which encodes the number of points processed by each individual network (compare MultiNetwork).
(3) Query your network as in "query_multi_network": inputs are reordered so that consecutive points are processed by the same network, and the outputs are reordered back to the original input order afterwards.
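The three steps above can be sketched in plain NumPy. This is only an illustration of the idea, not the actual kilonerf_cuda API: the function bodies below are assumptions, and the real extension fuses the grouped matrix multiplies into CUDA kernels instead of looping in Python. Only the names MultiLinear, batch_size_per_network, and query_multi_network come from the comment above.

```python
import numpy as np

def multi_linear_forward(x, weights, biases, batch_size_per_network):
    """Sketch of a MultiLinear forward pass (steps 1 and 2).

    x: (total_points, in_dim) -- points already sorted by network
    weights: (num_networks, in_dim, out_dim)
    biases: (num_networks, out_dim)
    batch_size_per_network: point count per network, in order
    """
    out = np.empty((x.shape[0], weights.shape[2]))
    start = 0
    for i, n in enumerate(batch_size_per_network):
        # Each network applies its own weight matrix to its slice of points.
        out[start:start + n] = x[start:start + n] @ weights[i] + biases[i]
        start += n
    return out

def query_multi_network(x, network_ids, weights, biases):
    """Sketch of step 3: reorder inputs so each network sees a contiguous
    block, run the grouped forward pass, then restore the original order."""
    order = np.argsort(network_ids, kind="stable")           # reorder inputs
    counts = np.bincount(network_ids, minlength=weights.shape[0])
    y_sorted = multi_linear_forward(x[order], weights, biases, counts)
    inverse = np.empty_like(order)
    inverse[order] = np.arange(len(order))                   # "backorder"
    return y_sorted[inverse]
```

The reordering in query_multi_network is what makes the per-network batches contiguous, which is the precondition for the efficient fused kernel; the inverse permutation guarantees the caller never sees the shuffled order.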

The above steps are all that is necessary for an implementation that supports efficient training. I recommend starting with efficient training and using that implementation for rendering as well at first, to check that everything works correctly. The implementation for efficient rendering is more complex and quite domain-specific.

Can you provide some more details? Do you have trouble with the installation? Is your application still NeRF, or an entirely different context?
If it is still quite close to NeRF, I'd recommend modifying this code instead of starting from scratch.
