-
Hi there, first of all: cool setup! (1) shouldn't be a problem as long as the camera poses are precise enough. I suspect the problem comes more from (2) and (3).

For (2), we recently (yesterday) pushed support for per-camera metadata. You can customize it via the Python bindings:

```python
testbed.nerf.training.dataset.metadata[image_id].camera_distortion = ...
testbed.nerf.training.dataset.metadata[image_id].focal_length = ...
testbed.nerf.training.dataset.metadata[image_id].principal_point = ...
```

More specifics are in …

For (3), this will largely manifest as artifacts when trying to view the scene from the top or bottom (i.e., from outside the convex hull of the training cameras). If you plan for the viewpoint to stay close to the ring of cameras, you should be fine.

Curious to hear whether you find the Python bindings w.r.t. (2) helpful. Cheers!
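As a concrete sketch of feeding per-image intrinsics through these bindings: assuming the metadata fields take the focal length in pixels and the principal point in normalized [0, 1] image coordinates (worth double-checking against the bindings themselves), a small helper converting COLMAP-style pixel-space `PINHOLE` intrinsics might look like this. The function name and the usage loop are hypothetical, not part of instant-ngp:

```python
def to_ngp_intrinsics(width, height, fx, fy, cx, cy):
    """Convert COLMAP PINHOLE intrinsics (all in pixels) to per-camera values.

    Assumption: instant-ngp expects the focal length in pixels and the
    principal point normalized by the image size.
    """
    focal_length = (fx, fy)
    principal_point = (cx / width, cy / height)
    return focal_length, principal_point

# Hypothetical usage with the bindings mentioned above:
# for image_id, cam in enumerate(colmap_cameras):
#     fl, pp = to_ngp_intrinsics(cam.w, cam.h, cam.fx, cam.fy, cam.cx, cam.cy)
#     testbed.nerf.training.dataset.metadata[image_id].focal_length = fl
#     testbed.nerf.training.dataset.metadata[image_id].principal_point = pp
```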
-
In my experience, the "smoke" you're describing is most likely an artifact of underconstrained density, which would disappear with more training data and potentially more points extracted with COLMAP. I've tweaked some parameters in my fork; maybe applying the changes from here could help?
-
My dataset contains about 100 views; the viewpoints form a ring, all looking toward the circle center.
The cameras are like this:
The scene size is about 15 meters.
The first row shows 5 views rendered at the training viewpoints; the second row shows the corresponding 5 training views.
My parameters:
aabb_scale is set to 16, and I adjusted the scale and offset of the camera positions as described in https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md, so the scene is fully covered by the bounding box.
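The rescaling step described above can be sketched as follows. `center_and_scale` is a hypothetical helper (not part of instant-ngp's tooling) that computes a uniform scale and offset mapping the camera ring into the unit cube around (0.5, 0.5, 0.5), in the spirit of the dataset tips linked above:

```python
import numpy as np

def center_and_scale(cam_positions, target_radius=0.5):
    """Hypothetical helper: compute a uniform scale and offset so that
    new_pos = pos * scale + offset places the cameras in the unit cube
    centered at (0.5, 0.5, 0.5)."""
    positions = np.asarray(cam_positions, dtype=np.float64)
    center = positions.mean(axis=0)                           # centroid of the camera ring
    radius = np.linalg.norm(positions - center, axis=1).max() # farthest camera from centroid
    scale = target_radius / radius                            # shrink e.g. a ~15 m scene
    offset = 0.5 - center * scale                             # recenter at (0.5, 0.5, 0.5)
    return scale, offset
```

For a ring of diameter 15 m (radius 7.5 m) centered at the origin, this yields a scale of 1/15 and an offset of (0.5, 0.5, 0.5).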
The people in the center of the scene render fine, but there are quite a few white floating objects and ghosting around them.
I would like to ask why.
There are three possible reasons I can think of: