What determined the actual size of the reconstructed scene? #1157
-
Hi, I was using calibrated ground-truth camera parameters to train a scene, i.e., I can visualize all the cameras in transforms.json and they have the correct real-world scale. However, I found that the reconstructed scene is not the same size as the real world. For example, measuring from a camera view, the y coordinate from the feet of a standing man to his head ranges from 0.7 m to 1.3 m, i.e., he comes out only about 0.6 m tall, which is super weird. Basically the scale does not match the real world. Is there a way I can find the scale? I'm thinking I could export all the preset cameras' parameters from the reconstructed scene and then fit a similarity transform, but that sounds like a bit of extra work. Is there a more fundamental way of determining the scale?
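(In case the similarity-transform route is needed anyway, here is a minimal sketch of recovering just the scale factor from paired camera centers. The function name `similarity_scale` and the assumption that each reconstructed camera can be paired with its ground-truth counterpart are illustrative, not part of any library API; this is the simple ratio-of-spreads estimate, which is exact when the two point sets really differ by a rigid similarity transform.)

```python
import numpy as np

def similarity_scale(gt_centers: np.ndarray, rec_centers: np.ndarray) -> float:
    """Estimate the isotropic scale s such that
    rec_centers ~ s * R @ gt_centers + t, given N corresponding
    3D camera centers as (N, 3) arrays.

    Rotation R and translation t drop out when comparing spreads
    around the centroids, so only the scale factor remains.
    """
    gt_c = gt_centers - gt_centers.mean(axis=0)
    rec_c = rec_centers - rec_centers.mean(axis=0)
    return float(np.sqrt((rec_c ** 2).sum() / (gt_c ** 2).sum()))

# Hypothetical usage: multiply reconstructed coordinates by
# 1 / similarity_scale(gt, rec) to map them back to metric scale.
```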
Replies: 1 comment 1 reply
-
Hi there, adding
"scale": 1.0,
to the transforms.json file should fix it. The default scaling is 0.33x (which explains the 0.6m human size) for compatibility with the original NeRF paper's synthetic scenes, which otherwise don't fit into the unit cube. Silly reason, I know, but it's too late to change without breaking everyone's datasets now.
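For concreteness, a minimal sketch of applying that fix programmatically (assuming transforms.json sits in the working directory; per the reply above, the top-level "scale" key controls how camera positions are rescaled before training, with 0.33 as the default):

```python
import json

# Read the calibrated transforms.json produced by your pipeline.
with open("transforms.json") as f:
    transforms = json.load(f)

# Override the default 0.33x scaling so camera positions keep
# their real-world metric scale.
transforms["scale"] = 1.0

with open("transforms.json", "w") as f:
    json.dump(transforms, f, indent=2)
```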