
Generate depth image ? #28

Open
Gardlin opened this issue Feb 27, 2022 · 14 comments

Comments

@Gardlin

Gardlin commented Feb 27, 2022

Hello, thanks for your great work.

However, I don't know how to generate depth images for my own data. Have you provided the code to generate them? How do you generate the depth maps? Do you render them from the mesh.ply in ScanNet?

@weiyithu
Owner

The depth maps are only for evaluation. When you run on your own data, you do not need them.

@Gardlin
Author

Gardlin commented Mar 13, 2022

Thanks for your reply.

By the way, I'm wondering why the sc computed in load_llff_data needs to involve bds.min():

sc = 1. if bd_factor is None else 1./(bds.min() * bd_factor)

I thought it only corresponded to the ratio between the original image resolution and the resolution fed into the NeRF network, and it also seems different from the original NeRF repo.

@weiyithu
Owner

Oh, this is the same as the original NeRF repo: https://github.com/bmild/nerf/blob/18b8aebda6700ed659cb27a0c348b737a5f6ab60/load_llff.py#L257. I think it uses the bounds to normalize the 3D space.
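
For context, in the linked load_llff.py the scale factor is applied to both the camera translations and the bounds, so the whole scene is rescaled and the nearest bound ends up at roughly 1/bd_factor. A minimal sketch of that step, paraphrased from the linked file (the exact lines there may differ slightly):

# Rescale the scene so that bds.min() * sc == 1 / bd_factor
sc = 1. if bd_factor is None else 1./(bds.min() * bd_factor)
poses[:, :3, 3] *= sc   # scale the camera translations
bds *= sc               # scale the near/far bounds by the same factor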

@Gardlin
Author

Gardlin commented Mar 13, 2022

Thanks a lot. Maybe I accidentally changed my code.

By the way, I have no idea what the exact meaning of bds is. Does it represent the min and max depth of the scene? I tried to google for the answer, but I haven't found it. I would be very grateful for your response.

@weiyithu
Owner

Indeed, I have not checked this variable specifically. I guess it means the min and max depth bounds of the scene.
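
In the standard LLFF data layout (which load_llff_data reads), bds typically comes from the last two columns of poses_bounds.npy and holds, per image, the near and far depths of the sparse COLMAP points seen from that view. A minimal sketch, assuming that standard layout (the file path below is illustrative):

import numpy as np

arr = np.load('data/scene/poses_bounds.npy')   # shape (N_images, 17)
poses = arr[:, :15].reshape(-1, 3, 5)          # 3x4 camera-to-world plus an [H, W, focal] column
bds = arr[:, 15:]                              # per-image near/far depth bounds
print(bds.min(), bds.max())                    # rough depth range covered by the scene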

@Pattern6

Hello! I want to know whether you only use 3D coordinates instead of 5D coordinates, because I can't see use_viewdirs in your run.sh.

Your run.sh is as follows:
SCENE=$1
sh colmap.sh data/$SCENE
python run.py --config configs/$SCENE.txt --no_ndc --spherify --lindisp --expname=$SCENE

@weiyithu
Owner

weiyithu commented Apr 1, 2022

Same as NeRF, we use 5D coordinates. The use_viewdirs parameter is set in configs/$SCENE.txt.
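
For reference, a hypothetical excerpt of such a config file (the values below are illustrative LLFF-style defaults, not necessarily what configs/$SCENE.txt in this repo contains):

expname = scene0000_00
datadir = ./data/scene0000_00
dataset_type = llff
factor = 2
N_samples = 64
N_importance = 128
use_viewdirs = True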

@Pattern6

Pattern6 commented Apr 2, 2022

Same as NeRF, we use 5D coordinates. The use_viewdirs parameter is set in configs/$SCENE.txt.

OK, thanks! I want to know what the function "def cal_neighbor_idx(poses, i_train, i_test)" does. I want to test on my own dataset, but I don't know what this function does, so I hope you can reply to me. Thanks again!

@Gardlin
Author

Gardlin commented Apr 2, 2022

I think cal_neighbor_idx is used to find the closest training pose for each testing pose; it is used to supply a depth prior for testing/unseen views.
However, I have tried warping the depth prior from the closest training view to the testing view, which I think provides a better depth prior for most pixels, but the rendering result seems much worse than directly using the closest view's depth prior. Have you done any relevant experiments, or do you have any ideas about this result?

@Pattern6

Pattern6 commented Apr 2, 2022

My results were also bad. But when we run the test, we do not need the depth of the test set. The parameters we need are as follows:
rgbs, disps, depths = render_path(render_poses, hwf, args.chunk, render_kwargs_test, sc=sc,
savedir=testsavedir, render_factor=args.render_factor,
image_list=image_list)

@weiyithu
Owner

weiyithu commented Apr 2, 2022

I think cal_neighbor_idx is used to find the closest training pose for each testing pose; it is used to supply a depth prior for testing/unseen views. However, I have tried warping the depth prior from the closest training view to the testing view, which I think provides a better depth prior for most pixels, but the rendering result seems much worse than directly using the closest view's depth prior. Have you done any relevant experiments, or do you have any ideas about this result?

I think this may be caused by the following: if you want to get the depth prior of view i by projecting view j's depth prior into view i, you may get many holes, since not all pixels in view i have a correspondence in view j. I admit the current solution is just an approximation. You may try other methods, and if you succeed please feel free to let me know~
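
To illustrate where the holes come from, here is a rough forward-warping sketch (purely illustrative, not the repo's code; the function name, the intrinsics K, the 4x4 relative transform T_j_to_i, and the nearest-pixel scatter are all simplifying assumptions):

import numpy as np

def warp_depth_j_to_i(depth_j, K, T_j_to_i, H, W):
    # Back-project every pixel of view j to 3D, move it into view i's camera
    # frame, and re-project. Pixels of view i that receive no projection keep
    # depth 0, i.e. they are the holes mentioned above.
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T   # (3, H*W)
    pts_j = (np.linalg.inv(K) @ pix) * depth_j.reshape(1, -1)              # 3D points in view j
    pts_j = np.concatenate([pts_j, np.ones((1, pts_j.shape[1]))], axis=0)  # homogeneous coords
    pts_i = (T_j_to_i @ pts_j)[:3]                                         # 3D points in view i
    proj = K @ pts_i
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    z = pts_i[2]
    depth_i = np.zeros((H, W), dtype=np.float32)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (z > 0)
    depth_i[v[valid], u[valid]] = z[valid]   # last write wins; a z-buffer would handle occlusions better
    return depth_i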

@weiyithu
Owner

weiyithu commented Apr 2, 2022

Same as NeRF, we use 5D coordinates. The use_viewdirs parameter is set in configs/$SCENE.txt.

OK, thanks! I want to know what the function "def cal_neighbor_idx(poses, i_train, i_test)" does. I want to test on my own dataset, but I don't know what this function does, so I hope you can reply to me. Thanks again!

Since we cannot get a depth prior for an unseen view, we directly use the closest neighboring view's depth prior, and cal_neighbor_idx finds this neighboring view.
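
A minimal sketch of such a neighbor lookup, based only on the description above (this is not the repo's implementation; it assumes poses holds camera-to-world matrices whose last column is the camera center):

import numpy as np

def cal_neighbor_idx_sketch(poses, i_train, i_test):
    # For each test view, return the index of the training view whose camera
    # center is closest in Euclidean distance; its depth prior is then reused.
    centers = poses[:, :3, 3]
    i_train = np.asarray(i_train)
    neighbors = []
    for i in i_test:
        dists = np.linalg.norm(centers[i_train] - centers[i], axis=1)
        neighbors.append(i_train[np.argmin(dists)])
    return np.array(neighbors)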

@Gardlin
Author

Gardlin commented Apr 3, 2022

I think cal_neighbor_idx is used to find the closest training pose for each testing pose; it is used to supply a depth prior for testing/unseen views. However, I have tried warping the depth prior from the closest training view to the testing view, which I think provides a better depth prior for most pixels, but the rendering result seems much worse than directly using the closest view's depth prior. Have you done any relevant experiments, or do you have any ideas about this result?

I think this may be caused by the following: if you want to get the depth prior of view i by projecting view j's depth prior into view i, you may get many holes, since not all pixels in view i have a correspondence in view j. I admit the current solution is just an approximation. You may try other methods, and if you succeed please feel free to let me know~

I used bilinear interpolation to implement the warping, and I think less than 5% of the pixel depths are invalid. I warp the target view's depth prior after "depth_priors = depth_priors * ratio_priors", so I think the warped depth prior is at the same scale as the source view.
[image attachment]

But the test-view rendering results seem to get worse.
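
For reference, a bilinear depth lookup of the kind described above might look like this (a sketch under the assumption that the warp produces continuous source coordinates (u, v) for each target pixel; bilinear_sample is a hypothetical helper, not from the repo):

import numpy as np

def bilinear_sample(depth, u, v):
    # Sample a depth map at continuous pixel coordinates (u, v); samples whose
    # 2x2 neighborhood falls outside the image are returned as 0 (invalid).
    H, W = depth.shape
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = u0 + 1, v0 + 1
    valid = (u0 >= 0) & (v0 >= 0) & (u1 < W) & (v1 < H)
    u0c, v0c = np.clip(u0, 0, W - 1), np.clip(v0, 0, H - 1)
    u1c, v1c = np.clip(u1, 0, W - 1), np.clip(v1, 0, H - 1)
    wu, wv = u - u0, v - v0
    out = ((1 - wu) * (1 - wv) * depth[v0c, u0c] + wu * (1 - wv) * depth[v0c, u1c] +
           (1 - wu) * wv * depth[v1c, u0c] + wu * wv * depth[v1c, u1c])
    return np.where(valid, out, 0.0)

One thing to keep in mind is that bilinearly interpolating depth across object boundaries blends foreground and background depths, so even with few invalid pixels the warped prior can be wrong exactly at depth discontinuities, which might contribute to the worse renderings.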
