--no-crop adds some noise or issues to the bottom of the images. #5
Hi @gateway, I tested the Sample.jpg but got an all-black result.
Hmm, so with equirectangular images this will cause noise due to the lack of data to extract, given the nature of how equirectangular projection works?

There has been a lot of talk in the 360 virtual tour market about using depth maps to create some unique points of view not easily obtained from a single 360 shot. See https://krpano.com/examples/?depthmap

Here is a shared Google folder with 3 images I took for a tour: https://drive.google.com/drive/folders/1bB5TooUagnQrVUYMBoWVrFiuas0WnHoT?usp=sharing and here is the tour showcasing my buddy's artwork these belong to: https://ibareitall.com/360/tonic-arts/

Maybe this is not going to work, but I have been researching the best way to create these depth maps and someone pointed me to your paper.
Hi @fuenwang, I think there's something wrong with my GPU card.
@gateway Because the vertical FoV of the depth sensors in the Matterport camera cannot reach 180 degrees, the upper/lower areas are invalid pixels. During training, we don't calculate the loss on these regions, so this area is totally undefined for our model, which eventually predicts noisy values there. However, in a virtual environment like PanoSUNCG, we can easily create a spherical depth sensor with a vertical FoV of 180 degrees, so this problem won't happen in that case.

@dex1990 I have uploaded the point cloud visualization tool in the tools/ folder; you can give it a try.
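To illustrate the masking described above, here is a minimal NumPy sketch of how a training loss can skip the rows of an equirectangular depth map that fall outside the sensor's vertical FoV. The function name, the 150-degree default FoV, and the array shapes are assumptions for illustration, not the repository's actual training code.

```python
import numpy as np

def masked_depth_loss(pred, gt, v_fov_deg=150.0):
    """L1 depth loss over an (H, W) equirectangular depth map,
    ignoring the top/bottom rows outside the sensor's vertical FoV.

    v_fov_deg is a hypothetical sensor FoV; rows outside it (and
    pixels with no ground-truth depth) carry no supervision, so the
    model is free to predict arbitrary values there.
    """
    h, w = gt.shape
    # Rows to drop at each pole: fraction of the 180-degree span
    # that the sensor does not cover, split between top and bottom.
    margin = int(round(h * (1.0 - v_fov_deg / 180.0) / 2.0))
    mask = np.zeros((h, w), dtype=bool)
    mask[margin:h - margin, :] = True
    # Also skip pixels where the sensor returned no depth at all.
    mask &= gt > 0
    return np.abs(pred - gt)[mask].mean()
```

Because no loss flows through the masked rows, noise in the poles is expected at inference time, which matches the artifacts shown in the screenshots below.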
Thanks for your feedback. I'm just wondering whether these are the right options for equirectangular images that can then be used as depth maps for 360 virtual tours. Cropping the top and bottom (removing the noise) may have some effect on this, or make it look odd at the top (above the viewer and below the viewer).

Question: I'm not 100% sure how the script works, and the math is above my head, but if you created a virtual 360 camera and wrapped it in the equirectangular projection (a 360 view), couldn't the code estimate the tops and bottoms a bit better? I realize you have a model trained on the Matterport dataset, but what about other datasets, or footage taken with simple 360 video cameras such as the Insta360 One R, One X, or others?
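As a stopgap for the tour use case, the noisy polar bands can simply be zeroed (or cropped) in post-processing before handing the depth map to a viewer. This is a hypothetical sketch, not part of the repository; the function name and the 10% band fraction are assumptions.

```python
import numpy as np

def crop_polar_bands(depth, crop_frac=0.1):
    """Zero out the top and bottom bands of an (H, W) equirectangular
    depth map, where the model's output is undefined and noisy.

    crop_frac is the fraction of the image height to blank at each
    pole; a viewer like krpano can then treat zero as "no depth".
    """
    h = depth.shape[0]
    band = int(round(h * crop_frac))
    out = depth.copy()          # leave the input array untouched
    out[:band, :] = 0.0         # top (zenith) band
    out[h - band:, :] = 0.0     # bottom (nadir) band
    return out
```

Whether zero, a hold-of-the-nearest-valid-row fill, or a hard crop looks best will depend on how the tour viewer interprets the depth values.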
w/o --nocrop
with --nocrop
Message when running this on Ubuntu 18.04