This repository has been archived by the owner on Oct 31, 2023. It is now read-only.
I'm working in a saturated-light environment, so the RGB images aren't of any help. My dataset is synthetic and relatively small (~500 scenes with 3D rotations).
So far, I've been investigating Mask R-CNN on depth images and VoteNet on point clouds. Both give interesting results and have their own strengths and weaknesses. The 2D detectors let us use transfer learning efficiently, but the depth map is only a single-channel (8-bit) image. VoteNet, on the other hand, keeps the exact position of each point and captures the geometry more precisely.
Would it be possible to train ImVoteNet on the depth maps instead of the RGB images? Do you think such a combination could work?
Thanks!
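For the image branch, one common workaround (a minimal sketch, not something ImVoteNet supports out of the box; the function name is hypothetical) is to replicate the single-channel depth map into three channels so an RGB-pretrained 2D detector can consume it unchanged:

```python
import numpy as np

def depth_to_three_channels(depth_u8):
    """Turn a single-channel 8-bit depth map into a 3-channel
    'pseudo-RGB' image so a detector pretrained on RGB inputs
    can consume it without modifying its first conv layer."""
    # Stretch to the full 0-255 range to recover some contrast
    # lost to the 8-bit quantization.
    d = depth_u8.astype(np.float32)
    lo, hi = d.min(), d.max()
    if hi > lo:
        d = (d - lo) / (hi - lo) * 255.0
    d = d.astype(np.uint8)
    # Stack the same map into three channels: (H, W) -> (H, W, 3).
    return np.stack([d, d, d], axis=-1)

# Example with a small synthetic depth map
depth = np.arange(16, dtype=np.uint8).reshape(4, 4)
pseudo_rgb = depth_to_three_channels(depth)
print(pseudo_rgb.shape)  # (4, 4, 3)
```

Richer encodings such as HHA (horizontal disparity, height above ground, angle with gravity) follow the same three-channel idea and tend to transfer better from RGB-pretrained weights, at the cost of extra preprocessing.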