
Circular convolutions and inferring back to 3D? #52

Open
JohanBergius opened this issue Apr 29, 2022 · 1 comment

Comments

@JohanBergius

Hi, Author. Could you please explain, or point me to where I can better understand, the approach of circular convolutions (ring CNN)? I would also like to better understand how you were able to reverse back to the original 3D space after removing the Z dimension!

@YangZhang4065
Collaborator

Hi Johan, sorry about the late reply.
For circular padding, please refer to my latest reply here: #46 (comment)
Reversing back to 3D space is actually quite easy. At the last layer of our neural network, we predict a 2D feature map of size HxWxZ for each scan/BEV image. H and W are the height and width of the BEV map. Z is the feature dimension, which equals the number of semantic segmentation classes times the number of cells you want per pillar (we used 32, but you can use whatever you want). You can then reshape the map to H x W x num_class x num_voxel_per_pillar and treat this reshaped feature map as a semantic segmentation prediction at the 3D voxel level.
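The reshape described above can be sketched as follows. This is a minimal NumPy illustration, not code from the repository; H, W, and num_class are placeholder values (only the 32 voxels per pillar comes from the thread):

```python
import numpy as np

# Assumed illustrative sizes; num_voxel_per_pillar = 32 as mentioned above.
H, W = 256, 256
num_class = 20
num_voxel_per_pillar = 32

# Last-layer output: a 2D feature map per BEV image, whose channel
# dimension Z = num_class * num_voxel_per_pillar.
pred_2d = np.random.randn(H, W, num_class * num_voxel_per_pillar)

# Reshape the channel dimension back into (class, voxel-within-pillar),
# giving per-voxel class scores in 3D.
pred_3d = pred_2d.reshape(H, W, num_class, num_voxel_per_pillar)

# Per-voxel semantic labels: argmax over the class axis.
labels_3d = pred_3d.argmax(axis=2)
print(pred_3d.shape)   # (256, 256, 20, 32)
print(labels_3d.shape)  # (256, 256, 32)
```

The key point is that no information is created or lost by the reshape; the network's last layer simply packs the vertical (pillar) axis and the class axis into a single channel dimension.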
