I am a beginner in point cloud completion. When I used this code to run inference on other airplane point clouds from ShapeNet, the results were poor, while the original airplane.pcd file in the demo directory worked very well. I checked details such as normalization and preprocessing and found no issues there.
Later, I noticed by accident that my point cloud was oriented differently from the original one. To verify that orientation was the problem, I rotated the original airplane into the same direction; as shown in Figure 1, the rotated plane is placed in the same window for visualization.
Why does merely rotating the point cloud by 90 degrees cause such a large drop in inference quality? How can I make sure my point cloud has the correct orientation at test time to get good results? Alternatively, could random rotation be added during training to improve robustness? (Sorry, I'm a beginner and I'm not sure whether that is feasible.)
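For what it's worth, one workaround is to rotate the input cloud back to the canonical heading before inference. A minimal NumPy sketch (the choice of Y as the up axis and the 90-degree angle are assumptions about my data, not something taken from this repo):

```python
import numpy as np

def rotate_y(points, degrees):
    """Rotate an (N, 3) point cloud about the Y (up) axis by `degrees`."""
    theta = np.radians(degrees)
    c, s = np.cos(theta), np.sin(theta)
    # Standard right-handed rotation matrix about the Y axis.
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return points @ R.T

# Example: undo a 90-degree offset so the cloud matches the demo pose.
cloud = np.random.rand(2048, 3).astype(np.float32)
aligned = rotate_y(cloud, -90.0)
```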
I ran into a similar situation. The network usually completes point clouds in their default pose very well, but can hardly handle rotated ones. I tried training a model with such RandomRotate transforms, but the final result was poor. I am not sure whether this is caused by the network architecture.
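In case it helps, this is roughly the kind of augmentation I mean: each training sample gets a random rotation about the up axis. A minimal sketch (NumPy only; assuming Y-up and rotation limited to that single axis, which may or may not match the original training pipeline):

```python
import numpy as np

def random_rotate_y(points, rng=None):
    """Apply a uniformly random rotation about the Y (up) axis to an
    (N, 3) point cloud. Intended as a training-time augmentation to
    reduce pose sensitivity; this is a sketch, not the repo's code."""
    rng = rng if rng is not None else np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return points @ R.T
```

The caveat, as above, is that the completion network may simply not be rotation-equivariant, so augmentation alone trades peak accuracy in the canonical pose for partial robustness elsewhere.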