When reading the code, I was confused by how the rays for point cloud casting are defined. I found that you set the origin of all rays at the ego origin. I also know that the merged nuPlan point cloud is indeed in the ego coordinate frame. Nevertheless, each point was originally captured from its own LiDAR's origin. So when we cast rays from the ego origin, doesn't that cause occlusion problems?
For example, some far, low objects are visible from the top LiDAR but would be occluded by nearer objects when viewed from the ego origin.
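The occlusion concern above can be checked with a toy geometry. This is a hypothetical example (the numbers and helper are mine, not from the codebase): a near object reaching z = 0.5 m at x = 5 m, a far low point at (20, 0, 0.1), a top LiDAR mounted at an assumed height of 1.6 m, and the ego origin at z = 0. Comparing the height of the line of sight where it crosses the obstacle shows why the far point is visible from the LiDAR but occluded from the ego origin.

```python
import numpy as np

# Toy geometry (assumed numbers, not from the codebase): a near object that
# reaches z = 0.5 m at x = 5 m, and a far low point at (20, 0, 0.1).
def sightline_height_at(origin, target, x):
    """z of the origin->target line of sight where it crosses the plane at x."""
    t = (x - origin[0]) / (target[0] - origin[0])
    return origin[2] + t * (target[2] - origin[2])

far_point = np.array([20.0, 0.0, 0.1])
obstacle_x, obstacle_top = 5.0, 0.5

lidar = np.array([0.0, 0.0, 1.6])  # assumed top-LiDAR mounting height
ego = np.array([0.0, 0.0, 0.0])    # ego origin, as used by the code

# From the LiDAR the sightline passes 1.225 m above ground at the obstacle,
# clearing its 0.5 m top; from the ego origin it passes at only 0.025 m and
# is blocked.
print(sightline_height_at(lidar, far_point, obstacle_x))  # 1.225
print(sightline_height_at(ego, far_point, obstacle_x))    # 0.025
```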
The problem becomes clear when I visualize the ground-truth point cloud against the predicted one. The points in yellow are predicted by the ViDAR pre-trained model.
We did not consider this in the CVPR 2024 submission; at that time we mainly focused on downstream performance.
This is actually a significant issue in the current point cloud rendering code, caused by the misaligned ego pose and LiDAR sensor pose. However, to stay consistent with our CVPR version, we decided not to modify this when publishing the code.
During training or testing on OpenScene, you can manually adjust the rendering origin to something like [0, 0, 1.6] m instead of [0, 0, 0] (as in the original code) to avoid this issue. But I am not sure how to handle this for private_test; perhaps some alignment of the origin is needed (since in private_test the rays also start from the origin)?
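The suggested workaround can be sketched as follows. This is a minimal, hypothetical illustration (the function name `rays_from_points` and the exact integration point are mine, not from the repo): build ray directions from a raised origin such as [0, 0, 1.6] instead of the ego origin [0, 0, 0], with 1.6 m being an assumed top-LiDAR height that should be adjusted to the actual sensor setup.

```python
import numpy as np

# Assumed raised ray origin, approximating the top-LiDAR mounting height,
# instead of the ego origin [0, 0, 0] used in the original code.
RAY_ORIGIN = np.array([0.0, 0.0, 1.6])

def rays_from_points(points, origin=RAY_ORIGIN):
    """Build unit ray directions and depths from `origin` to each (N, 3) point."""
    offsets = points - origin                          # origin -> point vectors
    depths = np.linalg.norm(offsets, axis=1, keepdims=True)
    dirs = offsets / np.clip(depths, 1e-6, None)       # normalize to unit length
    return origin, dirs, depths.squeeze(-1)

# Example: a ground-level point 10 m ahead of the ego vehicle.
pts = np.array([[10.0, 0.0, 0.0]])
origin, dirs, depths = rays_from_points(pts)
```

With the raised origin, rays to low, distant points slope downward past nearer obstacles instead of grazing the ground from z = 0.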