Thank you for your wonderful work. The paper reports training with batch_size = 16, but when I train on a single 2080Ti with batch_size = 2, the performance is better than what I get on four cards. Was batch_size = 16 used on a single card or across multiple cards? When I submit my result to the KITTI test server, the moderate-level AP is only 80.04. This has confused me for about two months, so could you suggest a solution? Thank you very much.
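For context, here is how I currently understand the batch size setup, as a minimal sketch. I am assuming the code trains with PyTorch DistributedDataParallel and that the batch_size in the config is per GPU; the base learning rate of 0.003 below is only a placeholder, not the value from the paper:

```python
# Sketch of my assumption: effective batch size = per-GPU batch size * number of GPUs,
# and the learning rate in the released config was presumably tuned for that
# effective size, not the per-GPU one.

per_gpu_batch_size = 2      # what fits on a single 2080Ti
num_gpus = 4                # world size when launching distributed training
effective_batch_size = per_gpu_batch_size * num_gpus   # 8, not the paper's 16

# Linear LR scaling rule when the paper's effective batch size cannot be matched.
paper_batch_size = 16
paper_lr = 0.003            # placeholder base LR, not taken from the paper
scaled_lr = paper_lr * effective_batch_size / paper_batch_size

print(f"effective batch size = {effective_batch_size}, suggested LR = {scaled_lr:.5f}")
```

Is this the right way to think about it, or does batch_size = 16 in the paper already refer to the total across all cards?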
Best regards!