Hello,
Thank you for sharing your valuable work! I'm currently trying to reproduce your results, but I've run into some problems during dataset preparation. I followed the instructions in DATA_PREP:
"For 189K training data + the rest of open loop validation data = 400K frames in TCP, please collect data with .xmls in leaderboard/data/routes_for_open_loop_training with suffix 00, 01, 02, val. During open-loop running, we train on towns 01, 03, 04, 06 and validate on towns 02, 05 as in LAV, Transfuser, TCP."
After collecting the data from towns 01, 03, 04, and 06 (suffix 00, 01, 02, val) and running generate_metadata.py, the total frame count is 300,341. I understand some routes might have failed or been blocked, but this doesn't align with the expected 189K frames. What may have gone wrong? (The sketch below shows roughly how I'm counting, in case the counting itself is the problem.)
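For reference, this is roughly how I cross-check the per-town frame counts, independent of generate_metadata.py. It is only a minimal sketch: the dataset root path, the assumption that each route folder name starts with the town name, and the measurements/*.json layout are guesses about my local setup, not something taken from the repo.

```python
import os
from collections import defaultdict
from glob import glob

# Hypothetical layout: <DATASET_ROOT>/<townXX_routeYY_...>/measurements/*.json
DATASET_ROOT = "path/to/collected_data"  # hypothetical path, adjust to your setup

frames_per_town = defaultdict(int)
for route_dir in sorted(os.listdir(DATASET_ROOT)):
    route_path = os.path.join(DATASET_ROOT, route_dir)
    if not os.path.isdir(route_path):
        continue
    # assumes the route folder name starts with the town name, e.g. "town01_..."
    town = route_dir.split("_")[0]
    frames_per_town[town] += len(glob(os.path.join(route_path, "measurements", "*.json")))

for town, count in sorted(frames_per_town.items()):
    print(f"{town}: {count} frames")
print("total:", sum(frames_per_town.values()))
```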
Additionally, I collected data for towns 01-06 (suffixes 00-02 + val), which should be 400K frames, but the total dataset size is 6 TB, which is still not close to the 8 TB mentioned for the 189K frames.
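And this is how I'm measuring the on-disk size per town, in case the 6 TB figure comes from a measurement mistake on my end (again just a sketch, with the same hypothetical path and folder-naming assumptions as above):

```python
from collections import defaultdict
from pathlib import Path

DATASET_ROOT = Path("path/to/collected_data")  # hypothetical path, same as above

bytes_per_town = defaultdict(int)
for route_dir in DATASET_ROOT.iterdir():
    if not route_dir.is_dir():
        continue
    town = route_dir.name.split("_")[0]  # assumes folder names start with the town
    bytes_per_town[town] += sum(f.stat().st_size for f in route_dir.rglob("*") if f.is_file())

for town, size in sorted(bytes_per_town.items()):
    print(f"{town}: {size / 1e12:.2f} TB")
print(f"total: {sum(bytes_per_town.values()) / 1e12:.2f} TB")
```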
Did I miss something during data collection, or might I have misunderstood the data requirements?
Any insights you can provide would be greatly appreciated!
Thanks for your time.