Hi~ author, I collected a CARLA dataset in my own format using the autopilot, trained my own NN model in an open-loop manner, and then tested it in CARLA. But I've noticed that it can't learn to stop in front of red lights.
In my CARLA camera sensor setup, I use six 640×320 images with a 90-degree FOV for all six views as model input. But I noticed that in the ST-P3 model, in addition to the four-view raw images, a separate front-view image feature is fed into the GRU refinement module.
So I wonder whether this design is helpful in CARLA closed-loop testing. Are there any ablation results? (It seems your paper only shows this ablation on the open-loop nuScenes dataset?)
If it is really helpful in closed-loop testing, maybe I can try this approach in my own model design (i.e., use a separate network to learn the front-view image feature).
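To make sure I understand the design before trying it, here is a minimal sketch of what I mean by "feeding a separate front-view feature into the GRU refinement" (plain Python, all names, shapes, and the residual-offset output head are my own assumptions, not the actual ST-P3 code):

```python
# Hypothetical sketch of GRU-based trajectory refinement conditioned on a
# separate front-view feature. Shapes and parameter names are illustrative
# assumptions, not the ST-P3 implementation.
import math
import random

random.seed(0)

HIDDEN = 8  # GRU hidden-state size (assumption)

def gru_cell(x, h, Wz, Wr, Wh):
    """One GRU step: x = input vector, h = previous hidden state."""
    def lin(W, v):  # matrix-vector product
        return [sum(w * a for w, a in zip(row, v)) for row in W]
    xh = x + h  # concatenated [input, hidden]
    z = [1 / (1 + math.exp(-v)) for v in lin(Wz, xh)]   # update gate
    r = [1 / (1 + math.exp(-v)) for v in lin(Wr, xh)]   # reset gate
    xrh = x + [ri * hi for ri, hi in zip(r, h)]
    h_tilde = [math.tanh(v) for v in lin(Wh, xrh)]      # candidate state
    return [(1 - zi) * hi + zi * hti
            for zi, hi, hti in zip(z, h, h_tilde)]

def refine_trajectory(coarse_traj, front_feature, params):
    """Refine each coarse waypoint with a GRU whose input is the waypoint
    concatenated with the (pooled) front-view feature vector."""
    Wz, Wr, Wh, Wo = params
    h = [0.0] * HIDDEN
    refined = []
    for (x, y) in coarse_traj:
        inp = [x, y] + front_feature  # inject the front-view feature here
        h = gru_cell(inp, h, Wz, Wr, Wh)
        # output head: residual offset added to the coarse waypoint
        dx = sum(w * v for w, v in zip(Wo[0], h))
        dy = sum(w * v for w, v in zip(Wo[1], h))
        refined.append((x + dx, y + dy))
    return refined

# Toy usage: a 4-dim pooled front-view feature and 3 coarse waypoints.
feat_dim = 4
in_dim = 2 + feat_dim  # (x, y) waypoint + front-view feature
def rand_mat(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)]
            for _ in range(rows)]
params = (rand_mat(HIDDEN, in_dim + HIDDEN),
          rand_mat(HIDDEN, in_dim + HIDDEN),
          rand_mat(HIDDEN, in_dim + HIDDEN),
          rand_mat(2, HIDDEN))
coarse = [(0.0, 1.0), (0.0, 2.0), (0.0, 3.0)]
front_feat = [0.1, -0.2, 0.3, 0.0]
refined = refine_trajectory(coarse, front_feat, params)
print(len(refined))  # 3 refined waypoints
```

The intuition I'm assuming: the BEV features may lose traffic-light information after projection, so giving the refinement GRU direct access to a pooled front-view feature lets it adjust the waypoints (e.g., shrink them to a stop) when a red light is visible.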
Thanks much in advance for your reply~~
@EcustBoy Thank you for your question. Yes, it is of great help in closed-loop testing. Sorry, we do not have a record of that ablation.
Thanks for the reply~ Dr. Chen, so it seems you have done some qualitative evaluation via video recordings, right? Could you share some qualitative conclusions?
For example, I would like to know whether the behavior of stopping at a red light mainly depends on the front-view feature refinement design. And if that module is removed, will the model lose the ability to stop at a red light?
Your conclusions would be very helpful for my experiments~ thanks much for any information you can provide.