Question about reproducing the results #17
Comments
Hi, I will test it again on my machine to double-check the numbers. Can you send me more details about how you run it in the meantime? The .hydra/config.yaml may help (a sketch for dumping it follows this comment). UPDATE: I reran the evaluation with the uploaded ckpts and got the following results in three eval runs:
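For sharing the run details requested above, here is a minimal sketch of dumping the saved Hydra config so it can be pasted into the issue. The run directory is an assumption (Hydra writes .hydra/config.yaml inside each run's output folder), so substitute your own path:

```python
from omegaconf import OmegaConf  # Hydra stores its configs as OmegaConf YAML

# Assumption: <run_dir> is the output directory of one evaluation run.
cfg = OmegaConf.load("<run_dir>/.hydra/config.yaml")

# Print the config as YAML so it can be shared for comparison.
print(OmegaConf.to_yaml(cfg))
```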
So sorry for the late reply, I had to work on some other things over the last two weeks. In carla_garage we have to set some os.environ variables (see the sketch after this comment), and now I think I can reproduce the result in carla_garage. But these os.environ variables are not needed in PlanT, and I still cannot get DS 82. I also found another problem: I retrained PlanT and used the 49th epoch, and got DS 77.
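As an illustration of the kind of os.environ settings mentioned above, a hedged sketch; the variable names below are common knobs for reducing GPU/CPU nondeterminism and are assumptions, not the exact variables carla_garage sets:

```python
import os

# Illustrative environment variables that affect numerical reproducibility.
# Check carla_garage's run scripts for the actual variables it exports
# before launching the leaderboard evaluation.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by deterministic cuBLAS kernels
os.environ["OMP_NUM_THREADS"] = "1"                # limit CPU threading nondeterminism
```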
Hi there, Can you share how you got the result of DS 82? I meet the same problem (DS 70, RC 81.8) using the setup the author released. Best
Hi,
Hi, I'm currently using an RTX 3080. The CUDA version is 11.1 and I followed the conda environment. I'm wondering if the environment variables for CARLA are fully listed in the current repo? It seems that there is a drop in RC, so is it possible that some of the variables that affect the route or waypoint selection differ from your original experiment? Thanks again for the quick response! Best,
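To check which CARLA-related environment variables are actually set, a small sketch; the names filtered for (CARLA_ROOT, LEADERBOARD_ROOT, SCENARIO_RUNNER_ROOT, PYTHONPATH) are ones commonly exported in CARLA leaderboard setups and are an assumption, not a list confirmed by this repo:

```python
import os

# Print environment variables that typically matter for a CARLA leaderboard run,
# to compare against whatever the repo's scripts export.
keys_of_interest = {"CARLA_ROOT", "LEADERBOARD_ROOT", "SCENARIO_RUNNER_ROOT", "PYTHONPATH"}
for key, value in sorted(os.environ.items()):
    if "CARLA" in key or key in keys_of_interest:
        print(f"{key}={value}")
```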
Here is the log of the reproduced result:
Hi, Could you try disabling TF32 mode?
If this doesn't help, do you have a chance to test this on e.g. a 2080 GPU? Best,
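A minimal sketch of what disabling TF32 looks like in PyTorch; on Ampere GPUs (RTX 30xx) TF32 can be enabled by default, which changes matmul/convolution numerics compared to older cards such as the 2080:

```python
import torch

# Force full-precision FP32 matmuls/convolutions instead of TF32 on Ampere GPUs.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
```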
Thanks for your reply! Yep, currently I'm using a 3090 GPU, and I will try to add this config. Another problem is that in the current CARLA testing (longest6), the ego vehicle sometimes gets stuck in jammed traffic and is marked as blocked (not moving after 900 steps). Another, less frequent situation is that after such a jam the ego vehicle remains still and gets blocked. Have you encountered similar situations? Much appreciated if you can share some tips on resolving this issue :)
Thank you for your excellent work. I have the same problem when reproducing the results. I tried to add the following configurations: torch.backends.cuda.matmul.allow_tf32 = False and torch.backends.cudnn.allow_tf32 = False, but the results are still bad.
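If disabling TF32 alone is not enough, here is a hedged sketch of further determinism settings one could try; these are generic PyTorch/NumPy reproducibility knobs, not steps confirmed by the authors to close the gap:

```python
import random
import numpy as np
import torch

# Generic reproducibility settings to try on top of the TF32 flags above.
torch.backends.cudnn.benchmark = False      # avoid nondeterministic kernel autotuning
torch.backends.cudnn.deterministic = True   # prefer deterministic cuDNN kernels
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
```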
Thanks for such excellent work. I want to build on PlanT, so I first want to reproduce the results reported in the paper (DS: 81.36, RC: 93.55, IS: 0.87).
I downloaded the pretrained model and ran the evaluation with "python leaderboard/scripts/run_evaluation.py user=$USER experiments=PlanTmedium3x eval=longest6", just as in the README.md.
However, I get a worse result (DS: 70.29, RC: 81.851, IS: 0.866). It seems that the lower Route Completion is what drags the result below the paper's numbers.
Could you give me some recommendations for getting the result reported in the paper? Thanks a lot!