I'm currently modifying my PPO code for the ManiSkill GPU sim (mostly based on CleanRL as well), which is adapted to do everything on the GPU. I'm trying to squeeze out as much performance as possible and am reviewing the torch compile PPO continuous code right now.
A few questions
I can probably remove all the .to(device) calls on tensordicts, right? e.g. LeanRL/leanrl/ppo_continuous_action_torchcompile.py, lines 206 to 209 (a416e61).
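Roughly what I have in mind (just a sketch, assuming the ManiSkill GPU sim already returns CUDA tensors, so the tensordict is built on-device and a later .to(device) is a no-op):

```python
import torch
from tensordict import TensorDict

device = torch.device("cuda")

# stand-ins for what the GPU sim already gives back on-device
obs = torch.randn(4096, 42, device=device)
reward = torch.zeros(4096, device=device)

td = TensorDict({"obs": obs, "reward": reward}, batch_size=[4096], device=device)
# td.to(device) would be a no-op here, so my plan is to drop those calls entirely
```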
And is the original non_blocking meant to ensure we don't eagerly move the data until we need it (e.g. at the next inference step in the rollout)?
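For context, my current mental model of non_blocking (a sketch, not taken from the LeanRL code): with a pinned-memory source, the copy is issued asynchronously so the host doesn't stall, rather than the copy being deferred until the tensor is used.

```python
import torch

device = torch.device("cuda")

cpu_batch = torch.randn(4096, 42, pin_memory=True)    # pinned so the H2D copy can be async
gpu_batch = cpu_batch.to(device, non_blocking=True)   # returns immediately; copy runs on a stream

# ... other CPU-side work can overlap with the transfer here ...

torch.cuda.synchronize()  # only now is the copy guaranteed to have finished
```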
How bad are .eval()/.train() calls, and why should they be avoided? I thought they were simple switches.
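To illustrate why I assumed they're cheap (a minimal sketch): as far as I know they just recursively flip the module.training flag, which only changes the behaviour of layers like dropout and batchnorm.

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.1))

net.eval()
print(net.training, net[1].training)   # False False -> dropout becomes a no-op
net.train()
print(net.training, net[1].training)   # True True   -> dropout is active again
```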
Are there any environment-side optimizations to make RL faster? I'm aware that some things can be made non-blocking; can the same be done for environment observations and rewards? Are there other tricks?
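One example of the kind of trick I mean (a hypothetical sketch, the buffer names are made up): copying per-step logging stats back to the CPU with non_blocking=True into pinned buffers, and syncing only once right before logging.

```python
import torch

device = torch.device("cuda")
episode_return = torch.randn(4096, device=device)          # stand-in for a GPU-sim stats buffer
success = torch.rand(4096, device=device) > 0.5

# pre-allocated pinned CPU buffers so the device-to-host copies can run asynchronously
ret_cpu = torch.empty(4096, dtype=episode_return.dtype, pin_memory=True)
suc_cpu = torch.empty(4096, dtype=torch.bool, pin_memory=True)

ret_cpu.copy_(episode_return, non_blocking=True)
suc_cpu.copy_(success, non_blocking=True)

# ... the next env.step() / inference can overlap with the copies here ...

torch.cuda.synchronize()                                    # sync once, right before logging
print(ret_cpu.mean().item(), suc_cpu.float().mean().item())
```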
Thanks! Looking forward to trying to set some training speed records with these improvements!