How many environments exist in Deepbots? #43
-
Hi guys! We found your package recently and tested it with your Deepworlds examples, and the results are amazing! Many thanks for developing such a useful interface; it really saves us a lot of effort. We are going to use DRL to control a robot manipulator with 6 joints, and we want to do the simulation in Webots first. Our question is: how many environments exist in Deepbots? For example, in MuJoCo you might have runner, half-cheetah, cheetah, shadow-hand, fetch, ur, etc. Creating these environments takes some time... If there are not many, do you have suggestions on how to quickly add environments to Deepbots? Waiting for your response. Thanks.
Replies: 1 comment
-
Hello @andyleeeeeeeeee. Thank you for your kind words and for using deepbots!
You can find all the currently available environments in the deepworlds repository, solved with existing RL agents. You can swap out the agents with your own to test them out. We plan to add more in the future, as well as environments without implemented solutions, so it's easier for you to plug in your agent and try it out. The currently available environments are CartPole, FindAndAvoid and PitEscape. You can find more information in the READMEs. Feel free to ask for any extra information.
To clarify a bit, you are not forced to add any environments to deepbots. You can implement your own environments, install deepbots and use it straight away! You can take a look at the in-depth tutorial we created for the CartPole example. Nevertheless, if you do implement a new environment (meaning mainly the Webots world file and the implemented abstract deepbots methods) and want to share it, we will be happy to add it to the deepworlds repository as an example for future users.

As far as I can tell, your course of action is to first use Webots to create the world for training and testing your robot manipulator. Then you can use deepbots to "wrap" the simulation as a gym environment and use your RL agent of choice to train, test, etc. What deepbots offers is a way to easily connect your RL agents with the Webots simulation.

I hope my answer proves helpful. Feel free to ask any question and we will be happy to help!
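To make the "wrap as a gym environment" idea concrete, here is a minimal sketch of the agent loop you would end up writing. Note the hedging: `DummyManipulatorEnv` is a hypothetical stand-in I made up so the snippet runs outside Webots; a real deepbots environment is a controller subclass that implements the abstract methods (observations, reward, done, applying actions) and is driven by the Webots simulation instead.

```python
# Illustrative sketch only: DummyManipulatorEnv is a hypothetical stand-in
# for a deepbots-wrapped Webots simulation. It exposes the usual gym-style
# reset()/step() interface; in a real project those calls would drive the
# Webots world through deepbots rather than generate random numbers.
import random


class DummyManipulatorEnv:
    """Stand-in for a 6-joint manipulator environment (not real deepbots code)."""

    def __init__(self, num_joints=6, max_steps=50):
        self.num_joints = num_joints
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        # In a real setup, reset() restarts the simulation and returns
        # the default observation.
        self.steps = 0
        return [0.0] * self.num_joints

    def step(self, action):
        # In a real setup, step() applies the action, advances the
        # simulation, and gathers observation / reward / done / info.
        self.steps += 1
        observation = [random.uniform(-1.0, 1.0) for _ in range(self.num_joints)]
        reward = -sum(abs(a) for a in action)  # toy reward: penalize effort
        done = self.steps >= self.max_steps
        return observation, reward, done, {}


def run_episode(env, policy):
    """Generic agent loop: works with any gym-style environment."""
    observation = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = policy(observation)
        observation, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward


if __name__ == "__main__":
    env = DummyManipulatorEnv()
    random_policy = lambda obs: [random.uniform(-1.0, 1.0) for _ in obs]
    print(run_episode(env, random_policy))
```

The key point is that `run_episode` (and any RL library built around the reset/step interface) never needs to know it is talking to Webots; that decoupling is exactly what deepbots provides.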