dynamics-learn

Working directory for my work on model-based reinforcement learning (MBRL) for novel robots. It is best suited to robots with a high cost per experiment and dynamics that are difficult to model. Contact: [email protected]

First paper website: https://sites.google.com/berkeley.edu/mbrl-quadrotor/

There is ongoing work using this library, such as attempting to control the Ionocraft with model-based RL: https://sites.google.com/berkeley.edu/mbrl-ionocraft/

Note that I have been developing very actively in this repo; please reach out if you have any questions about the accuracy of this README.

This repository works towards implementing, on real robots, model-based approaches that have so far mostly been demonstrated in simulation. For the current state of the art in simulation, see this work from Prof. Sergey Levine's group: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models.

Current and future work targets controlled flight of the Ionocraft, with a recent publication in Robotics and Automation Letters, and later transfer learning of dynamics models on the Crazyflie 2.0 platform.

Some potentially notable implementations include:

  • probabilistic neural network dynamics model in PyTorch
  • Gaussian (negative log-likelihood) loss function for said probabilistic network (see the sketch after this list)
  • random shooting MPC implementation with a customizable cost / reward function (see the cousin repo: https://github.com/natolambert/ros-crazyflie-mbrl; a second sketch below outlines the idea)
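The Gaussian loss above is the standard negative log-likelihood of the observed next-state (delta) under the network's predicted mean and variance. A minimal sketch of the idea in PyTorch (not the repo's exact implementation; class and variable names are illustrative):

```python
import torch
import torch.nn as nn

class ProbabilisticNN(nn.Module):
    """Sketch of a probabilistic dynamics model: predicts the mean and
    log-variance of the next-state delta given (state, action)."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, state_dim)
        self.logvar_head = nn.Linear(hidden, state_dim)

    def forward(self, state, action):
        h = self.net(torch.cat([state, action], dim=-1))
        mean = self.mean_head(h)
        # Clamp the log-variance for numerical stability.
        logvar = torch.clamp(self.logvar_head(h), -10.0, 2.0)
        return mean, logvar

def gaussian_nll(mean, logvar, target):
    """Batch-mean negative log-likelihood of `target` under N(mean, exp(logvar))."""
    inv_var = torch.exp(-logvar)
    return 0.5 * ((target - mean) ** 2 * inv_var + logvar).sum(dim=-1).mean()
```

Training then minimizes, for example, `gaussian_nll(*model(states, actions), next_states - states)` with a standard optimizer.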

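Random-shooting MPC, in outline: sample many random action sequences, roll each out through the learned dynamics model, score them with the cost function, and execute only the first action of the best sequence. A hedged sketch, assuming a batched `model(states, actions)` that returns predicted next states and a per-sample `cost_fn` (names are placeholders, not the repo's API):

```python
import torch

def random_shooting_mpc(model, state, cost_fn, horizon=10, n_samples=1000, action_dim=4):
    """Choose an action by scoring random action sequences under a learned model."""
    # Candidate action sequences, uniform in [-1, 1]: (n_samples, horizon, action_dim).
    actions = torch.rand(n_samples, horizon, action_dim) * 2 - 1
    states = state.expand(n_samples, -1).clone()      # replicate the current state
    total_cost = torch.zeros(n_samples)
    for t in range(horizon):
        states = model(states, actions[:, t])         # batched one-step prediction
        total_cost += cost_fn(states, actions[:, t])  # accumulate per-sample cost
    best = torch.argmin(total_cost)
    return actions[best, 0]  # receding horizon: execute only the first action
```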
Usage is generally of the following form, with Hydra enabling more options:

$ python learn/trainer.py robot=iono
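Hydra also allows overriding or sweeping additional configuration values from the command line, for example with its standard --multirun flag (the seed key below is illustrative and may not match the repo's config):

$ python learn/trainer.py --multirun robot=iono seed=0,1,2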

Main Scripts:

  • learn/trainer.py: trains dynamics models (P, PE, D, DE, i.e. probabilistic, probabilistic ensemble, deterministic, deterministic ensemble) on experimental data. The training process uses Hydra to allow easy configuration of which states are used and how the predictions are formatted.
  • learn/simulate_mpc.py: a script that runs MBRL with an MPC in a simulated environment.
  • learn/bo.py: generates PID parameters using a learned dynamics model as a simulation environment (a minimal sketch of this idea follows the list); this will eventually extend beyond PID control. See the controllers directory, learn/control. I am working to integrate opto.
  • learn/plot.py: a script for viewing different types of predictions; under improvement.
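The idea behind learn/bo.py is to treat the learned dynamics model as a cheap simulator for scoring candidate controller gains. A minimal sketch of that loop, using random search in place of Bayesian optimization (function and parameter names here are illustrative, not the repo's API; `model(state, action)` is assumed to return the predicted next state):

```python
import numpy as np

def evaluate_pid(model, gains, x0, target, horizon=200, dt=0.01):
    """Roll a PID controller out through a learned dynamics model and
    return the accumulated squared tracking error."""
    kp, ki, kd = gains
    state, integral, prev_err = x0.copy(), 0.0, 0.0
    cost = 0.0
    for _ in range(horizon):
        err = target - state[0]                 # track the first state dimension
        integral += err * dt
        deriv = (err - prev_err) / dt
        action = np.array([kp * err + ki * integral + kd * deriv])
        state = model(state, action)            # one step of the learned model
        cost += err ** 2
        prev_err = err
    return cost

def tune_pid(model, x0, target, n_trials=500, seed=0):
    """Random search over PID gains; Bayesian optimization (e.g. via opto)
    would replace the sampling step with an informed acquisition."""
    rng = np.random.default_rng(seed)
    best_gains, best_cost = None, np.inf
    for _ in range(n_trials):
        gains = rng.uniform([0.0, 0.0, 0.0], [10.0, 1.0, 1.0])  # kp, ki, kd ranges (illustrative)
        cost = evaluate_pid(model, gains, x0, target)
        if cost < best_cost:
            best_gains, best_cost = gains, cost
    return best_gains
```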

In Development:

Related Code for Experiments:

CF Firmware: https://github.com/natolambert/crazyflie-firmware-pwm-control

ROS code: https://github.com/natolambert/ros-crazyflie-mbrl