This project was developed by Simon Lund, Sophia Sigethy, Georg Staber, and Malte Wilhelm for the Applied Reinforcement Learning SS 21 course at LMU.
As part of the course, we created an extensive report as well as a final presentation of the project.
The RL agent swings up using either side. (`cartpole_75k_cos.mp4`)

The RL agent avoids the noisy section on the left and swings up on the right side. (`cartpole_75k_cos_uncert.mp4`)
Start a development environment in your browser by clicking the button above. This gets you going quickly, but does not include graphical output from the gym environment.
git clone https://github.com/github-throwaway/ARL-Model-RL-Unsicherheit.git
cd ARL-Model-RL-Unsicherheit/
pip install -r requirements.txt # or python setup.py install
This runs a preconfigured system with a trained model and agent, using the default configuration.
cd src/
python main.py
For the sake of usability, we implemented an argument parser. By passing predefined arguments to the Python program call, it is possible to start different routines and change hyperparameters used by the algorithms. This lets the user run multiple experiments with different values without modifying the code, which is especially helpful when fine-tuning hyperparameters for reinforcement learning algorithms such as PPO. To get an overview of all available arguments and how they are used, call `python main.py --help`. A minimal sketch of such a parser is shown below.
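The actual arguments are defined in `main.py`; the following is only an illustrative sketch of how such an argparse-based interface might be structured. The flag names (`--algorithm`, `--epochs`, `--reward`) are hypothetical and may differ from the ones the project actually exposes, so always check `python main.py --help`.

```python
# Illustrative sketch only; the real flags are defined in the project's main.py.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Train or evaluate the cartpole agent")
    # Hypothetical flag names for illustration purposes.
    parser.add_argument("--algorithm", default="ppo", choices=["ppo"],
                        help="RL algorithm to use")
    parser.add_argument("--epochs", type=int, default=100,
                        help="Number of epochs for training the dynamics model")
    parser.add_argument("--reward", default="cos",
                        help="Reward function, e.g. centered, right, boundaries, best, cos, xpos_theta_uncert")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```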
The project was evaluated using the following parameters.
Parameter | Value |
---|---|
noisy sector | 0 - π (left half of unit circle) |
noise offset | 0.3 |
observation space | continuous, 5-dimensional (x pos, x dot, theta dot, theta sin, theta cos) |
action space | discrete, 10 actions |
NN epochs | 100 |
time series length | 4 |
reward function | [centered, right, boundaries, best, cos, xpos_theta_uncert] |
RL algorithms | PPO |
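To make the "noisy sector" and "noise offset" parameters concrete, the sketch below shows one way the 5-dimensional observation could be built with noise added whenever the pole angle lies in the left half of the unit circle. This is only an assumption about how the noise is applied; the function and variable names are hypothetical and do not come from the project code.

```python
# Hypothetical sketch of the noisy observation, assuming noise is added to theta
# when the pole is in the noisy sector [0, pi] (left half of the unit circle).
import math
import random

NOISY_SECTOR = (0.0, math.pi)  # noisy sector from the table above
NOISE_OFFSET = 0.3             # noise offset from the table above


def observe(x_pos, x_dot, theta, theta_dot):
    """Build the 5-dimensional observation (x pos, x dot, theta dot, sin, cos)."""
    if NOISY_SECTOR[0] <= theta % (2 * math.pi) <= NOISY_SECTOR[1]:
        # Perturb the angle only inside the noisy sector.
        theta = theta + random.uniform(-NOISE_OFFSET, NOISE_OFFSET)
    return [x_pos, x_dot, theta_dot, math.sin(theta), math.cos(theta)]
```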