Visit the RNN_Policy branch for the RNN Policy implementation instead of the CNN Policy.
An implementation for SuperMarioBros-1-1-v0 has been added. It is configured so that the agent receives no reward until it reaches the flag! Visit the mario branch for the code.
Implementation of Exploration by Random Network Distillation on the Montezuma's Revenge Atari game. The algorithm generates intrinsic rewards based on the novelty of the states the agent encounters and uses those rewards to reduce the sparsity of the game's reward signal. The agent itself is trained with Proximal Policy Optimization, which combines extrinsic and intrinsic rewards easily and shows comparatively low variance during training.
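As a rough sketch of the core idea (the network sizes and names below are illustrative, not the ones used in this repository), the intrinsic reward is the prediction error between a frozen, randomly initialized target network and a predictor trained to imitate it:

```python
import torch
from torch import nn

# Minimal sketch of the RND intrinsic-reward idea (not the exact
# architectures used in this repo): a frozen, randomly initialized
# target network and a predictor trained to imitate it. Novel states
# produce large prediction errors, which serve as intrinsic rewards.

def make_net(out_dim=512):
    return nn.Sequential(nn.Flatten(),
                         nn.Linear(84 * 84, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

target = make_net()
predictor = make_net()
for p in target.parameters():          # the target network is never trained
    p.requires_grad = False

optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def intrinsic_reward(next_obs):
    """MSE between target and predictor features = novelty signal."""
    with torch.no_grad():
        target_feat = target(next_obs)
    pred_feat = predictor(next_obs)
    return (pred_feat - target_feat).pow(2).mean(dim=-1)

# Example update on a dummy batch of normalized observations:
obs = torch.randn(8, 1, 84, 84)
r_int = intrinsic_reward(obs)          # per-state intrinsic rewards
loss = r_int.mean()                    # the same error is the predictor loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

PPO then trains the policy on a combination of the extrinsic rewards and these intrinsic rewards.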
Demo
RNN Policy
CNN Policy
Super Mario Bros
Results
RNN Policy
CNN Policy
Important findings to mention
As mentioned in the paper, one of the obstacles that seriously hurts the agent's performance is "dancing with skulls". During test time, and also from watching the running intrinsic reward during training, it became clear that much of the time the agent is extremely eager to play with skulls, spiders, laser beams, etc., since those behaviors produce considerable intrinsic reward.
The kernel_size of this part of the original implementation is wrong; it should be 3 (the same as in the Nature DQN paper) but it is 4 (see the first sketch after this list).
The usage of RewardForwardFilter in the original implementation is definitely wrong, as pointed out here and solved here (see the second sketch after this list).
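For reference on the first point, a Nature-DQN-style encoder (illustrative layer sizes, not necessarily the exact ones used in this repo) uses kernel sizes of 8, 4, and 3:

```python
from torch import nn

# Illustrative Nature-DQN-style encoder for 4 stacked 84x84 frames;
# the kernel sizes are 8, 4, and 3. In the original RND code the last
# convolution mistakenly uses kernel_size=4 instead of 3.
encoder = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),  # 3, not 4
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
)
```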
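For the second point, here is a sketch of how RewardForwardFilter is meant to be applied, assuming intrinsic rewards are collected as a (T, n_workers) array during each rollout. The class itself mirrors the original one; the point of the fix, as I understand it, is that the filter has to be advanced over the time axis of the rollout:

```python
import numpy as np

# Sketch of the intrinsic-reward normalization. RewardForwardFilter keeps
# a running discounted sum of rewards; the standard deviation of these
# running returns is used to scale the intrinsic rewards (they are not
# mean-shifted).
class RewardForwardFilter:
    def __init__(self, gamma):
        self.rewems = None
        self.gamma = gamma

    def update(self, rews):
        if self.rewems is None:
            self.rewems = rews
        else:
            self.rewems = self.rewems * self.gamma + rews
        return self.rewems


reward_filter = RewardForwardFilter(gamma=0.99)

# Dummy rollout of intrinsic rewards: T = 128 steps, 8 parallel workers.
int_rewards = np.abs(np.random.randn(128, 8))

# Advance the filter over the *time* axis, one step at a time for all
# workers, then normalize by the standard deviation of the returns.
returns = np.stack([reward_filter.update(int_rewards[t])
                    for t in range(len(int_rewards))])
int_rewards = int_rewards / (returns.std() + 1e-8)
```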
Table of hyper-parameters
With a max-and-skip of 4 frames, the maximum number of agent steps per episode should be 4500, so that 4500 * 4 = 18000 emulator frames, as mentioned in the paper.
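As a sketch of how that limit fits together (the SkipEnv below is a minimal stand-in, not necessarily the wrapper used in this repo):

```python
import gym
from gym.wrappers import TimeLimit

# Minimal stand-in for a max-and-skip wrapper: each agent step repeats
# the chosen action for 4 emulator frames.
class SkipEnv(gym.Wrapper):
    def __init__(self, env, skip=4):
        super().__init__(env)
        self.skip = skip

    def step(self, action):
        total_reward, done, info = 0.0, False, {}
        for _ in range(self.skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info

# 4500 agent steps * 4 skipped frames = 18000 emulator frames per episode.
env = gym.make("MontezumaRevengeNoFrameskip-v4")
env = TimeLimit(SkipEnv(env.unwrapped, skip=4), max_episode_steps=4500)
```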
The Brain directory includes the neural network structures and the agent's decision-making core.
Common includes minor utilities that are shared by most RL code bases and handle auxiliary tasks such as logging and wrapping Atari environments.
main.py is the core module of the code; it manages all other parts and makes the agent interact with the environment.
Models includes a pre-trained weight file that you can use to play or to continue training from; every new weight file is also saved in this directory.
Dependencies
gym == 0.17.3
matplotlib == 3.3.2
numpy == 1.19.2
opencv_contrib_python == 4.4.0.44
torch == 1.6.0
tqdm == 4.50.0
Installation
pip3 install -r requirements.txt
Usage
How to run
usage: main.py [-h] [--n_workers N_WORKERS] [--interval INTERVAL] [--do_test]
[--render] [--train_from_scratch]
Variable parameters based on the configuration of the machine or user's choice

optional arguments:
  -h, --help            show this help message and exit
  --n_workers N_WORKERS
                        Number of parallel environments.
  --interval INTERVAL   The interval specifies how often different parameters
                        should be saved and printed, counted by iterations.
  --do_test             The flag determines whether to train the agent or
                        play with it.
  --render              The flag determines whether to render each agent or
                        not.
  --train_from_scratch  The flag determines whether to train from scratch or
                        continue previous tries.
In order to train the agent with default arguments, execute the following command (you may change the number of workers and the interval as you wish):
python3 main.py --n_workers=128 --interval=100
If you want to keep training from a previous run (deactivating training from scratch), execute the following:
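Presumably this means passing the --train_from_scratch flag listed in the help above, which, per its description and the note here, toggles off training from scratch; something like:
python3 main.py --n_workers=128 --interval=100 --train_from_scratch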
If you want the agent to play, execute the following:
python3 main.py --do_test
Hardware requirements
The whole training procedure with 32 workers can be done on Google Colab and takes about 2 days, so a machine with a similar configuration would be sufficient. However, if you need a more powerful free online GPU provider, or you want to increase the number of environments to 128 and above, take a look at paperspace.com.