Pong AIs trained with various reinforcement learning methods compete against each other in a multi-agent framework that supports RL Agent vs RL Agent, Human vs RL Agent, and Human vs Hardcoded AI matchups
RL Algorithms implemented: Advantage Actor-Critic (A2C), Vanilla Policy Gradient (VPG)
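For context, here is a minimal sketch of a VPG update (PyTorch; the function name, episode-buffer format, and return normalization are illustrative assumptions, not this repo's exact code):

```python
import torch

def vpg_update(optimizer, log_probs, rewards, gamma=0.99):
    """One Vanilla Policy Gradient update from a finished episode.

    log_probs: list of log pi(a_t | s_t) tensors collected during play
    rewards:   list of scalar rewards, one per step
    """
    # Discounted returns-to-go: G_t = r_t + gamma * G_{t+1}
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    returns = torch.tensor(returns)
    # Normalizing returns reduces gradient variance
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # REINFORCE loss: maximize E[log pi(a|s) * G] by minimizing its negative
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```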
Model Types (Models/):
- Conv_Model: Convolutional neural network
- Gen_FC: Simple feedforward neural network (see the sketch after this list)
- DummyModel: Placeholder for players without a brain
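As a rough illustration of Gen_FC (layer sizes and the three-action output are assumptions, not the repo's actual architecture):

```python
import torch.nn as nn

class GenFC(nn.Module):
    """Simple feedforward policy: flattened observation -> action logits."""
    def __init__(self, obs_dim, n_actions=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),  # e.g. up / stay / down
        )

    def forward(self, x):
        return self.net(x)
```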
Player Types (Players/):
- HardcodedOpponent: Pre-programmed player whose paddle simply follows the ball (see the sketch after this list)
- VPG_Player: Player that learns to play using the Vanilla Policy Gradient reinforcement learning algorithm
- ActorCritic_Player: Player that learns to play using the Actor-Critic reinforcement learning algorithm
- ES_Player: Player that learns to play using an Evolutionary Strategies algorithm
- HumanPlayer: Player that moves the paddle according to keyboard input, letting you play against a trained policy
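The ball-following logic of HardcodedOpponent can be captured in a few lines; this sketch assumes simple paddle_y/ball_y coordinates and a three-action scheme, which may differ from the repo's actual interface:

```python
def hardcoded_action(paddle_y, ball_y, deadzone=2):
    """Move the paddle toward the ball's vertical position.

    Returns -1 (up), 0 (stay), or +1 (down); the deadzone prevents
    jittering when the paddle is already roughly aligned with the ball.
    """
    if ball_y < paddle_y - deadzone:
        return -1
    if ball_y > paddle_y + deadzone:
        return 1
    return 0
```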
Note that in train.py, any player can easily be swapped out for another
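For example, changing a matchup is roughly a one-line edit (the import paths and constructor signatures here are hypothetical; see Players/ for the real ones):

```python
# Hypothetical import path and constructor signatures; adapt to the repo
from Players import HardcodedOpponent, VPG_Player, ActorCritic_Player

# Train a VPG agent against the hardcoded opponent ...
player_1, player_2 = VPG_Player(), HardcodedOpponent()

# ... or swap in a second learner for the VPG vs ActorCritic matchup
player_1, player_2 = VPG_Player(), ActorCritic_Player()
```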
Running VPG (Red) vs ActorCritic (Blue) after training for 100 epochs with rewards for scoring and hitting the ball:
Interestingly, both agents learn to cooperate, rallying the ball back and forth to farm the reward for hitting it
TensorBoard reward curves, VPG (Green) vs ActorCritic (Gray):
When the reward for hitting the ball is removed, the agents learn to score against each other.
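A hedged sketch of such a two-component reward makes the toggle explicit (the names and magnitudes are illustrative, not the repo's constants):

```python
def step_reward(scored, was_scored_on, hit_ball, hit_bonus=0.1):
    """Per-step reward: +/-1 for scoring, plus an optional bonus for
    touching the ball. Setting hit_bonus=0 recovers the zero-sum game
    described below."""
    reward = 0.0
    if scored:
        reward += 1.0
    if was_scored_on:
        reward -= 1.0
    if hit_ball:
        reward += hit_bonus
    return reward
```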
150 epochs after removing the reward for touching the ball:
The agents now play a zero-sum game, as seen in the rewards graph | VPG (Gray), ActorCritic (Orange):
To train, run train.py