A generalized framework for generative and probabilistic modelling for training reinforcement learning agents in TensorFlow.
Many pricing and decision-making problems at the core of Grab's ride-hailing and deliveries business can be formulated as reinforcement learning problems, involving interactions among millions of passengers, drivers and merchants across more than 65 cities in Southeast Asia.
```python
from data.synthetic import get_normal_data, plot_data
from model.gmm import GMM

# Get 4 clusters of 1000 normally distributed synthetic data points
X, y = get_normal_data(1000, plot=True)

# Fit a Gaussian Mixture Density Network
gmm = GMM(x_features=2,
          y_features=1,
          n_components=32,
          n_hidden=32)
gmm.fit(X, y, epochs=20000)

# Predict y given X
y_hat = gmm.predict(X)
plot_data(X, y_hat)
```
Feature models are used in reinforcement learning for generating features that represent the state during agent-environment interactions.
The Gaussian Mixture Density Network is a neural network that predicts the parameters (mixture weights, means and variances) defining a Gaussian mixture model conditioned on the input features.
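Below is a minimal sketch of this idea using TensorFlow Probability, not the repository's own `GMM` class; the `make_mdn` helper, layer sizes and the softplus floor on the scales are illustrative assumptions.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def make_mdn(x_features=2, n_components=32, n_hidden=32):
    # The network emits 3 * n_components raw values, split into
    # mixture logits, component means and component scales.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(n_hidden, activation="relu",
                              input_shape=(x_features,)),
        tf.keras.layers.Dense(3 * n_components),
        tfp.layers.DistributionLambda(lambda t: tfd.MixtureSameFamily(
            mixture_distribution=tfd.Categorical(
                logits=t[..., :n_components]),
            components_distribution=tfd.Normal(
                loc=t[..., n_components:2 * n_components],
                scale=tf.nn.softplus(t[..., 2 * n_components:]) + 1e-6))),
    ])

mdn = make_mdn()
# Train by maximising the likelihood of y under the predicted mixture.
mdn.compile(optimizer="adam",
            loss=lambda y, dist: -dist.log_prob(tf.squeeze(y, axis=-1)))
```

Because the output is a full mixture rather than a single mean, the fitted model can represent multi-modal conditional distributions of y given X.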
The Conditional Generative Adversarial Network consists of a generator network that produces candidate features and a discriminator network that evaluates them; both are conditioned on parent features and trained against each other in an adversarial game.
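A compact sketch of the conditioning mechanism, assuming simple dense architectures (the `make_generator` and `make_discriminator` helpers are hypothetical, not the repository's API): both networks receive the parent features `c` by concatenation.

```python
import tensorflow as tf

def make_generator(noise_dim, cond_dim, out_dim, n_hidden=32):
    # Generator G(z, c): noise concatenated with conditioning features.
    z = tf.keras.Input(shape=(noise_dim,))
    c = tf.keras.Input(shape=(cond_dim,))
    h = tf.keras.layers.Concatenate()([z, c])
    h = tf.keras.layers.Dense(n_hidden, activation="relu")(h)
    out = tf.keras.layers.Dense(out_dim)(h)
    return tf.keras.Model([z, c], out)

def make_discriminator(x_dim, cond_dim, n_hidden=32):
    # Discriminator D(x, c): scores a sample as real or generated,
    # given the same conditioning features.
    x = tf.keras.Input(shape=(x_dim,))
    c = tf.keras.Input(shape=(cond_dim,))
    h = tf.keras.layers.Concatenate()([x, c])
    h = tf.keras.layers.Dense(n_hidden, activation="relu")(h)
    out = tf.keras.layers.Dense(1)(h)  # logit for binary cross-entropy
    return tf.keras.Model([x, c], out)
```

The generator is trained to make `D(G(z, c), c)` score as real, while the discriminator learns to separate real from generated samples under the same conditioning.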
Response models are used in reinforcement learning to model rewards as distributions with uncertainty estimates rather than as point estimates, enabling stable learning of the agent when responses are spiky.
The Bayesian Neural Network is a neural network whose weights are each assigned a probability distribution to estimate uncertainty; it is trained using variational inference.
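As a hedged sketch using TensorFlow Probability's Flipout layers (an assumed stand-in for the repository's implementation, with illustrative sizes), each weight gets a learned posterior and the KL terms enter the training objective through the layer losses:

```python
import tensorflow as tf
import tensorflow_probability as tfp

def make_bnn(x_features=2, n_hidden=32, n_train=1000):
    # Scale each layer's KL divergence by the training-set size so the
    # total loss approximates the variational free energy (ELBO).
    kl_fn = lambda q, p, _: tfp.distributions.kl_divergence(q, p) / n_train
    return tf.keras.Sequential([
        tfp.layers.DenseFlipout(n_hidden, activation="relu",
                                kernel_divergence_fn=kl_fn,
                                input_shape=(x_features,)),
        tfp.layers.DenseFlipout(1, kernel_divergence_fn=kl_fn),
    ])

bnn = make_bnn()
# MSE stands in for a fixed-variance Gaussian likelihood here; the KL
# terms are added automatically via the layers' losses.
bnn.compile(optimizer="adam", loss="mse")
```

Each forward pass samples fresh weights from the posterior, so repeated passes over the same inputs yield a predictive distribution rather than a single point.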
Monte Carlo Dropout keeps dropout active at inference time; averaging repeated stochastic forward passes has been shown to approximate Bayesian inference.
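A minimal sketch of the trick: run the model with `training=True` so dropout stays on at prediction time, then aggregate the passes (`mc_predict` is an illustrative helper, not the library's API).

```python
import numpy as np
import tensorflow as tf

def make_mc_dropout(x_features=2, n_hidden=32, rate=0.1):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(n_hidden, activation="relu",
                              input_shape=(x_features,)),
        tf.keras.layers.Dropout(rate),
        tf.keras.layers.Dense(1),
    ])

def mc_predict(model, X, n_samples=100):
    # Keep dropout active at inference (training=True) and average the
    # stochastic forward passes; their spread estimates uncertainty.
    preds = np.stack([model(X, training=True).numpy()
                      for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)
```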
The Deep Ensemble is an ensemble of independently, randomly initialised neural networks that often matches or outperforms Bayesian neural networks in practice.
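A sketch of the recipe under the simplest assumptions (dense members trained with mean squared error; the helper names are hypothetical):

```python
import numpy as np
import tensorflow as tf

def make_member(x_features=2, n_hidden=32):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(n_hidden, activation="relu",
                              input_shape=(x_features,)),
        tf.keras.layers.Dense(1),
    ])

def fit_ensemble(X, y, n_members=5, epochs=100):
    # Each member starts from a different random initialisation; the
    # disagreement across members is the uncertainty estimate.
    members = []
    for _ in range(n_members):
        m = make_member(X.shape[1])
        m.compile(optimizer="adam", loss="mse")
        m.fit(X, y, epochs=epochs, verbose=0)
        members.append(m)
    return members

def ensemble_predict(members, X):
    preds = np.stack([m(X).numpy() for m in members])
    return preds.mean(axis=0), preds.std(axis=0)
```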
The performance metrics are the Kullback-Leibler divergence and the Jensen-Shannon divergence, both estimated by binning the data into histograms.
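A possible implementation of that binning approach using NumPy and SciPy (the `kl_js_from_samples` helper, the bin count and the smoothing constant are assumptions):

```python
import numpy as np
from scipy.stats import entropy

def kl_js_from_samples(p_samples, q_samples, bins=50):
    # Bin both sample sets on a common support, then compare the
    # normalised histograms.
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    eps = 1e-10  # avoid zero-probability bins
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    m = 0.5 * (p + q)
    kl = entropy(p, q)                             # KL(p || q)
    js = 0.5 * entropy(p, m) + 0.5 * entropy(q, m)  # JS(p, q)
    return kl, js
```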
The visualisation tools implemented include the probability density surface plot (left), which visualises the probability density at each coordinate, and the grid violin relative density plot (right), which uses histograms to compare the densities of the actual data against data generated by the fitted model.
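As a rough illustration of the violin-style comparison with Matplotlib (this is not the repository's plotting code; the helper is hypothetical):

```python
import matplotlib.pyplot as plt

def violin_relative_density(actual, generated):
    # Side-by-side violins comparing the actual and generated
    # marginal densities of y.
    fig, ax = plt.subplots()
    ax.violinplot([actual, generated], showmeans=True)
    ax.set_xticks([1, 2])
    ax.set_xticklabels(["actual", "generated"])
    ax.set_ylabel("y")
    plt.show()
```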
Hyperparameter optimisation is implemented with Bayesian optimisation in the Ax framework. A smooth surrogate model of outcomes is built with Gaussian processes from noisy observations of previous rounds of parameterisations and used to predict performance at unobserved parameterisations, tuning parameters in fewer iterations than grid search or global optimisation techniques.
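A minimal sketch of such a loop with Ax's `optimize` entry point, assuming a hypothetical `train_and_score` objective that fits a model with the proposed hyperparameters and returns a divergence to minimise:

```python
from ax import optimize

def evaluate(params):
    # train_and_score is an assumed helper, not part of the repository's
    # documented API: it fits a GMM and returns e.g. a KL divergence.
    return train_and_score(n_components=params["n_components"],
                           n_hidden=params["n_hidden"])

best_parameters, best_values, experiment, model = optimize(
    parameters=[
        {"name": "n_components", "type": "range", "bounds": [1, 64]},
        {"name": "n_hidden", "type": "range", "bounds": [8, 128]},
    ],
    evaluation_function=evaluate,
    minimize=True,   # lower divergence is better
    total_trials=20,
)
```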
- Lin, Mei, and Christopher William Dula. "Grab taxi: Navigating new frontiers." (2016): 40.
- Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. MIT press, 2018.
- Bishop, Christopher M. "Mixture density networks." (1994).
- Mirza, Mehdi, and Simon Osindero. "Conditional generative adversarial nets." arXiv preprint arXiv:1411.1784 (2014).
- Blundell, Charles, et al. "Weight uncertainty in neural networks." arXiv preprint arXiv:1505.05424 (2015).
- Gal, Yarin, and Zoubin Ghahramani. "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning." International conference on machine learning. 2016.
- Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. "Simple and scalable predictive uncertainty estimation using deep ensembles." Advances in neural information processing systems. 2017.
- Fort, Stanislav, Huiyi Hu, and Balaji Lakshminarayanan. "Deep ensembles: A loss landscape perspective." arXiv preprint arXiv:1912.02757 (2019).
- Chang, Daniel T. "Bayesian Hyperparameter Optimization with BoTorch, GPyTorch and Ax." arXiv preprint arXiv:1912.05686 (2019).
- Dataset at https://www.kaggle.com/aungpyaeap/supermarket-sales.
- Dataset at https://www.kaggle.com/binovi/wholesale-customers-data-set.
- Dillon, Joshua V., et al. "Tensorflow distributions." arXiv preprint arXiv:1711.10604 (2017).