
# Harnessing Structures for Value-Based Planning and Reinforcement Learning

This repository contains the implementation code for the paper [Harnessing Structures for Value-Based Planning and Reinforcement Learning](https://openreview.net/forum?id=rklHqRVKvH) (ICLR 2020, Oral).

This work proposes a generic framework for exploiting the underlying low-rank structure of the state-action value function (Q function), in both planning and deep reinforcement learning. We empirically verify that low-rank Q functions arise widely in control and deep RL tasks. Specifically, we propose (1) Structured Value-based Planning (SVP), for classical stochastic control and planning tasks, and (2) Structured Value-based Deep Reinforcement Learning (SV-RL), applicable to any value-based technique to improve performance on deep RL tasks.
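
To make the core idea concrete, below is a toy, self-contained sketch (not the repository's code) of the reconstruction step that SVP builds on: given only a subset of the entries of a low-rank Q matrix, a soft-impute style iteration fills in the rest. The sizes, sampling ratio, and shrinkage value are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy |S| x |A| "Q matrix", exactly rank 3 by construction.
n_states, n_actions, rank = 200, 50, 3
Q = rng.normal(size=(n_states, rank)) @ rng.normal(size=(rank, n_actions))

# Pretend we evaluated only 40% of the (state, action) pairs.
observed = rng.random(Q.shape) < 0.4

# Soft-impute: alternately fill the missing entries with the current
# estimate and soft-threshold the singular values.
X = np.where(observed, Q, 0.0)
for _ in range(200):
    U, s, Vt = np.linalg.svd(np.where(observed, Q, X), full_matrices=False)
    X = (U * np.maximum(s - 1.0, 0.0)) @ Vt

rel_err = np.linalg.norm((X - Q)[~observed]) / np.linalg.norm(Q[~observed])
print(f"relative error on unobserved entries: {rel_err:.3f}")
```

Roughly speaking, SVP runs value iteration while updating only a sampled subset of (state, action) pairs at each step, and reconstructs the full Q function with a matrix estimation step of this kind.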

## Installation

### Prerequisites

The current code has been tested on Ubuntu 16.04, for both SVP and SV-RL.

- **SVP:** The SVP part is implemented mainly in Julia (with a small part in Python) for several classical stochastic control tasks. We use Julia v0.7.0, which can be downloaded here.
- **SV-RL:** We provide a PyTorch implementation of SV-RL for deep reinforcement learning tasks (a conceptual sketch follows the note below).

**Note:** We tested the SVP implementation on Julia v0.7.0, which is not the latest version (and is no longer maintained). You may choose a later version of Julia if needed, but we have not tested other versions.
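
On the deep RL side, the following sketch (again illustrative, not the repository's code) shows where such a reconstruction step can plug into a standard DQN-style target computation; `q_next`, the keep ratio, and the `soft_impute` helper are hypothetical stand-ins for the matrix estimation methods used in the paper.

```python
import numpy as np

def soft_impute(M, observed, shrink=0.5, iters=100):
    """Fill the unobserved entries of M with a low-rank estimate
    (same soft-thresholded SVD iteration as in the sketch above)."""
    X = np.where(observed, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(np.where(observed, M, X), full_matrices=False)
        X = (U * np.maximum(s - shrink, 0.0)) @ Vt
    return np.where(observed, M, X)

rng = np.random.default_rng(0)

# Hypothetical batch of target-network Q-values, shape (batch, actions),
# made approximately low-rank to match the structure the paper exploits.
q_next = rng.normal(size=(32, 3)) @ rng.normal(size=(3, 18)) \
         + 0.01 * rng.normal(size=(32, 18))

# Keep a random subset of entries, reconstruct the rest, then form the
# usual max-over-actions values for the TD target.
keep = rng.random(q_next.shape) < 0.9
q_hat = soft_impute(q_next, keep)
target_values = q_hat.max(axis=1)  # combine with rewards as usual
```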

### Dependencies for SVP

After installing Julia, use its built-in package manager to install the following dependencies:

```julia
using Pkg
Pkg.add("IJulia")              # Jupyter notebook support for Julia
Pkg.add("PGFPlots")            # plotting
Pkg.add("GridInterpolations")  # multilinear grid interpolation
Pkg.add("PyCall")              # call Python from Julia
Pkg.add("ImageMagick")         # image loading and saving
```

### Dependencies for SV-RL

You can install the dependencies for SV-RL with:

```bash
pip install -r requirements.txt
```

## Experiments

## Acknowledgements

We use the implementation in the [fancyimpute](https://github.com/iskandr/fancyimpute) package for part of our matrix estimation algorithms. The implementation of SVP is partly based on this work.
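
For example, fancyimpute's SoftImpute solver can fill in the missing entries of a partially observed matrix directly (the matrix below is made up):

```python
import numpy as np
from fancyimpute import SoftImpute

# Partially observed matrix; missing entries are NaN.
Q_incomplete = np.array([
    [1.0,    np.nan, 3.0],
    [np.nan, 5.0,    6.0],
    [7.0,    8.0,    np.nan],
])

# SoftImpute replaces the NaNs with a low-rank reconstruction.
Q_filled = SoftImpute(verbose=False).fit_transform(Q_incomplete)
print(Q_filled)
```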

## Citation

If you find the idea or code useful for your research, please cite our paper:

```bib
@inproceedings{
  yang2020harnessing,
  title={Harnessing Structures for Value-Based Planning and Reinforcement Learning},
  author={Yuzhe Yang and Guo Zhang and Zhi Xu and Dina Katabi},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=rklHqRVKvH}
}
```