
Commit

remove experiment_dm.py, add instructions for dm_control to README.md
Former-commit-id: bcf31f066104b5c2b2282c2c2800b72d1ab292fb [formerly c879b9f01e402eb60ed1ddca6cc9b4daa3648043]
Former-commit-id: ab6662631145ce85aa21abf633bf2e63c53061bb
zuoxingdong committed May 14, 2019
1 parent 3a6b4f8 commit 3ebdae9
Showing 4 changed files with 22 additions and 291 deletions.
22 changes: 22 additions & 0 deletions baselines/README.md
@@ -18,3 +18,25 @@ This example includes the implementations of the following reinforcement learning

## RL
<img src='benchmark_rl.png' width='100%'>

## FAQ:
- How to train with [dm_control](https://github.com/deepmind/dm_control) environments?
- Modify `experiment.py`: use the [dm2gym](https://github.com/zuoxingdong/dm2gym) wrapper, e.g. (a fuller `make_env` sketch follows the snippet below)
```python
from gym.wrappers import FlattenDictWrapper
from dm_control import suite
from dm2gym import DMControlEnv

config = Config(
    ...
    'env.id': Grid([('cheetah', 'run'), ('hopper', 'hop'), ('walker', 'run'), ('fish', 'upright')]),
    ...
    )

def make_env(config, seed):
    domain_name, task_name = config['env.id']
    env = suite.load(domain_name, task_name, environment_kwargs=dict(flat_observation=True))
    env = DMControlEnv(env)
    env = FlattenDictWrapper(env, ['observations'])
    ...
```
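  For reference, a fuller `make_env` might look like the following minimal sketch. The seeding via `task_kwargs=dict(random=seed)` and the final `return env` are illustrative assumptions rather than part of the original snippet, and any additional lagom-specific wrappers applied in `experiment.py` are omitted here.
  ```python
  from gym.wrappers import FlattenDictWrapper
  from dm_control import suite
  from dm2gym import DMControlEnv

  def make_env(config, seed):
      # Split the Grid entry, e.g. ('cheetah', 'run'), into domain and task names.
      domain_name, task_name = config['env.id']
      # Seeding through task_kwargs is an assumption; flat_observation=True makes
      # dm_control return the observation as a single array under 'observations'.
      env = suite.load(domain_name, task_name,
                       task_kwargs=dict(random=seed),
                       environment_kwargs=dict(flat_observation=True))
      env = DMControlEnv(env)  # dm2gym: expose the dm_control env through the gym API
      env = FlattenDictWrapper(env, ['observations'])  # dict obs -> flat Box obs
      return env
  ```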
96 changes: 0 additions & 96 deletions baselines/ddpg/experiment_dm.py

This file was deleted.

96 changes: 0 additions & 96 deletions baselines/sac/experiment_dm.py

This file was deleted.

99 changes: 0 additions & 99 deletions baselines/td3/experiment_dm.py

This file was deleted.
