
Documentation file #52

Open
haneenhassen opened this issue Oct 24, 2023 · 0 comments


@haneenhassen

Dear Ecoffet,
As an MSc student, I am currently implementing the explore method of the MDPO algorithm, as described in your paper "Mirror Descent Policy Optimization" (https://arxiv.org/pdf/2005.09814.pdf). I have been trying to locate the documentation for this method, but I have been unable to find it.
I would greatly appreciate any instructions or guidance on how to implement the explore method in MDPO.
Thank you in advance for your assistance. I am eager to learn and apply this method to further enhance the MDPO algorithm.
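For context, here is my current reading of the on-policy MDPO update from the paper, a ratio-weighted advantage surrogate with a KL(π_θ ‖ π_θ_k) penalty scaled by 1/t_k, written as a minimal PyTorch sketch. The function name and tensor arguments are placeholders of mine, not names from your codebase:

```python
import torch


def mdpo_surrogate_loss(new_logp: torch.Tensor,
                        old_logp: torch.Tensor,
                        advantages: torch.Tensor,
                        tk: float) -> torch.Tensor:
    """Negated on-policy MDPO objective for one inner SGD step.

    new_logp, old_logp: log pi_theta(a|s) and log pi_theta_k(a|s) for a
    batch of (s, a) pairs collected with the current policy pi_theta_k.
    advantages: advantage estimates A^{theta_k}(s, a).
    tk: mirror-descent step size t_k; the KL penalty is weighted by 1/t_k.
    """
    log_ratio = new_logp - old_logp
    ratio = torch.exp(log_ratio)
    # Importance-weighted policy-gradient surrogate.
    surrogate = (ratio * advantages).mean()
    # Sample-based estimate of KL(pi_theta || pi_theta_k): under samples
    # drawn from pi_theta_k, E[ratio * log_ratio] equals the forward KL.
    kl = (ratio * log_ratio).mean()
    # Ascend the penalized surrogate, i.e. minimize its negation.
    return -(surrogate - kl / tk)
```

Any pointer on how the explore method relates to, or modifies, this update would already help a lot.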
Gratefully,
Haneen
