
Add experimentation over different exploration strategies (Experimentation.py) #115

Open
sidsen opened this issue Nov 28, 2017 · 0 comments

sidsen commented Nov 28, 2017

Currently, the experimentation script uses cb_adf to optimize a CB policy from exploration data. This ignores the cost of exploration and hence reports misleading numbers to the user (alternatively, the user should be advised to use the script only to compare the relative performance of parameter settings, not to read the absolute numbers). We should move towards using cb_explore_adf, since this reflects what users will see in production, but then the problem becomes deciding which exploration algorithm to use. The algorithm can currently be specified as an input parameter, but the script should eventually automate this choice.
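To make the difference concrete, here is a sketch of the two invocations (the data file name is a placeholder; the exploration flags shown are the standard cb_explore_adf options, and the specific values are illustrative, not recommendations):

```shell
# Current behavior: optimize a policy offline, ignoring exploration cost.
vw --cb_adf -d exploration_data.dat

# Proposed behavior: evaluate with an exploration algorithm in the loop,
# which is what the user would actually deploy. Pick one of, e.g.:
vw --cb_explore_adf --epsilon 0.05 -d exploration_data.dat   # epsilon-greedy
vw --cb_explore_adf --bag 5 -d exploration_data.dat          # bagging
vw --cb_explore_adf --softmax --lambda 10 -d exploration_data.dat
```

The gap the issue describes is that the cb_adf numbers are a best-case offline estimate, while any cb_explore_adf run pays for its exploration in the reported loss.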

Question: how do we deal with the fact that increasing exploration (e.g. a higher epsilon in epsilon-greedy) will naturally lead to worse online performance? Perhaps we need to specify an allowed degradation budget relative to the best policy trained by cb_adf.
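The degradation-budget idea can be sketched in a few lines. This is a hypothetical illustration, not part of the script: it assumes known per-action mean rewards and shows how one could pick the largest epsilon whose expected epsilon-greedy reward stays within a budget of the greedy (cb_adf-like) policy's reward.

```python
def epsilon_greedy_value(means, epsilon):
    """Expected per-round reward of epsilon-greedy, given true mean rewards:
    with prob. (1 - epsilon) play the best action, else a uniform-random one."""
    best = max(means)
    uniform = sum(means) / len(means)
    return (1 - epsilon) * best + epsilon * uniform

def largest_epsilon_within_budget(means, budget, grid=None):
    """Largest epsilon on a grid whose expected reward degrades by at most
    `budget` relative to the greedy policy (epsilon = 0)."""
    grid = grid or [i / 100 for i in range(101)]
    best = max(means)
    feasible = [e for e in grid
                if best - epsilon_greedy_value(means, e) <= budget]
    return max(feasible)

# Hypothetical per-action mean rewards and a 0.05 degradation budget.
means = [0.9, 0.5, 0.1]
print(largest_epsilon_within_budget(means, budget=0.05))
```

In practice the script would estimate these values from logged data (e.g. by progressive validation with cb_explore_adf) rather than from known means, but the selection rule is the same: maximize exploration subject to the budget.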
