
Allow for freeze-thaw of configurations where possible #143

Open
Neeratyoy opened this issue Apr 19, 2022 · 3 comments
Labels: enhancement (New feature or request)

Comments

@Neeratyoy
Collaborator

To allow for a wider multi-fidelity scope, it would be nice to optionally restart configurations from a model checkpoint. This would be applicable to the tree-based search spaces (RandomForest, XGB) and the neural network spaces. That is, if objective_function is called for a configuration that has already been evaluated at some fidelity, but now at a higher, unseen fidelity, the function should load the model checkpoint from the lower fidelity and continue training up to the higher fidelity (adding more trees or training for more epochs). In such a case, the returned costs would indicate only the cost involved in continuing the training.
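As a purely illustrative sketch of what "continue training" could mean for a tree-based space, scikit-learn's warm_start already supports growing an existing forest; none of the benchmark/API names around this are fixed yet:

```python
# Illustrative only: continuing training of a RandomForest by adding trees,
# using scikit-learn's warm_start, instead of refitting from scratch.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)

# First call: evaluate the configuration at a low fidelity (10 trees).
model = RandomForestClassifier(n_estimators=10, warm_start=True, random_state=0)
start = time.time()
model.fit(X, y)
cost_low_fidelity = time.time() - start

# Later call at a higher fidelity (50 trees): reuse the checkpointed model and
# only grow the additional 40 trees; the returned cost would cover just this part.
model.n_estimators = 50
start = time.time()
model.fit(X, y)
cost_continuation = time.time() - start
```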

Implementing this in HPOBench would require careful API design that does not break the existing APIs, as well as working out how model loading and saving can be managed through the Docker interface. It also needs to be decided whether the function evaluation costs should account for the model I/O to disk, since ignoring it might distort the true cost of querying the benchmark.

Neeratyoy added the enhancement label on Apr 19, 2022
@KEggensperger
Contributor

Yes, this would be a useful addition, and in principle it could even work for all benchmarks, including the tabular ones.

Here are a few things we should keep in mind when designing the API: this will make the benchmarks stateful and more complex, and it will require additional memory, since the benchmarks need to store which configs/seeds have been trained on which budget. There are several options for this, e.g. a file, a database, or in-memory storage, each with its own pros and cons.
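For illustration, the file-backed option could look roughly like the following; the function name and JSON layout are placeholders, not a proposed API:

```python
# One possible backend for the (config, seed) -> budget bookkeeping: a small JSON
# file on disk. Purely illustrative; an in-memory dict or a database would work
# just as well, each with different trade-offs (persistence, speed, concurrency).
import json
import os


def record_budget(state_file, config_hash, seed, budget):
    """Remember the highest budget this (config, seed) pair has been trained on."""
    state = {}
    if os.path.exists(state_file):
        with open(state_file) as fh:
            state = json.load(fh)
    key = f"{config_hash}-{seed}"
    state[key] = max(budget, state.get(key, 0))
    with open(state_file, "w") as fh:
        json.dump(state, fh)
```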

Additionally, this will interfere with optimizers evaluating configurations in parallel and raises further questions, e.g. whether and how to share the state among evaluations.

@PhMueller
Collaborator

Hey guys,

This sounds like a very useful addition.

I know this might be a special scenario of freeze-thaw, but there are some scenarios in which "continue training" could simply mean loading some weights, e.g. the weights of a neural network. This particular use case is potentially easy to tackle and might be a good starting point.

We could solve this case by implementing some hooks that are called before or after the objective_function call: on_train_start, on_train_finish, ...

These functions could be used to load or save the model weights to a directory defined in the hpobenchconfig.
We would integrate these functions into the workflow of calling the objective function: the user only calls the objective_function, and HPOBench automatically looks for a potential savepoint and loads it if present. After the result is returned, it calls a second function to save the current state of the model.
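A minimal sketch of these hooks, assuming the checkpoint directory comes from the HPOBench config (analogous to the data directory for the tabular benchmarks); the class and method names are only placeholders:

```python
# Rough sketch of the hook idea; on_train_start/on_train_finish and the
# checkpoint-path helper are hypothetical names, not existing HPOBench methods.
import hashlib
import json
import os
import pickle


class CheckpointHooks:
    def __init__(self, checkpoint_dir):
        self.checkpoint_dir = checkpoint_dir
        self.model = None
        os.makedirs(checkpoint_dir, exist_ok=True)

    def _checkpoint_path(self, configuration):
        cfg_hash = hashlib.sha1(json.dumps(configuration, sort_keys=True).encode()).hexdigest()
        return os.path.join(self.checkpoint_dir, f"{cfg_hash}.pkl")

    def on_train_start(self, configuration):
        """Called before the objective_function: load a savepoint if one exists."""
        path = self._checkpoint_path(configuration)
        if os.path.exists(path):
            with open(path, "rb") as fh:
                self.model = pickle.load(fh)

    def on_train_finish(self, configuration):
        """Called after the objective_function: persist the current model state."""
        with open(self._checkpoint_path(configuration), "wb") as fh:
            pickle.dump(self.model, fh)
```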

We could also create a second "baseclass" that makes clear that the benchmark does not support freeze-thaw.

However, @KEggensperger and I have already discussed that there are some cases for which that easy solution does not work.
But as mentioned first, it could be a first starting point. Freeze-thaw in the sense of "moving a process to the background" is pretty hard for containers, or at least it is not clear to me how to do it.

What do you think?

@Neeratyoy
Collaborator Author

Thank you for your valuable input and for raising important points. Given that you both have thought about this for much longer, I would definitely like to have a more detailed conversation once I have a basic design or a prototype ready.


@KEggensperger

requires additional memory...

For the first iteration, I was thinking of using some in-memory data structure of the unique configurations evaluated and their corresponding latest evaluated fidelity. One immediate issue would be how to handle duplicate queries. We could also include a reset() function to clear the memory and bring the object back to its initialization state.
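Something like the following minimal sketch is what I have in mind; all names are illustrative:

```python
# Minimal in-memory bookkeeping for the first iteration.
class FidelityMemory:
    def __init__(self):
        self._seen = {}  # config identifier -> highest fidelity evaluated so far

    def latest(self, config_id):
        """Return the latest fidelity this configuration was evaluated on, or None."""
        return self._seen.get(config_id)

    def update(self, config_id, fidelity):
        # Duplicate queries at the same or a lower fidelity are simply ignored here;
        # how to handle them properly is one of the open questions.
        self._seen[config_id] = max(fidelity, self._seen.get(config_id, 0))

    def reset(self):
        """Clear the memory, bringing the object back to its initialization state."""
        self._seen.clear()
```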

could even work for all benchmarks including the tabular ones

But we won't need this feature for tabular benchmarks, right?

interfere with optimizers evaluating configurations

Definitely a concern. However, just as we currently say that we don't support freeze-thaw, could we later claim that we support freeze-thaw only in the single-worker setup?


@PhMueller

might be a good starting point

Indeed, and I agree. For the first iteration, I want to take the sklearn MLP space we added and allow freeze-thaw for that. That way we can test it without changing the base class and affecting the other spaces.

by implementing some hooks

I'm not too familiar with implementing hooks, but having used them, this sounds apt.

save the model weights to a directory defined in the hpobenchconfig

Again, this sounds perfect as a starting point, analogous to how the data directory for the tabular benchmarks is managed. In the long run, I wonder if it might be useful to give the user a more flexible option to change the directory, since I am not sure how much the stored models grow over long runs of a single HPOBench instantiation. Secondly, given that we don't allow checkpointing and resuming of HPOBench runs, we might want to define some operations in __del__ (or something better) to clean the directory storing the models from the objective_function calls, or simply clean the directory at __init__.
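A hypothetical sketch of the cleanup idea, with purely illustrative names:

```python
# Since HPOBench runs are not checkpointed/resumed, stale model checkpoints could
# be removed either at __init__ or at __del__.
import os
import shutil


class CheckpointDirectoryManager:
    def __init__(self, checkpoint_dir):
        self.checkpoint_dir = checkpoint_dir
        # Clean up anything left over from a previous instantiation.
        shutil.rmtree(self.checkpoint_dir, ignore_errors=True)
        os.makedirs(self.checkpoint_dir, exist_ok=True)

    def __del__(self):
        # __del__ is not guaranteed to run (e.g. at interpreter shutdown), so the
        # __init__-time cleanup above is the more reliable of the two options.
        shutil.rmtree(self.checkpoint_dir, ignore_errors=True)
```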

Freeze-Thaw as "moving a process in the background" is pretty hard for containers

This is something I have heard a few times now, and I probably need to work with containers more to fully understand why. Can a process running in a container not access files saved locally outside the container? Isn't that essentially what the TabularBenchmark does?

But as mentioned first, it could be a first starting point.

I totally agree.


@KEggensperger @PhMueller

All of the above points are w.r.t. the first iteration of freeze-thaw for the MLP space, which we could try first.

Would be great to hear your thoughts on the same!
