Hi,

I was testing whether I could fit a `SymbolicRegressor` for, say, 1000 generations, inspect the Pareto front, and then continue training for another 1000 generations. However, it seems that if I do

```python
reg = SymbolicRegressor()
reg.fit(X_train, y_train)
```

play with `reg`, and then call

```python
reg.fit(X_train, y_train)
```

again, the `reg` object is the same as before the second `fit` call. Given that `reg` keeps a Pareto front, shouldn't I be able to continue fitting in an online-learning / mini-batch / `partial_fit` fashion? I'm trying to brute-force a workaround for the lack of callbacks (see #18).
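For concreteness, the behavior I'm after is roughly scikit-learn's `warm_start` convention; here is what it looks like on an estimator that already supports it (illustrative only, `SymbolicRegressor` has no such flag today):

```python
# scikit-learn's warm_start convention, shown on an estimator that
# already supports it: raising the budget and refitting continues
# training instead of restarting from scratch.
from sklearn.ensemble import GradientBoostingRegressor

gbr = GradientBoostingRegressor(n_estimators=100, warm_start=True)
gbr.fit(X_train, y_train)   # first 100 boosting rounds
gbr.n_estimators = 200      # raise the budget...
gbr.fit(X_train, y_train)   # ...and continue from round 101
```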
Cheers
Hi, this looks like an easy improvement, but it will still require some changes to the C++ library. Currently, each call to `fit` initializes a new C++ algorithm, runs it, and keeps some stats and results (like the Pareto front) from it, but once `fit` returns, the underlying C++ object no longer exists. However, it should be easy enough to implement a kind of warm-start mechanism.
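Roughly, the idea would be to keep the final population on the Python side and seed the next run with it. A minimal sketch of what I mean, where `_make_algorithm`, `initial_population`, and the attributes are all made-up names for plumbing that doesn't exist yet:

```python
# Illustrative pseudocode only -- none of these names exist in the
# library today; they stand in for the C++ plumbing that would be needed.
class SymbolicRegressor:
    def fit(self, X, y):
        if getattr(self, "warm_start", False) and hasattr(self, "individuals_"):
            # Seed the fresh C++ algorithm with the survivors of the
            # previous run instead of a random initial population.
            algo = self._make_algorithm(X, y, initial_population=self.individuals_)
        else:
            algo = self._make_algorithm(X, y)
        algo.run()
        # Keep the final population and Pareto front around so a later
        # fit() call can resume from them.
        self.individuals_ = list(algo.population)
        self.pareto_front_ = algo.pareto_front
        return self
```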
Yes, a warm-start mechanism would be super useful! I'm already thinking about the possibility of using things like Hyperband, which ideally needs a warm-start mechanism.