L2 regularization for constant optimization #23
Comments
Hi, regularization is currently not supported, but we can look into adding it. Is there an official paper or a link to the pytorch implementation?
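For reference, this is roughly how weight_decay is passed to these optimizers in PyTorch; the toy parameter vector and placeholder loss below are purely illustrative and have nothing to do with Operon's internals:

```python
import torch

# A toy parameter vector stands in for the constants of an expression.
theta = torch.nn.Parameter(torch.tensor([5.0, -3.0, 0.5]))

# In PyTorch, weight_decay adds weight_decay * theta to the gradient,
# i.e. an L2 penalty on the optimized parameters.
opt = torch.optim.Adamax([theta], lr=2e-3, weight_decay=1e-2)
# amsgrad is a flag on Adam rather than a separate optimizer class:
# opt = torch.optim.Adam([theta], lr=1e-3, weight_decay=1e-2, amsgrad=True)

loss = (theta ** 2).sum()  # placeholder loss
loss.backward()
opt.step()
```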
Note that regularization can be misleading or unwanted for expressions with nonlinear parameters that GP might produce. Regularizing the raw parameters may give an evolutionary advantage to nonlinear transformations of parameters that allow 'virtual' large parameters.
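To make the 'virtual' parameter concern concrete, a toy numeric example (the exp primitive and the numbers are purely illustrative):

```python
import numpy as np

# In the model exp(c) * x, an L2 penalty on the raw parameter c barely
# notices c = 10, yet the effective constant multiplying x is huge.
c = 10.0
raw_penalty = c ** 2            # 100.0 is what the regularizer would penalize
effective_constant = np.exp(c)  # ~22026.5 is what actually scales x
print(raw_penalty, effective_constant)
```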
You are right, but this is true only for specific primitives, right? In other words, if the primitive set is, e.g., allowed_symbols = "add,sub,mul,sin,cos,constant,variable", there is no such problem.
Yes, in this case this is less of a problem.
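A sketch of the restricted setting discussed above, assuming the allowed_symbols keyword quoted earlier is accepted by pyoperon's sklearn wrapper (all other settings left at their defaults, data is synthetic):

```python
import numpy as np
from pyoperon.sklearn import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = 3.0 * np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, size=200)

# Without exp/log-style primitives, a penalty on the raw constants cannot be
# sidestepped by 'virtual' large parameters hidden inside nonlinear transforms.
reg = SymbolicRegressor(allowed_symbols="add,sub,mul,sin,cos,constant,variable")
reg.fit(X, y)
print(reg.predict(X[:5]))
```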
By the way, here are the implementations available in Operon: currently not all of them are exposed in the Python wrapper, but it's trivial to add them.
Hi,
is there a way to penalize the magnitude of the constants (via, e.g., L2 regularization)? I am trying to fit a SymbolicRegressor with some noisy data and sometimes I get very large values for some constants. I looked inside the library and it seems that it is possible to choose adamax and amsgrad as the constant optimizer. The pytorch implementation of these optimizers has a weight_decay parameter for $L^2$ regularization, and I was wondering whether the same is possible in pyoperon.
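Until something like weight_decay is exposed, one way to get a similar effect outside the library is to refit the constants of a fixed expression with an explicit $L^2$ penalty on them. The sketch below is generic: the expression, data, and scipy-based refit are illustrative and not part of pyoperon.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy data and a fixed model structure f(x; c) = c0 * sin(c1 * x) + c2,
# standing in for an expression produced by the symbolic regressor.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 300)
y = 2.0 * np.sin(1.5 * x) + 0.3 + rng.normal(0, 0.2, 300)

def f(c, x):
    return c[0] * np.sin(c[1] * x) + c[2]

lam = 1e-2  # L2 regularization strength

def residuals(c):
    # Appending sqrt(lam) * c to the residual vector is equivalent to
    # minimizing ||y - f(x; c)||^2 + lam * ||c||^2 (ridge-style penalty).
    return np.concatenate([f(c, x) - y, np.sqrt(lam) * c])

res = least_squares(residuals, x0=np.ones(3))
print(res.x)
```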