The sequential thresholding (the ST of STLSQ) handles the sparsification. I can't speak to the historical reasons, but you can directly use LASSO or other sklearn optimizers, e.g.:

```python
from pysindy import WrappedOptimizer
from sklearn.linear_model import Lasso

optimizer = WrappedOptimizer(
    Lasso(alpha=0.1, fit_intercept=False, max_iter=1), unbias=False
)
```
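For context, here is a minimal standalone sketch (without pysindy, using a hand-built candidate library on synthetic data, so all names and values here are illustrative assumptions) of how a plain LASSO solve promotes sparsity in the SINDy-style regression problem Θ(X)ξ ≈ Ẋ:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))  # toy state samples (hypothetical data)

# Candidate library Θ(X): [x0, x1, x0^2, x0*x1, x1^2]
theta = np.column_stack(
    [x[:, 0], x[:, 1], x[:, 0] ** 2, x[:, 0] * x[:, 1], x[:, 1] ** 2]
)

# Synthetic derivatives generated from a sparse "true" coefficient vector
xi_true = np.array([0.0, -2.0, 0.0, 0.5, 0.0])
dxdt = theta @ xi_true

# The L1 penalty drives coefficients of inactive library terms to zero
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
lasso.fit(theta, dxdt)
print(lasso.coef_)  # approximately xi_true, with the inactive terms near zero
```

The L1 penalty shrinks the active coefficients slightly (here by roughly `alpha` divided by each column's mean square), which is one practical reason the `unbias` option exists: refit the selected terms by ordinary least squares after selection.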
-
While going through the STLSQ code, I noticed that L2 regularization is used.
I also looked at the paper Brunton, Steven L., Joshua L. Proctor, and J. Nathan Kutz, "Discovering governing equations from data by sparse identification of nonlinear dynamical systems," which shows the code for STLSQ but does not include regularization.
I was wondering why L2 regularization is used in the SINDy STLSQ. Since L1 regularization promotes sparsity, would it not be a better choice? Is the L2 term related to stability and convergence? Can anyone explain this, or point to resources that do?
Thanks!
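To make the question concrete, the iteration being asked about can be sketched in plain NumPy (a simplified single-target sketch of the STLSQ idea, not pysindy's actual implementation; the function name, defaults, and toy data are assumptions). The ridge (L2) term only conditions each least-squares solve when the library columns are nearly collinear; sparsity comes from the hard threshold between solves:

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, alpha=0.05, max_iter=10):
    """Sequentially thresholded least squares with a ridge penalty.

    The alpha * I term regularizes the normal equations (numerical
    stability for ill-conditioned libraries); sparsity comes from
    zeroing coefficients below `threshold`, not from the penalty.
    """
    n_features = theta.shape[1]
    xi = np.linalg.solve(
        theta.T @ theta + alpha * np.eye(n_features), theta.T @ dxdt
    )
    for _ in range(max_iter):
        small = np.abs(xi) < threshold  # the "ST": hard thresholding
        xi[small] = 0.0
        big = ~small
        if not big.any():
            break
        sub = theta[:, big]
        # Ridge-regularized least squares on the surviving terms only
        xi[big] = np.linalg.solve(
            sub.T @ sub + alpha * np.eye(big.sum()), sub.T @ dxdt
        )
    return xi
```

On noiseless data this recovers the sparse coefficient vector almost exactly, since the ridge shrinkage is tiny relative to the Gram matrix, which illustrates why the L2 term does not need to (and cannot) do the sparsification itself.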