I think it would be better to separate the optimisation from the actual learned model (the regressors).
Reason: What's not so nice right now is that we store the SupervisedDescentOptimiser to the disk, with all its type information about the solver (e.g. LinearRegressor<PartialPivLUSolver>), and we store the regularisation as well.
Both the solver and the regularisation are only relevant at training time: there's no need to store them, and we shouldn't need to know the type of the solver when we load the model.
Also, it would mean a user that just wants to use the landmark detection (and not train a model) wouldn't need Eigen, because Eigen is only needed in LinearRegressor::learn() and not in predict().
Regarding the regulariser, we could just choose to exclude it from the serialisation, but I don't think it's very intuitive to only serialise half of a class. I think there must be a better solution that solves the other shortcomings as well. Maybe we can even just make SupervisedDescentOptimiser::train() a free function and get rid of the class.
A related project, tiny-cnn, doesn't separate the model from the optimisation, but I kind of feel like we should.
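The split could look something like this minimal sketch (all names and signatures here are hypothetical illustrations, not the library's actual API): the serialisable model holds only what `predict()` needs, while the solver and the regularisation appear only as parameters of a free `train()` function.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical model type: it stores only the learned coefficients that
// predict() needs -- no solver type parameter, no regularisation value --
// so it can be serialised and loaded without any knowledge of how it was
// trained.
struct LinearModel {
    std::vector<float> weights;

    float predict(const std::vector<float>& x) const {
        float y = 0.0f;
        for (std::size_t i = 0; i < weights.size(); ++i)
            y += weights[i] * x[i];
        return y;
    }
};

// Hypothetical free function replacing SupervisedDescentOptimiser::train().
// The solver (e.g. an Eigen-based least-squares solve) and the
// regularisation are plain parameters: they exist only at training time
// and leave no trace in the returned model.
template <typename Solver>
LinearModel train(const std::vector<std::vector<float>>& data,
                  const std::vector<float>& labels,
                  float regularisation,
                  Solver solve) {
    // The solver is any callable that produces the weight vector.
    return LinearModel{solve(data, labels, regularisation)};
}
```

With a split along these lines, only the training translation unit would need Eigen; a user who only loads a model and calls `predict()` would depend on neither the solver nor the regulariser.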
@patrikhuber, when training, do more images always give a better result? I find that training on 300 images gives a better result than training on 800. Could training on a large number of images cause overfitting? Roughly how many images gave the best result in your tests, and is your test set from the iBUG website or somewhere else? Thanks so much.