Hi, I have a couple of questions regarding the uncertainties in the fits when I use a mean function model. Specifically, I am trying to fit the light curves of Type Ia supernovae. For the GP mean function, I am using equation (7) of Zheng et al. (2018), which I implemented using `george.modeling.Model`. As a kernel I use:
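(A simplified sketch with placeholder numbers; the squared-exponential factor here just stands in for the second term that I multiply the constant kernel by.)

```python
from george import kernels

# Placeholder bounds -- the real values are chosen from the light curve.
bounds_var = [(-10.0, 10.0)]     # bounds on k1's log_constant
bounds_length = [(-5.0, 10.0)]   # bounds on the (log) length-scale metric

# Constant (amplitude) kernel times a squared-exponential kernel.
k1 = kernels.ConstantKernel(log_constant=0.0, bounds=bounds_var)
k2 = kernels.ExpSquaredKernel(metric=10.0, metric_bounds=bounds_length)
kernel = k1 * k2
```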
where `bounds_var` and `bounds_length` are the bounds for the respective hyperparameters. I also included bounds for the parameters of the mean function.
Usually, this works well. However, sometimes the uncertainties in the fits come out much smaller than those of the data (almost zero, as if the fit were trusting the mean function blindly). This does not seem to depend solely on the size of the data uncertainties. If I don't use the mean function model and instead use a constant mean value (as `george` does by default), I get more realistic errors in the fits. I don't know why this happens.
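For context, the GP is set up and queried roughly like this (a simplified sketch; `mean_model` stands for my `george.modeling.Model` implementation of eq. (7), and `t`, `flux`, `flux_err`, `t_pred` are placeholder names for the light-curve data and the prediction grid):

```python
import numpy as np
import george

# Simplified setup: the mean model is fit together with the kernel
# hyperparameters (fit_mean=True), and the data uncertainties enter
# through compute().
gp = george.GP(kernel, mean=mean_model, fit_mean=True)
gp.compute(t, flux_err)

# After optimizing the parameters, I take the fit uncertainties from
# the predictive variance.
mu, var = gp.predict(flux, t_pred, return_var=True)
std = np.sqrt(var)
```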
My second question is regarding the bounds. Sometimes, when the value of one of the hyperparameters falls outside the given bounds, I get the error "non-finite log prior value". This usually happens with the constant kernel (`k1`) defined above. Shouldn't the bounds prevent this from happening? I haven't had this issue with the parameters of the mean function; it always seems to be `k1`.
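To illustrate where the error shows up, the objective I evaluate looks roughly like this (a sketch of the usual pattern rather than my exact code):

```python
import numpy as np

def log_prob(params):
    # gp.log_prior() returns -inf when a parameter sits outside its
    # bounds, so out-of-bounds proposals should just be rejected here.
    gp.set_parameter_vector(params)
    lp = gp.log_prior()
    if not np.isfinite(lp):
        return -np.inf
    return lp + gp.log_likelihood(flux, quiet=True)
```

Thanks!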