Parameter fitting to multiple experiments not working #1073
Have you tried fitting the parameters using PEtab (https://github.com/sebapersson/PEtab.jl?tab=readme-ov-file)? Generally, it is more robust than using DiffEqParamEstim. I'd probably recommend it as the go-to choice, and I think it should work for this application.
We don't currently have a PEtab tutorial, it seems.
@lisa-schroeder this might help: https://docs.sciml.ai/ModelingToolkit/stable/examples/remake/. ModelingToolkit has made a number of changes to how parameters are stored, accessed, and queried, particularly when used in higher-level applications, and the parameters you are optimizing now go through that newer interface. We can try to adapt that tutorial to Catalyst to make it easier to see how it is done for a Catalyst model (and post an update here for you), but it may take a little time.
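As a rough illustration of what that newer interface looks like, here is a minimal sketch (not taken from the tutorial, names like rn_demo are illustrative, and it assumes a recent Catalyst/ModelingToolkit/SciML stack) using the reaction network from this issue: parameters are read with prob.ps[...] and read/written in bulk via accessors from SymbolicIndexingInterface. Note that the in-place setter will generally not accept ForwardDiff.Dual values; the linked remake tutorial covers the extra steps needed when differentiating through the solve.
# sketch: symbolic parameter access on a Catalyst-built ODEProblem
using Catalyst, OrdinaryDiffEq
using SymbolicIndexingInterface: getp, setp
rn_demo = @reaction_network begin
    (k1, k_1), A + B <--> C
    k2, C --> D + E
    k3, D --> F
end
u0_demo = [:A => 1.0, :B => 0.5, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0]
prob_demo = ODEProblem(rn_demo, u0_demo, (0.0, 10.0), [:k1 => 1.0, :k_1 => 0.1, :k2 => 2.0, :k3 => 0.8])
prob_demo.ps[:k1]                              # read a single parameter by symbol
getk = getp(prob_demo, [:k1, :k_1, :k2, :k3])  # build a fast getter once
setk = setp(prob_demo, [:k1, :k_1, :k2, :k3])  # build a fast in-place setter once
setk(prob_demo, [2.0, 0.2, 1.5, 0.9])          # overwrite the rate constants
getk(prob_demo)                                # read them back: 2.0, 0.2, 1.5, 0.9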
@TorkelE we should just drop DiffEqParamEstim, as we've discussed previously. It adds little over writing the loss function oneself when one has to handle all the MTK-related parameter updating anyway (I'm not even sure it works with the workflow in the link I gave above).
Yeah, the PEtab tutorial was dropped while PEtab was updating to the latest MTK version (which was recently completed, so I will try to re-enable it). There is a decent amount of documentation on the PEtab page, though. And yes, DiffEqParamEstim should be dropped (i.e. just replaced with a custom cost function).
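For reference, a minimal sketch of what such a custom cost function could look like for this three-experiment setup, fitted with Optimization.jl instead of DiffEqParamEstim. This is not an official workflow and the names (rn_fit, sse, ...) are illustrative: it assumes a recent SciML version where remake accepts maps of parameter/species symbols, it generates synthetic data in place of real measurements, it searches over log10-transformed rate constants so they stay positive, and it uses the gradient-free Nelder-Mead optimizer so no ForwardDiff.Dual values ever touch the problem's parameters.
# sketch: hand-written multi-experiment least-squares cost, no DiffEqParamEstim
using Catalyst, OrdinaryDiffEq, Optimization, OptimizationOptimJL
rn_fit = @reaction_network begin
    (k1, k_1), A + B <--> C
    k2, C --> D + E
    k3, D --> F
end
tspan_fit = (0.0, 10.0)
t_save = 0.0:0.5:10.0
u0_fit = [[:A => 1.0, :B => 0.5, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0],
          [:A => 1.2, :B => 0.5, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0],
          [:A => 1.0, :B => 0.8, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0]]
base_prob = ODEProblem(rn_fit, u0_fit[1], tspan_fit, [:k1 => 1.0, :k_1 => 0.1, :k2 => 2.0, :k3 => 0.8])
# synthetic "measurements" of A, B and F for each experiment (stand-in for real data)
data_fit = map(u0_fit) do u0
    sol = solve(remake(base_prob; u0), Rodas5P(); saveat = t_save)
    hcat(sol[:A], sol[:B], sol[:F])
end
# sum of squared residuals over all experiments; x holds log10 of [k1, k_1, k2, k3]
function sse(x, data_all)
    k = 10 .^ x
    l = 0.0
    for (u0, data) in zip(u0_fit, data_all)
        prob = remake(base_prob; u0, p = [:k1 => k[1], :k_1 => k[2], :k2 => k[3], :k3 => k[4]])
        sol = solve(prob, Rodas5P(); saveat = t_save, verbose = false)
        l += sum(abs2, hcat(sol[:A], sol[:B], sol[:F]) .- data)
    end
    return l
end
optprob = OptimizationProblem(OptimizationFunction(sse), zeros(4), data_fit)
res_fit = solve(optprob, NelderMead())
10 .^ res_fit.u # estimated [k1, k_1, k2, k3] on the linear scale
PEtab.jl essentially automates this bookkeeping (conditions, observables, noise models, transformations), which is why it is the recommended route above.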
Thanks for answering so fast! I'll have a go and see if I can solve it now |
Maybe a bit late, but multi-experiment fitting is something we support in PEtab.jl, and you can find a tutorial for it here: https://sebapersson.github.io/PEtab.jl/stable/petab_simcond/. As a core feature, PEtab.jl also integrates with Catalyst for model definition.
Thanks to your comments, I have now been able to fit the parameters to multiple experiments.
using Catalyst
using PEtab
using Plots
using DataFrames
using Distributions
using DifferentialEquations
using OptimizationOptimJL
# symbolic parameters and species (used below for the observables and for parameter indexing)
t = default_t()
@parameters k1, k_1, k2, k3, sigma
@species A(t), B(t), C(t), D(t), E(t), F(t)
# creating the reaction network
rn = @reaction_network begin
(k1, k_1), A + B <--> C
k2, C --> D + E
k3, D --> F
end
# observables (the measured species), each with measurement noise parameter sigma
obs_A = PEtabObservable(A, sigma)
obs_B = PEtabObservable(B, sigma)
obs_F = PEtabObservable(F, sigma)
observables = Dict("obs_A" => obs_A, "obs_B" => obs_B, "obs_F" => obs_F)
# parameters to estimate
p_k1 = PEtabParameter(:k1)
p_k_1 = PEtabParameter(:k_1)
p_k2 = PEtabParameter(:k2)
p_k3 = PEtabParameter(:k3)
p_sigma = PEtabParameter(:sigma)
pest = [p_k1, p_k_1, p_k2, p_k3, p_sigma]
# specifying simulation conditions
cond1 = Dict(:A => 1.0, :B => 0.5)
cond2 = Dict(:A => 1.2, :B => 0.5)
cond3 = Dict(:A => 1.0, :B => 0.8)
conds = Dict("cond1" => cond1, "cond2" => cond2, "cond3" => cond3)
# Generate synthetic data
u0 = [:A => 1.0, :B => 0.5, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0]
ps = [:k1 => 1.0, :k_1 => 0.1, :k2 => 2.0, :k3 => 0.8]
tspan = (0.0, 10)
prob = ODEProblem(rn, u0, tspan, ps)
N = 3
u0_all = [[:A => 1.0, :B => 0.5, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0],
[:A => 1.2, :B => 0.5, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0],
[:A => 1.0, :B => 0.8, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0]]
# remake the problem for each experiment: same rate constants, different initial concentrations
function prob_func(prob, i, repeat)
    p = [:k1 => prob.ps[k1], :k_1 => prob.ps[k_1], :k2 => prob.ps[k2], :k3 => prob.ps[k3]]
    ODEProblem(rn, u0_all[i], tspan, p)
end
enprob = EnsembleProblem(prob, prob_func = prob_func)
sim = solve(enprob, Rodas5P(), trajectories=N, saveat=0:0.5:10.0)
# stack the synthetic measurements: for each condition, the A, B and F trajectories
data_time = repeat(sim[1].t, 9) # 3 conditions x 3 observables
data_vals = vcat(sim[1][:A], sim[1][:B], sim[1][:F],
                 sim[2][:A], sim[2][:B], sim[2][:F],
                 sim[3][:A], sim[3][:B], sim[3][:F])
cond_string = String[]
obs_string = String[]
datasize = length(sim[1].t)
for iexp in 1:3
    append!(cond_string, repeat(["cond" * string(iexp)], 3 * datasize))
    append!(obs_string, repeat(["obs_A"], datasize))
    append!(obs_string, repeat(["obs_B"], datasize))
    append!(obs_string, repeat(["obs_F"], datasize))
end
# mapping measurements to simulation conditions
measurements = DataFrame(simulation_id=cond_string, obs_id=obs_string, time=data_time, measurement=data_vals)
# create PEtabProblem
model = PEtabModel(rn, observables, measurements, pest; simulation_conditions = conds)
petab_prob = PEtabODEProblem(model)
# Parameter estimation
x0 = get_startguesses(petab_prob, 1) # generates a random start guess
# overwrite with manual values (PEtab parameters are on a log10 scale by default)
x0.log10_k1 = 0.5; x0.log10_k_1 = -2.5; x0.log10_k2 = 0.8; x0.log10_k3 = -0.2
res = calibrate(petab_prob, x0, IPNewton())
fitted_p = res.xmin
plot(res, petab_prob; linewidth = 2.0)
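A small optional check (not part of the original code): since PEtabParameter defaults to a log10 scale, the fitted values in res.xmin can be back-transformed before comparing them with the rates used to generate the synthetic data (k1 = 1.0, k_1 = 0.1, k2 = 2.0, k3 = 0.8).
println(res.xmin)       # fitted parameters on the log10 scale
println(10 .^ res.xmin) # back-transformed to the linear scale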
I am new to Julia, so maybe I am overlooking an important point here.
I am trying to fit parameters to multiple experiments. Fitting to a single set of experimental values works nicely, as does simulating the concentrations over time for different starting concentrations using an ensemble.
However, when I try to fit this simple example to multiple experiments, as explained in the Catalyst documentation (which forwards to DiffEqParamEstim), the optimization does not work.
I had to insert
p = prob.ps[[k1, k_1, k2, k3]]
into the prob_func, because the parameter p needs to be a dict (although what I generated here is not a dict, yet it still works?). This does not work for the fitting, however, because there the prob cannot be indexed like that. I could generate a dict using prob.p[1].value, but weirdly this doesn't stay consistent: during one optimization, prob.p within prob_func is sometimes a Vector{Float64} and sometimes a Vector{ForwardDiff.Dual{ForwardDiff.Tag{DifferentiationInterface.FixTail{DiffEqParamEstim.var"#29#30"{Nothing, typeof(DiffEqParamEstim.STANDARD_PROB_GENERATOR),...
So how can I fit my parameters to the data of multiple experiments? Thank you for helping me!