
Handle inhomogeneous parameters using a Tuple of Vectors #2231

Merged: 19 commits from myb/ps into master (Sep 14, 2023)

Conversation

@YingboMa (Member) commented Aug 15, 2023

@ai-maintainer (bot) left a comment

AI-Maintainer Review for PR - Handle inhomogeneous parameters using a Tuple of Vectors

Title and Description ⚠️

Title is clear but description is missing
The title of the pull request is clear and provides a good summary of the changes. However, the description is missing. It would be beneficial if the contributor could provide a description explaining the motivation behind the changes, the approach taken, and any relevant details.

Scope of Changes 👍

Changes are narrowly focused
The changes in the pull request appear to be narrowly focused on handling inhomogeneous parameters using a Tuple of Vectors. The modifications are primarily in the `parameters.jl` file, with some related changes in the `abstractodesystem.jl` and `utils.jl` files. This suggests that the changes are addressing a specific issue or feature related to parameter handling.

Testing ⚠️

Testing details are missing
The description does not provide any details on how the changes were tested. It would be helpful if the contributor could provide information on the testing approach, such as specific test cases used or any additional testing frameworks employed.

Code Documentation 👍

All existing functions, classes, or methods have docstrings
Based on the provided diff, there are no newly added functions, classes, or methods. Therefore, there are no functions, classes, or methods that require docstrings.

Suggested Changes

Please provide a detailed description of the changes made in the pull request. This should include the motivation behind the changes, the approach taken, and any relevant details.

Additionally, please provide details on how the changes were tested. This could include specific test cases used, any additional testing frameworks employed, and the results of these tests.

Thank you for your contribution!

Reviewed with AI Maintainer

@YingboMa (Member, Author)

We still need to update the rest of the generate_* functions, like https://github.com/SciML/ModelingToolkit.jl/pull/2231/files#diff-7195a73d9d71f7a993fe71086614d700629ebc22306be7b602f60fbc054bcdb7R340-R348. @bradcarman do you mind finishing this PR up?

@bradcarman (Contributor)

OK sure thing, I'll finish it up.

@bradcarman (Contributor)

I've updated to support a Tuple parameter type by default. I removed the split_parameters keyword; this can be determined automatically. Here are the rules I propose for ODEProblem...

If the parameter defaults and map contain...

  • a mix of single values and arrays, then a Tuple of the mixed types is always returned for p [see test/split_parameters.jl:53]
  • a mix of single values and no arrays, then
    • if tofloat=false, a Tuple of the mixed types is returned for p [see test/odesystem.jl:738]
    • if tofloat=true, the types are promoted to float and a Vector is returned for p [see test/odesystem.jl:741]
  • single values of all the same type, then
    • if tofloat=false, a Vector of the preserved type is returned [see test/odesystem.jl:752]
    • if tofloat=true, a Vector of promoted floats is returned [see test/odesystem.jl:755]

tofloat is defaulted to true as it was before. The promote_to_concrete keyword use_union is now also defaulted to true and decoupled from tofloat. When use_union=true, which is what enables the rules described above, promote_to_concrete returns a collection with a Union element type, which is then converted to a Tuple of type-grouped values when creating an ODEProblem.
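
As a rough, hypothetical sketch of the first rule (mixed scalar and array parameter values yielding a Tuple for p), using a vector-typed parameter like the ones discussed later in this thread; the names are illustrative and the exact grouping of the returned Tuple is an implementation detail:

```julia
using ModelingToolkit

@parameters t k=2.0 data::Vector{Float64}=[1.0, 2.0, 3.0]  # scalar + vector-valued parameter
@variables x(t)=0.0
D = Differential(t)

@named sys = ODESystem([D(x) ~ -k * x], t, [x], [k, data])
prob = ODEProblem(sys, [], (0.0, 1.0))

# With a mix of scalar and array parameter values, p is expected to come back
# as a Tuple grouping the homogeneous pieces (roughly ([2.0], [[1.0, 2.0, 3.0]]))
# rather than a single promoted Vector.
prob.p isa Tuple
```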

I'm assuming the other systems (DiscreteSystem, JumpSystem, NonlinearSystem, etc.) should also follow these rules and defaults?

@bradcarman (Contributor)

Still work to do; we need to support Jacobians, etc.

@bradcarman (Contributor)

@YingboMa I added an observables test at the end of test/split_parameters.jl. I'm struggling to figure out how observables need to be modified to work with the new Tuple parameter type. Can you have a look?

@baggepinnen (Contributor)

Will any aspect of this be user facing? If so, some docs would be welcome.

I understand that these changes are to some extent driven by the desire to pass input signals.

  • Does this new functionality accommodate sampling of input signals other than equidistant sampling?
  • I would ideally prefer if we didn't conflate input signals with parameters, it feels like we make use of parameters for this because that's where this data appeared to "fit" into the code, but a parameter and an input signal are two quite different things.
  • Do we have any metadata (type info or otherwise) available to indicate that certain parameters are actually signals, other than the fact that they are represented as vectors? I'd like this information to be available.
  • How does the user know in which order the parameters are to be passed to the generated functions? In particular, can we make sure that downstream library code is not broken because the order of the parameter arguments changes between versions of MTK?

@ChrisRackauckas (Member)

> Will any aspect of this be user facing? If so, some docs would be welcome.

It shouldn't necessarily be user-facing, no. The exact type that is used by MTK to hold onto parameters shouldn't matter to a user, since they shouldn't be relying on indexing anyway. remake, sol[p], prob[p], etc., which are the documented interface, are all agnostic to how we actually store the values.
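
For example, a minimal sketch of that documented, storage-agnostic interface (the small system and its names are made up for illustration, and exact call forms can vary between MTK versions):

```julia
using ModelingToolkit, OrdinaryDiffEq

@parameters t k=2.0
@variables x(t)=1.0
D = Differential(t)
@named sys = ODESystem([D(x) ~ -k * x], t)

prob = ODEProblem(sys, [], (0.0, 1.0))
sol = solve(prob, Tsit5())

sol[x]                                # access states symbolically, independent of storage layout
prob2 = remake(prob; p = [k => 3.0])  # update a parameter without touching prob.p directly
```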

> Does this new functionality accommodate sampling of input signals other than equidistant sampling?

Yes, any interpolation function works, even a custom user one. This change just makes it so you can have any struct as parameters, and you can do anything with the values that you want.

> I would ideally prefer if we didn't conflate input signals with parameters, it feels like we make use of parameters for this because that's where this data appeared to "fit" into the code, but a parameter and an input signal are two quite different things.

There is no conflation, just unification. In MTK, every simulatable model must be closed, as in all input signals are defined. This functionality gives the user the ability to define components with "strange" parameters, which makes it easier to close models and have them change things around without recompilation.

> Do we have any metadata (type info or otherwise) available to indicate that certain parameters are actually signals, other than the fact that they are represented as vectors? I'd like this information to be available.

The fact that they are vectors does not mean they are input signals. Another common use for this would be an array of parameters like D .* A * x, where D is a matrix of local diffusion coefficients (as in D(x) * u_xx). Another use case is neural network components: NN(u, p) has an array p of parameters (well, that can be a whole NamedTuple or ComponentArray, or Array, depending on Lux vs Flux etc.). It's a very generally useful feature. Don't pigeonhole this into only being about inputs: that's only one small part of the story and of the motivation behind this.

> How does the user know in which order the parameters are to be passed to the generated functions? In particular, can we make sure that downstream library code is not broken because the order of the parameter arguments changes between versions of MTK?

Yes, the documented interfaces of remake and prob[p] = ... are stable and do not require the user to directly interact with how the parameters are represented. Note that there will be some changes needed to the setindex! stuff (@YingboMa are you planning to handle that?). But this is a general point with MTK: any time someone directly interacts with the representation of the data rather than the symbolic interface, they are doing so at their own risk. The only things that can generally be taken as stable are the symbolic interfaces, since otherwise implementation details leak into interface details. You can of course use the representation directly at your own risk as a way to lower overhead as you please, but it's not directly documented nor meant to be stable, and we'd prefer that folks instead contribute back to lower-overhead symbolic interfaces rather than hacking around the internals per-package.

And note that even before this you weren't supposed to rely on parameter ordering, since that could also change.

@ChrisRackauckas (Member)

> a mix of single values and no arrays, then

This is getting overly complicated, and I can see a few ways to easily break it. Let's keep it simple; it's gotten complex because this is not solving the problem at its root cause. So let's get to one rule: trust that the user didn't lie to you.

How do we do that? We go by the types of the variables. Recall that @variables x is shorthand for @variables x::Real. We can refer to a collection @variables x=1f0 y=1e0 as [x, y] = [1f0, 1e0], which should promote. If people explicitly specify a type, we should go by that type: with @variables z::Int=1 it should be tagged as an Int, and we should then build ([1f0, 1e0], [1]). @variables v::Complex should similarly work, etc.

Now the one thing that we have to figure out is what to do about @variables x=1. There are three possibilities:

  1. Promote this to @variables x=1.0
  2. Interpret this as @variables x::Int=1
  3. Error, forcing the user to either write @variables x=1.0 or @variables x::Int=1.

We have implicitly done (1), since we did not necessarily have a way to handle integer values well before, and so the implementation essentially "accidentally" handled it through promotion. So (1) is there because it was easy, not because it was a well-thought-out choice that should stay. I'd argue that it's probably a bad choice for this context and that either (2) or (3) would be better.

I would actually prefer (3) to make it explicit: using integers as a parameter value in a model is really a choice and has consequences. For example, if you have an integer-valued parameter, then you cannot differentiate with respect to it. These live as structural parameters, which are somewhat separate from other parameters: for i in 1:N, x[p], etc. If one actually needs an integer value within a model, then it's something rather specific, and I think the user should really tell us "yes, I want this to be an integer for a reason, and I know the consequences".

Therefore I think @variables x=1 should throw an error: "Error: variable definition x::Real set a default value with an incompatible type: 1 isa Int. Either ensure that the default value satisfies the type constraint (@variables x=1.0), or set the type of the variable to match the default value (@variables x::Int=1)".

Therefore, we can now trust the user's input, and no rule is required. Use the variable as defined. You can look at the types of the variables and know "I have 5 floats, 2 integers, and 2 arrays of floats" and build the tuple from that. It's safe, free of complications, and predictable.
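
A hypothetical sketch of what "trust the declared types" could look like (the macro syntax is current; the grouped storage shown in the trailing comment is the proposal, not existing behavior):

```julia
using ModelingToolkit

@parameters a=1f0 b=1e0        # both ::Real by default -> Float32 and Float64 promote together
@parameters n::Int=3           # explicitly typed Int -> kept in its own group
@parameters c[1:2]=[0.5, 1.5]  # array-valued -> its own group as well

# Under the proposed rule, parameter storage would be grouped by declared type,
# roughly ([1.0, 1.0], [3], [[0.5, 1.5]]), and @parameters x=1 (an Int default
# on a ::Real parameter) would throw an error instead of silently promoting.
```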

@baggepinnen (Contributor)

> It shouldn't necessarily be user-facing, no.

I take "users" to include downstream library writers, which might need to know how to call the built functions since they are called in different contexts.

> This change just makes it so you can have any struct as parameters, and you can do anything with the values that you want.

Nice, I need this!

> In MTK, every simulatable model must be closed, as in all input signals are defined.

This is correct, but in my world there are lots of things to do with a model that is not simulation. If the type information is kept, such that an input signal is typed as SampledData etc., this problem is solved and I can easily identify the nature of each parameter.

> And note that even before this you weren't supposed to rely on parameter ordering, since that could also change.

I understand this requirement and it's always possible to work with, but users are likely to write code that interfaces with external (to MTK) functionality where the order is important. Currently, the onus is on the user to write transformations between MTK and the rest of the world to make sure all variables have the correct order. Examples of this include any use of linear algebra (e.g., covariance matrices for state estimators), numerical scaling for optimizers, external data, C code, etc. Relying on indexing is never a good solution, but the fact remains that sooner or later you have to pick an order to work with, and right now it's easy to make mistakes somewhere in this pipeline. The currently suggested varmap_to_vars is workable but not always ergonomic. Having something similar for the order of the different parameter arguments will be required.

@ChrisRackauckas (Member)

I think that covers most of the discussion, but let me add one last piece on the end here. What do we do about "inputs" and "unifying inputs"? Let's really take stock of where this will let us land. Let's say we have a plant model sys which has a few undefined input variables. Such a model is fine for input analysis and such, Jacobians w.r.t. inputs for controls functionality, etc. But now let's say we want to fit parameters with respect to sets of inputs, or run the model with different driving inputs, etc.

The unified way to do this is to keep things simple and have only two forms: completed models and uncompleted models. The sys is an uncompleted model, and so certain analysis can be done on the unmatched inputs. To complete the model, you just connect a different component to it. So for example, with @bradcarman's use case, he wants to connect a sdata = SampledData(ts,us) component to sys and now combinedsys is a completed model that has no inputs. This means that simulation works on it, differentiation works on it, etc.

Now let's talk about the extended functionality. Let's say we want to perform parameter estimation using data from 5 different inputs. combinedsys is perfectly fine with that. Since the parameters of SampledData are the interpolation coefficients, you'd just need to prob[sampdata.u] = my_new_values and then re-run. Or use remake(combinedsys, [sampdata.u => my_new_values]). Not only will this work, it is also the performance-preferred form, since it allows reusing the same model across the new solves and thus avoids extra compilation. Thus parameter estimation with JuliaSimModelOptimizer can neatly be defined via 5 Experiments where each just sets a different sampdata.u.
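
A rough sketch of that workflow, assuming a completed model combinedsys containing a SampledData-style component sampdata as described above (the names and exact call forms are illustrative, not the documented API):

```julia
using ModelingToolkit, OrdinaryDiffEq

prob = ODEProblem(combinedsys, [], (0.0, 10.0))
sol1 = solve(prob, Tsit5())

# Re-run the same compiled model against a different input data set by only
# swapping the interpolation coefficients that are stored as parameters:
prob2 = remake(prob; p = [sampdata.u => my_new_values])
sol2 = solve(prob2, Tsit5())
```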

Now let's say we want to go ahead and drive the system with a bunch of different defined control functions. You can just make a component finput = FunctionInput(f) which takes in an f(p, t) = u and then wraps that in a FunctionWrapper (or a FunctionWrappersWrapper for autodiff support), and now connecting FunctionInput(f) with sys makes a combinedsys that's simulatable. You can easily change around the input signal by just changing the "parameters" in this form, here prob[finput.f] = newf (we'd need a tie-in to allow this to auto-rewrap to work ergonomically, but that can come in the near future). So then again, driving with different inputs is "just" changing parameters around in a complete model, avoids recompilation, and only requires inhomogeneous parameter support.

Finally, to have a single unified and simple input format that everyone could use, functionality that analyzes models with respect to inputs would either just leave the system open and accept sys and thus internally complete the system, or accept a completed model combinedsys with a definition of what its input components are [combinedsys.finput, combinedsys.sdata]. parameters(combinedsys.finput) will then have all of the information, such as the symbolic type, to know how to sample that kind of parameter or accept definitions from the user for how to sample it. Thus sys is the version with undefined input domains, and (combinedsys, [combinedsys.finput, combinedsys.sdata]) is the version with defined input domains. It would be good to then allow for defining samplable domains over the spaces defined there, etc., which then directly follows the Domains work happening for PDEs.

We have different tooling today wanting those two forms, and this is thus a direct generalization that is flexible and only requires a minimal change to the MTK codegen to work correctly (i.e., this PR).

@ChrisRackauckas (Member)

> This is correct, but in my world there are lots of things to do with a model that is not simulation. If the type information is kept, such that an input signal is typed as SampledData etc., this problem is solved and I can easily identify the nature of each parameter.

See if my later post clarifies that. Indeed I think the thing that's missing here is type information on subsystems, which @YingboMa we've talked about a bit before and that shouldn't be hard (?).

> Currently, the onus is on the user to write transformations between MTK and the rest of the world to make sure all variables have the correct order.

With prob[p], sol[p], remake(sys, p = ...), that shouldn't be the case in most situations that I am aware of. For example, to do linear algebra you can get the parameter vector ordered correctly via prob[parameters(sys)], do modifications, and then put it back in the right order. I think with this change though we will need some way to "get the canonical ordering of the floating point parameters in a flattened form", which is an operation I don't think we currently have.
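
For instance, a sketch of that round trip through the symbolic interface, assuming a system sys and a problem prob built from it (hypothetical; call forms may differ between versions):

```julia
ps = parameters(sys)     # canonical symbolic ordering of the parameters
pvals = prob[ps]         # current values, in that same order

# ... hand pvals to external code (linear algebra, scaling, C interop, ...) ...
pvals_new = 2 .* pvals

# Put the modified values back without assuming anything about how prob stores p:
prob2 = remake(prob; p = ps .=> pvals_new)
```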

@baggepinnen (Contributor)

> See if my later post clarifies that.

It does

> type information on subsystems

This ties in with the concept of "replacing" subsystems, which requires the replacement to implement the same interface, provided that the connection can be made between a system type and the interfaces it implements.

> that shouldn't be the case in most situations that I am aware of. For example, to do linear algebra you can get the parameter vector ordered correctly via prob[parameters(sys)]

prob, sol, and remake are all "our" functions and types, whereas I as a user might interface with some C code I have written, and this code makes assumptions about state realizations and order. I also read data from disk and have to provide the mapping from data order to MTK order. I scale the output of MTK-generated functions before I pass them to Ipopt for numerical stability, and my scalings need to be in the correct order, and so on. As long as users live in our walled garden, this is not much of an issue, but unless we have 100% of all batteries included, the user will have to stray outside of this walled garden and deal with these details.

> I think with this change though we will need some way to "get the canonical ordering of the floating point parameters in a flattened form", which is an operation I don't think we currently have.

exactly this is needed

@YingboMa (Member, Author)

> I added an observables test at the end of test/split_parameters.jl. I'm struggling to figure out how observables need to be modified to work with the new Tuple parameter type. Can you have a look?

The unbound_inputs function in split_parameters.jl is undefined. Do you have an MWE regarding the observed functions issue?

@bradcarman (Contributor)

So, just to clarify @ChrisRackauckas's idea, if we have the following system...

vars = @variables x::Int=1
pars = @parameters A::Vector{Real}=[1.0, 2.0, 3.0]
@named sys = ODESystem(Equation[], t, vars, pars)
prob = ODEProblem(sys)

If one tries to do remake(prob; u0=[sys.x => 1.0], p=[sys.A => 1.0]), this would throw an error that says something like

Error: attempted to set sys.x to a Float, can only accept Int
Error: attempted to set sys.A to a Float, can only accept Vector{Real}

Is that right?

@bradcarman (Contributor) commented Sep 7, 2023

> The unbound_inputs function in split_parameters.jl is undefined. Do you have an MWE regarding the observed functions issue?

@YingboMa That code wasn't supposed to be there; all fixed. The last line should now give an error when getting an observable.

@YingboMa merged commit 736520a into master on Sep 14, 2023
26 of 34 checks passed
@YingboMa deleted the myb/ps branch on September 14, 2023 at 20:46