Merge pull request #30 from SciML/decrease_condition_renames
Renaming decrease conditions and fixing documentation
nicholaskl97 authored Sep 11, 2024
2 parents 24d9409 + a12ed2c commit 6e24f18
Showing 20 changed files with 143 additions and 131 deletions.
5 changes: 3 additions & 2 deletions Project.toml
@@ -24,6 +24,8 @@ SciMLBase = "2"
julia = "1.10"

[extras]
+Boltz = "4544d5e4-abc5-4dea-817f-29e4c205d9c8"
+CSDP = "0a46da34-8e4b-519e-b418-48813639ff34"
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
NLopt = "76087f3c-5699-56af-9a33-bf431cd00edd"
@@ -34,7 +36,6 @@ OptimizationOptimisers = "42dfb2eb-d2b4-4451-abcd-913932933ac1"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
SafeTestsets = "1bc83da4-3b8d-516f-aca4-4fe02f6d838f"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
-CSDP = "0a46da34-8e4b-519e-b418-48813639ff34"

[targets]
test = ["SafeTestsets", "Test", "Lux", "Optimization", "OptimizationOptimJL", "OptimizationOptimisers", "NLopt", "Random", "NeuralPDE", "DifferentialEquations", "CSDP"]
test = ["SafeTestsets", "Test", "Lux", "Optimization", "OptimizationOptimJL", "OptimizationOptimisers", "NLopt", "Random", "NeuralPDE", "DifferentialEquations", "CSDP", "Boltz"]
1 change: 1 addition & 0 deletions docs/Project.toml
@@ -1,4 +1,5 @@
[deps]
+Boltz = "4544d5e4-abc5-4dea-817f-29e4c205d9c8"
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
16 changes: 6 additions & 10 deletions docs/src/demos/damped_SHO.md
@@ -61,8 +61,8 @@ structure = NonnegativeNeuralLyapunov(
minimization_condition = DontCheckNonnegativity(check_fixed_point = true)

# Define Lyapunov decrease condition
-# Damped SHO has exponential decrease at a rate of k = ζ * ω_0, so we train to certify that
-decrease_condition = ExponentialDecrease(prod(p))
+# Damped SHO has exponential stability at a rate of k = ζ * ω_0, so we train to certify that
+decrease_condition = ExponentialStability(prod(p))

# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
@@ -88,8 +88,6 @@ prob = discretize(pde_system, discretization)
########################## Solve OptimizationProblem ##########################

res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)
-prob = Optimization.remake(prob, u0 = res.u)
-res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 500)

###################### Get numerical numerical functions ######################
net = discretization.phi
@@ -165,7 +163,7 @@ which structurally enforces nonnegativity, but doesn't guarantee ``V([0, 0]) = 0
We therefore don't need a term in the loss function enforcing ``V(x) > 0 \, \forall x \ne 0``, but we do need something enforcing ``V([0, 0]) = 0``.
So, we use [`DontCheckNonnegativity(check_fixed_point = true)`](@ref).

-To train for exponential decrease we use [`ExponentialDecrease`](@ref), but we must specify the rate of exponential decrease, which we know in this case to be ``\zeta \omega_0``.
+To train for exponential stability we use [`ExponentialStability`](@ref), but we must specify the rate of exponential decrease, which we know in this case to be ``\zeta \omega_0``.

```@example SHO
using NeuralLyapunov
@@ -178,8 +176,8 @@ structure = NonnegativeNeuralLyapunov(
minimization_condition = DontCheckNonnegativity(check_fixed_point = true)
# Define Lyapunov decrease condition
-# Damped SHO has exponential decrease at a rate of k = ζ * ω_0, so we train to certify that
-decrease_condition = ExponentialDecrease(prod(p))
+# Damped SHO has exponential stability at a rate of k = ζ * ω_0, so we train to certify that
+decrease_condition = ExponentialStability(prod(p))
# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
@@ -206,8 +204,6 @@ prob = discretize(pde_system, discretization)
using Optimization, OptimizationOptimisers, OptimizationOptimJL
res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)
-prob = Optimization.remake(prob, u0 = res.u)
-res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 500)
net = discretization.phi
θ = res.u.depvar
@@ -259,7 +255,7 @@ println(
)
```

-At least at these validation samples, the conditions that ``\dot{V}`` be negative semi-definite and ``V`` be minimized at the origin are nearly sastisfied.
+At least at these validation samples, the conditions that ``\dot{V}`` be negative semi-definite and ``V`` be minimized at the origin are nearly satisfied.

```@example SHO
using Plots
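For reference on the rename above: `ExponentialStability(k)` takes the decrease rate as its argument and trains toward ``\dot{V}(x) \le -k V(x)``. A minimal sketch of the demo's post-rename usage (the parameter values for ζ and ω₀ below are illustrative assumptions, not quoted from the demo):

```julia
# Minimal sketch, not the demo itself: building the renamed decrease condition.
using NeuralLyapunov

ζ, ω_0 = 0.5, 1.0   # illustrative values; the demo defines its own
p = [ζ, ω_0]

# For the damped SHO, k = ζ * ω_0 is the known exponential decrease rate,
# so training certifies V̇(x) ≤ -k V(x).
decrease_condition = ExponentialStability(prod(p))
```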
14 changes: 7 additions & 7 deletions docs/src/demos/policy_search.md
@@ -13,7 +13,7 @@ We'll jointly train a neural controller ``\tau = u \left( \theta, \frac{d\theta}
## Copy-Pastable Code

```julia
-using NeuralPDE, Lux, ModelingToolkit, NeuralLyapunov
+using NeuralPDE, Lux, Boltz, ModelingToolkit, NeuralLyapunov
import Optimization, OptimizationOptimisers, OptimizationOptimJL
using Random

Expand Down Expand Up @@ -55,7 +55,7 @@ dim_phi = 3
dim_u = 1
dim_output = dim_phi + dim_u
chain = [Lux.Chain(
-PeriodicEmbedding([1], [2π]),
+Boltz.Layers.PeriodicEmbedding([1], [2π]),
Dense(3, dim_hidden, tanh),
Dense(dim_hidden, dim_hidden, tanh),
Dense(dim_hidden, 1)
@@ -81,7 +81,7 @@ structure = add_policy_search(
minimization_condition = DontCheckNonnegativity(check_fixed_point = false)

# Define Lyapunov decrease condition
-decrease_condition = AsymptoticDecrease(strict = true)
+decrease_condition = AsymptoticStability()

# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
@@ -179,7 +179,7 @@ Other than that, setting up the neural network using Lux and NeuralPDE training
For more on that aspect, see the [NeuralPDE documentation](https://docs.sciml.ai/NeuralPDE/stable/).

```@example policy_search
-using Lux
+using Lux, Boltz
# Define neural network discretization
# We use an input layer that is periodic with period 2π with respect to θ
@@ -189,7 +189,7 @@ dim_phi = 3
dim_u = 1
dim_output = dim_phi + dim_u
chain = [Lux.Chain(
-PeriodicEmbedding([1], [2π]),
+Boltz.Layers.PeriodicEmbedding([1], [2π]),
Dense(3, dim_hidden, tanh),
Dense(dim_hidden, dim_hidden, tanh),
Dense(dim_hidden, 1)
Expand Down Expand Up @@ -243,7 +243,7 @@ Since our Lyapunov candidate structurally enforces positive definiteness, we use
minimization_condition = DontCheckNonnegativity(check_fixed_point = false)
# Define Lyapunov decrease condition
-decrease_condition = AsymptoticDecrease(strict = true)
+decrease_condition = AsymptoticStability()
# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
Expand Down Expand Up @@ -384,7 +384,7 @@ Now, let's simulate the closed-loop dynamics to verify that the controller can g
First, we'll start at the downward equilibrium:

```@example policy_search
-state_order = map(st -> SymbolicUtils.iscall(st) ? operation(st) : st, state_order)
+state_order = map(st -> SymbolicUtils.isterm(st) ? operation(st) : st, state_order)
state_syms = Symbol.(state_order)
closed_loop_dynamics = ODEFunction(
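For reference on the layer that moved to Boltz above: a minimal sketch of `Boltz.Layers.PeriodicEmbedding` in isolation. The setup/call pattern follows standard Lux usage; the exact ordering of the embedded features is whatever Boltz documents, so only the output dimension is checked here.

```julia
# Minimal sketch: the periodic input layer used in the demo, run on its own.
# PeriodicEmbedding([1], [2π]) replaces the first input θ with a (cos, sin)
# pair of period 2π, so θ and θ + 2π produce identical features. The state
# [θ, ω] thus becomes 3-dimensional, matching the demo's Dense(3, ...) layer.
using Lux, Boltz, Random

embedding = Boltz.Layers.PeriodicEmbedding([1], [2π])
ps, st = Lux.setup(Random.default_rng(), embedding)

x = [0.1, 2.0]              # [θ, ω]
y, _ = embedding(x, ps, st)
@assert length(y) == 3      # θ → (cos θ, sin θ); ω passed through unchanged
```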
6 changes: 3 additions & 3 deletions docs/src/demos/roa_estimation.md
@@ -47,7 +47,7 @@ structure = PositiveSemiDefiniteStructure(dim_output)
minimization_condition = DontCheckNonnegativity()

# Define Lyapunov decrease condition
-decrease_condition = make_RoA_aware(AsymptoticDecrease(strict = true))
+decrease_condition = make_RoA_aware(AsymptoticStability())

# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
@@ -151,7 +151,7 @@ V(x) = \left( 1 + \lVert \phi(x) \rVert^2 \right) \log \left( 1 + \lVert x \rVer
which structurally enforces positive definiteness.
We therefore use [`DontCheckNonnegativity()`](@ref).

-We only require asymptotic decrease in this example, but we use [`make_RoA_aware`](@ref) to only penalize positive values of ``\dot{V}(x)`` when ``V(x) \le 1``.
+We only require asymptotic stability in this example, but we use [`make_RoA_aware`](@ref) to only penalize positive values of ``\dot{V}(x)`` when ``V(x) \le 1``.

```@example RoA
using NeuralLyapunov
@@ -161,7 +161,7 @@ structure = PositiveSemiDefiniteStructure(dim_output)
minimization_condition = DontCheckNonnegativity()
# Define Lyapunov decrease condition
-decrease_condition = make_RoA_aware(AsymptoticDecrease(strict = true))
+decrease_condition = make_RoA_aware(AsymptoticStability())
# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
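For reference on the wrapper above: `make_RoA_aware` restricts the decrease condition to a sublevel set of ``V``, which then serves as the region-of-attraction estimate. A minimal sketch — the `ρ` keyword and its default of 1 are assumptions based on the prose above; see the docstring for the actual options:

```julia
# Minimal sketch: restricting the decrease condition to a level set of V.
using NeuralLyapunov

# As used above: V̇(x) is only penalized where V(x) ≤ 1, so the level set
# { x : V(x) ≤ 1 } becomes the estimated region of attraction.
decrease_condition = make_RoA_aware(AsymptoticStability())

# Assumed keyword (per the manual's description): shrink the certified set.
tighter_condition = make_RoA_aware(AsymptoticStability(); ρ = 0.5)
```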
5 changes: 3 additions & 2 deletions docs/src/man/decrease.md
@@ -21,8 +21,9 @@ LyapunovDecreaseCondition
## Pre-defined decrease conditions

```@docs
-AsymptoticDecrease
-ExponentialDecrease
+AsymptoticStability
+ExponentialStability
+StabilityISL
DontCheckDecrease
```

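The renamed constructors line up with standard stability notions. A minimal sketch of all four side by side — the inequalities in the comments paraphrase the manual, and constructor keyword options live in the docstrings listed above:

```julia
# Minimal sketch: the pre-defined decrease conditions after the rename.
using NeuralLyapunov

isl         = StabilityISL()            # V̇(x) ≤ 0 — stability in the sense of Lyapunov
asymptotic  = AsymptoticStability()     # V̇(x) < 0 for x ≠ x* — asymptotic stability
exponential = ExponentialStability(0.5) # V̇(x) ≤ -k V(x), here with k = 0.5
unchecked   = DontCheckDecrease()       # decrease enforced structurally elsewhere
```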
2 changes: 1 addition & 1 deletion docs/src/man/policy_search.md
@@ -1,4 +1,4 @@
-# Policy Search and Network-Sependent Dynamics
+# Policy Search and Network-Dependent Dynamics

At times, we wish to model a component of the dynamics with a neural network.
A common example is the policy search case, when the closed-loop dynamics include a neural network controller.
3 changes: 2 additions & 1 deletion src/NeuralLyapunov.jl
@@ -25,7 +25,8 @@ export LyapunovMinimizationCondition, StrictlyPositiveDefinite, PositiveSemiDefi
DontCheckNonnegativity

# Decrease conditions
-export LyapunovDecreaseCondition, AsymptoticDecrease, ExponentialDecrease, DontCheckDecrease
+export LyapunovDecreaseCondition, StabilityISL, AsymptoticStability, ExponentialStability,
+DontCheckDecrease

# Setting up the PDESystem for NeuralPDE
export NeuralLyapunovSpecification, NeuralLyapunovPDESystem
24 changes: 12 additions & 12 deletions src/NeuralLyapunovPDESystem.jl
@@ -121,7 +121,7 @@ function NeuralLyapunovPDESystem(
p_syms = if isnothing(dynamics.sys.parameters)
[]
else
-dynamics.sys.parameters
+keys(dynamics.sys.parameters)
end
(s_syms, p_syms)
else
@@ -202,7 +202,7 @@ function _NeuralLyapunovPDESystem(
)::PDESystem
########################## Unpack specifications ##########################
structure = spec.structure
-minimzation_condition = spec.minimzation_condition
+minimization_condition = spec.minimization_condition
decrease_condition = spec.decrease_condition
f_call = structure.f_call
state_dim = length(domains)
@@ -215,11 +215,11 @@
# φ(x) is the symbolic form of neural network output
φ(x) = Num.([φi(x...) for φi in net])

-# V_sym(x) is the symobolic form of the Lyapunov function
-V_sym(x) = structure.V(φ, x, fixed_point)
+# V(x) is the symobolic form of the Lyapunov function
+V(x) = structure.V(φ, x, fixed_point)

-# V̇_sym(x) is the symbolic time derivative of the Lyapunov function
-function V̇_sym(x)
+# V̇(x) is the symbolic time derivative of the Lyapunov function
+function V̇(x)
structure.V̇(
φ,
y -> Symbolics.jacobian(φ(y), y),
@@ -234,20 +234,20 @@
################ Define equations and boundary conditions #################
eqs = []

-if check_nonnegativity(minimzation_condition)
-cond = get_minimization_condition(minimzation_condition)
-push!(eqs, cond(V_sym, state, fixed_point) ~ 0.0)
+if check_nonnegativity(minimization_condition)
+cond = get_minimization_condition(minimization_condition)
+push!(eqs, cond(V, state, fixed_point) ~ 0.0)
end

if check_decrease(decrease_condition)
cond = get_decrease_condition(decrease_condition)
-push!(eqs, cond(V_sym, V̇_sym, state, fixed_point) ~ 0.0)
+push!(eqs, cond(V, V̇, state, fixed_point) ~ 0.0)
end

bcs = []

-if check_minimal_fixed_point(minimzation_condition)
-push!(bcs, V_sym(fixed_point) ~ 0.0)
+if check_minimal_fixed_point(minimization_condition)
+push!(bcs, V(fixed_point) ~ 0.0)
end

if policy_search
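The `keys` fix above suggests `dynamics.sys.parameters` is a mapping from parameter names to values (an assumption for illustration); the PDESystem needs only the names. A minimal sketch of the distinction:

```julia
# Minimal sketch, illustrative only: why the symbols come from keys().
parameters = Dict(:ζ => 0.5, :ω_0 => 1.0)   # hypothetical name => value mapping

p_syms = collect(keys(parameters))   # the parameter names, without their values
```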
14 changes: 7 additions & 7 deletions src/conditions_specification.jl
@@ -11,8 +11,8 @@ structurally enforcing some Lyapunov conditions.
`state`.
- `V̇(phi::Function, J_phi::Function, dynamics::Function, state, params, t, fixed_point)`:
outputs the time derivative of the Lyapunov function at `state`.
-- `f_call(dynamics::Function, phi::Function, state, p, t)`: outputs the derivative of the
-state; this is useful for making closed-loop dynamics which depend on the neural
+- `f_call(dynamics::Function, phi::Function, state, params, t)`: outputs the derivative of
+the state; this is useful for making closed-loop dynamics which depend on the neural
network, such as in the policy search case.
- `network_dim`: the dimension of the output of the neural network.
@@ -60,7 +60,7 @@ Specifies a neural Lyapunov problem.
"""
struct NeuralLyapunovSpecification
structure::NeuralLyapunovStructure
-minimzation_condition::AbstractLyapunovMinimizationCondition
+minimization_condition::AbstractLyapunovMinimizationCondition
decrease_condition::AbstractLyapunovDecreaseCondition
end

@@ -97,8 +97,8 @@ Note that the first input, ``V``, is a function, so the minimization condition c
the value of the candidate Lyapunov function at multiple points.
"""
function get_minimization_condition(cond::AbstractLyapunovMinimizationCondition)
error("get_condition not implemented for AbstractLyapunovMinimizationCondition of " *
"type $(typeof(cond))")
error("get_minimization_condition not implemented for " *
"AbstractLyapunovMinimizationCondition of type $(typeof(cond))")
end

"""
@@ -122,6 +122,6 @@ Note that the first two inputs, ``V`` and ``V̇``, are functions, so the decreas
can depend on the value of these functions at multiple points.
"""
function get_decrease_condition(cond::AbstractLyapunovDecreaseCondition)
error("get_condition not implemented for AbstractLyapunovDecreaseCondition of type " *
string((typeof(cond))))
error("get_decrease_condition not implemented for AbstractLyapunovDecreaseCondition " *
"of type $(typeof(cond))")
end
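The renamed fallback errors above guard the conditions interface: a concrete condition subtypes the abstract type and implements the check and getter. A minimal, hypothetical sketch — the method names `check_decrease` and `get_decrease_condition`, and the `cond(V, V̇, state, fixed_point) ~ 0.0` training form, appear in this commit, but the example type and its decrease law are not part of the package:

```julia
# Minimal sketch of a custom decrease condition; illustrative, not package code.
using NeuralLyapunov

struct MyExponentialDecrease <: NeuralLyapunov.AbstractLyapunovDecreaseCondition
    k::Float64
end

# Opt in to generating a decrease equation in the PDESystem
NeuralLyapunov.check_decrease(::MyExponentialDecrease) = true

# The returned function is driven to zero in training; max(0, ⋅) vanishes
# exactly when V̇(x) ≤ -k V(x) holds.
function NeuralLyapunov.get_decrease_condition(cond::MyExponentialDecrease)
    return (V, V̇, x, fixed_point) -> max(0.0, V̇(x) + cond.k * V(x))
end
```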