Merge pull request #28 from SciML/documentation
Wrote documentation website
ChrisRackauckas authored Jun 13, 2024
2 parents f93a351 + fc41720 commit ab3f142
Showing 31 changed files with 1,753 additions and 387 deletions.
18 changes: 18 additions & 0 deletions .github/workflows/Documentation.yml
@@ -0,0 +1,18 @@
name: "Documentation"

on:
push:
branches:
- master
tags: '*'
pull_request:

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: ${{ github.ref_name != github.event.repository.default_branch || github.ref != 'refs/tags/v*' }}

jobs:
build-and-deploy-docs:
name: "Documentation"
uses: "SciML/.github/.github/workflows/documentation.yml@v1"
secrets: "inherit"
8 changes: 8 additions & 0 deletions docs/Project.toml
@@ -1,3 +1,11 @@
[deps]
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
NeuralLyapunov = "61252e1c-87f0-426a-93a7-3cdb1ed5a867"
NeuralPDE = "315f7962-48a3-4962-8226-d0f33b1235f0"
Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
OptimizationOptimisers = "42dfb2eb-d2b4-4451-abcd-913932933ac1"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
20 changes: 19 additions & 1 deletion docs/make.jl
@@ -4,6 +4,22 @@ using Documenter
DocMeta.setdocmeta!(
NeuralLyapunov, :DocTestSetup, :(using NeuralLyapunov); recursive = true)

MANUAL_PAGES = [
"man.md",
"man/pdesystem.md",
"man/minimization.md",
"man/decrease.md",
"man/structure.md",
"man/roa.md",
"man/policy_search.md",
"man/local_lyapunov.md"
]
DEMONSTRATION_PAGES = [
"demos/damped_SHO.md",
"demos/roa_estimation.md",
"demos/policy_search.md"
]

makedocs(;
modules = [NeuralLyapunov],
authors = "Nicholas Klugman <[email protected]> and contributors",
@@ -14,7 +30,9 @@ makedocs(;
assets = String[]
),
pages = [
"Home" => "index.md"
"Home" => "index.md",
"Manual" => vcat(MANUAL_PAGES, hide("man/internals.md")),
"Demonstrations" => DEMONSTRATION_PAGES
]
)

282 changes: 282 additions & 0 deletions docs/src/demos/damped_SHO.md
@@ -0,0 +1,282 @@
# Damped Simple Harmonic Oscillator

Let's train a neural network to prove the exponential stability of the damped simple harmonic oscillator (SHO).

The damped SHO is represented by the system of differential equations
```math
\begin{align*}
\frac{dx}{dt} &= v \\
\frac{dv}{dt} &= -2 \zeta \omega_0 v - \omega_0^2 x
\end{align*}
```
where ``x`` is the position, ``v`` is the velocity, ``t`` is time, and ``\zeta, \omega_0`` are parameters.

We'll consider just the box domain ``x \in [-5, 5], v \in [-2, 2]``.

## Copy-Pastable Code

```julia
using NeuralPDE, Lux, NeuralLyapunov
using Optimization, OptimizationOptimisers, OptimizationOptimJL
using Random

Random.seed!(200)

######################### Define dynamics and domain ##########################

"Simple Harmonic Oscillator Dynamics"
function f(state, p, t)
ζ, ω_0 = p
pos = state[1]
vel = state[2]
vcat(vel, -2ζ * ω_0 * vel - ω_0^2 * pos)
end
lb = [-5.0, -2.0];
ub = [ 5.0, 2.0];
p = [0.5, 1.0];
fixed_point = [0.0, 0.0];
dynamics = ODEFunction(f; sys = SciMLBase.SymbolCache([:x, :v], [:ζ, :ω_0]))

####################### Specify neural Lyapunov problem #######################

# Define neural network discretization
dim_state = length(lb)
dim_hidden = 10
dim_output = 3
chain = [Lux.Chain(
Dense(dim_state, dim_hidden, tanh),
Dense(dim_hidden, dim_hidden, tanh),
Dense(dim_hidden, 1)
) for _ in 1:dim_output]

# Define training strategy
strategy = QuasiRandomTraining(1000)
discretization = PhysicsInformedNN(chain, strategy)

# Define neural Lyapunov structure
structure = NonnegativeNeuralLyapunov(
dim_output;
δ = 1e-6
)
minimization_condition = DontCheckNonnegativity(check_fixed_point = true)

# Define Lyapunov decrease condition
# The damped SHO decays exponentially at a rate of k = ζ * ω_0, so we train to certify that rate
decrease_condition = ExponentialDecrease(prod(p))

# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
structure,
minimization_condition,
decrease_condition
)

############################# Construct PDESystem #############################

@named pde_system = NeuralLyapunovPDESystem(
dynamics,
lb,
ub,
spec;
p = p
)

######################## Construct OptimizationProblem ########################

prob = discretize(pde_system, discretization)

########################## Solve OptimizationProblem ##########################

res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)
prob = Optimization.remake(prob, u0 = res.u)
res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 500)

########################### Get numerical functions ###########################

net = discretization.phi
θ = res.u.depvar

V, V̇ = get_numerical_lyapunov_function(
net,
θ,
structure,
f,
fixed_point;
p = p
)
```

## Detailed description

In this example, we set up the dynamics as an `ODEFunction` and use a `SciMLBase.SymbolCache` to tell the resulting `PDESystem` what to call our state and parameter variables.

```@setup SHO
using Random
Random.seed!(200)
```

```@example SHO
using NeuralPDE # for ODEFunction and SciMLBase.SymbolCache
"Simple Harmonic Oscillator Dynamics"
function f(state, p, t)
ζ, ω_0 = p
pos = state[1]
vel = state[2]
vcat(vel, -2ζ * ω_0 * vel - ω_0^2 * pos)
end
lb = [-5.0, -2.0];
ub = [ 5.0, 2.0];
p = [0.5, 1.0];
fixed_point = [0.0, 0.0];
dynamics = ODEFunction(f; sys = SciMLBase.SymbolCache([:x, :v], [:ζ, :ω_0]))
```

Setting up the neural network with Lux and the NeuralPDE training strategy is no different from any other physics-informed neural network problem.
For more on that aspect, see the [NeuralPDE documentation](https://docs.sciml.ai/NeuralPDE/stable/).

```@example SHO
using Lux
# Define neural network discretization
dim_state = length(lb)
dim_hidden = 10
dim_output = 3
chain = [Lux.Chain(
Dense(dim_state, dim_hidden, tanh),
Dense(dim_hidden, dim_hidden, tanh),
Dense(dim_hidden, 1)
) for _ in 1:dim_output]
```

```@example SHO
# Define training strategy
strategy = QuasiRandomTraining(1000)
discretization = PhysicsInformedNN(chain, strategy)
```

We now define our Lyapunov candidate structure along with the form of the Lyapunov conditions we'll be using.

For this example, let's use a Lyapunov candidate
```math
V(x) = \lVert \phi(x) \rVert^2 + \delta \log \left( 1 + \lVert x \rVert^2 \right),
```
which structurally enforces nonnegativity, but doesn't guarantee ``V([0, 0]) = 0``.
We therefore don't need a term in the loss function enforcing ``V(x) > 0 \, \forall x \ne 0``, but we do need something enforcing ``V([0, 0]) = 0``.
So, we use [`DontCheckNonnegativity(check_fixed_point = true)`](@ref).
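
For intuition, here is a minimal sketch (not the library's implementation; `candidate_V` and `φx` are illustrative names) of how a candidate of this form combines the vector of network outputs with the ``\delta \log \left( 1 + \lVert x \rVert^2 \right)`` term:

```julia
# Illustrative sketch only: φx is the vector of the dim_output network outputs at
# state x, and δ is the same small constant passed to NonnegativeNeuralLyapunov below.
candidate_V(φx, x; δ = 1e-6) = sum(abs2, φx) + δ * log(1 + sum(abs2, x))
```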

To train for exponential decrease we use [`ExponentialDecrease`](@ref), but we must specify the rate of exponential decrease, which we know in this case to be ``\zeta \omega_0``.
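
As a brief justification of that rate (background math, not part of the library API): writing the dynamics in matrix form, the eigenvalues of the system matrix have real part ``-\zeta \omega_0`` in the underdamped case ``0 < \zeta < 1``,
```math
\begin{pmatrix} \dot{x} \\ \dot{v} \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ -\omega_0^2 & -2 \zeta \omega_0 \end{pmatrix}
\begin{pmatrix} x \\ v \end{pmatrix},
\qquad
\lambda = -\zeta \omega_0 \pm \omega_0 \sqrt{\zeta^2 - 1},
```
so trajectories decay like ``e^{-\zeta \omega_0 t}``. For the parameters used here (``\zeta = 0.5, \omega_0 = 1.0``), that rate is ``\zeta \omega_0 = 0.5``, i.e. `prod(p)`.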

```@example SHO
using NeuralLyapunov
# Define neural Lyapunov structure
structure = NonnegativeNeuralLyapunov(
dim_output;
δ = 1e-6
)
minimization_condition = DontCheckNonnegativity(check_fixed_point = true)
# Define Lyapunov decrease condition
# The damped SHO decays exponentially at a rate of k = ζ * ω_0, so we train to certify that rate
decrease_condition = ExponentialDecrease(prod(p))
# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
structure,
minimization_condition,
decrease_condition
)
# Construct PDESystem
@named pde_system = NeuralLyapunovPDESystem(
dynamics,
lb,
ub,
spec;
p = p
)
```

Now, we solve the PDESystem using NeuralPDE, just as we would any other PINN problem.

```@example SHO
prob = discretize(pde_system, discretization)
using Optimization, OptimizationOptimisers, OptimizationOptimJL
res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)
prob = Optimization.remake(prob, u0 = res.u)
res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 500)
net = discretization.phi
θ = res.u.depvar
```

We can use the result of the optimization problem to build the Lyapunov function and its time derivative as callable Julia functions.

```@example SHO
V, V̇ = get_numerical_lyapunov_function(
net,
θ,
structure,
f,
fixed_point;
p = p
)
```

Now let's see how we did.
We'll evaluate both ``V`` and ``\dot{V}`` on a ``101 \times 101`` grid:

```@example SHO
Δx = (ub[1] - lb[1]) / 100
Δv = (ub[2] - lb[2]) / 100
xs = lb[1]:Δx:ub[1]
vs = lb[2]:Δv:ub[2]
states = Iterators.map(collect, Iterators.product(xs, vs))
V_samples = vec(V(hcat(states...)))
V̇_samples = vec(V̇(hcat(states...)))
# Print statistics
V_min, i_min = findmin(V_samples)
state_min = collect(states)[i_min]
V_min, state_min = if V(fixed_point) ≤ V_min
V(fixed_point), fixed_point
else
V_min, state_min
end
println("V(0.,0.) = ", V(fixed_point))
println("V ∋ [", V_min, ", ", maximum(V_samples), "]")
println("Minimial sample of V is at ", state_min)
println(
"V̇ ∋ [",
minimum(V̇_samples),
", ",
max(V̇(fixed_point), maximum(V̇_samples)),
"]",
)
```

At least at these validation samples, the conditions that ``\dot{V}`` be negative semi-definite and ``V`` be minimized at the origin are nearly satisfied.
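
As a quick, informal sanity check on those same grid samples (a heuristic, not a proof), we can count how many validation points violate the decrease condition:

```julia
# Informal check only: count grid samples where V̇ fails to be nonpositive.
println("Samples with V̇ > 0: ", count(>(0), V̇_samples), " of ", length(V̇_samples))
```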

```@example SHO
using Plots
p1 = plot(xs, vs, V_samples, linetype = :contourf, title = "V", xlabel = "x", ylabel = "v");
p1 = scatter!([0], [0], label = "Equilibrium");
p2 = plot(
xs,
vs,
V̇_samples,
linetype = :contourf,
title = "V̇",
xlabel = "x",
ylabel = "v",
);
p2 = scatter!([0], [0], label = "Equilibrium");
plot(p1, p2)
```

Each sublevel set of ``V`` completely contained in the plot above has been verified as a subset of the region of attraction.
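
For a rough, sample-based estimate of how large such a sublevel set can be (a heuristic over the validation grid, not a formal verification; `c` below is an illustrative name), one could take the largest level that excludes every sampled point where ``\dot{V} \ge 0``:

```julia
# Heuristic only: the largest level c such that every sampled point with V(x) ≤ c
# also satisfies V̇(x) < 0. The set {x : V(x) ≤ c} within the plotted box is then a
# sample-based estimate of part of the region of attraction.
violating = V̇_samples .≥ 0
c = any(violating) ? minimum(V_samples[violating]) : maximum(V_samples)
println("Estimated sublevel set: {x : V(x) ≤ ", c, "}")
```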