A bug with no noise applied (#76)
* specifying the physical coefficients in the examples

* fixed a bug in Dataset class which prevented noise

* Dealing with normalization and history

* adding plotting training history functionality

* Library1D treats theta output consistently with ODE example plus comments, stylistic choices

* adding more examples

* applying `black src/deepymod` for formatting
georgemilosh authored Oct 30, 2023
1 parent 47c3366 commit 8e68325
Showing 35 changed files with 10,336 additions and 325 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -55,3 +55,5 @@ site/
.eggs/
*events.out.tfevents.*
*.pt
*.pdf
*.png
10 changes: 5 additions & 5 deletions README.md
@@ -2,11 +2,11 @@
--------------------------------------------------------------------------------
<img alt="PyPI" src="https://img.shields.io/pypi/v/deepymod?style=flat-square">

DeePyMoD is a modular framework for model discovery of PDEs and ODEs from noise data. The framework is comprised of four components, that can seperately be altered: i) A function approximator to construct a surrogate of the data, ii) a function to construct the library of features, iii) a sparse regression algorithm to select the active components from the feature library and iv) a constraint on the function approximator, based on the active components.
DeePyMoD is a modular framework for model discovery of PDEs and ODEs from noisy data. The framework consists of four components that can be altered separately: i) a function approximator to construct a surrogate of the data, ii) a function to construct the library of features, iii) a sparse regression algorithm to select the active components from the feature library, and iv) a constraint on the function approximator, based on the active components.

![Screenshot](docs/figures/framework.png)

More information can be found in the following two papers: , [arXiv:2011.04336](https://arxiv.org/abs/2011.04336), [arXiv:1904.09406](http://arxiv.org/abs/1904.09406) and the full documentation is availeble on [phimal.github.io/DeePyMoD/](https://phimal.github.io/DeePyMoD/).
More information can be found in the following two papers: [arXiv:2011.04336](https://arxiv.org/abs/2011.04336) and [arXiv:1904.09406](http://arxiv.org/abs/1904.09406), and the full documentation is available on [phimal.github.io/DeePyMoD/](https://phimal.github.io/DeePyMoD/).

**What's the use case?** Classical model discovery methods struggle with elevated noise levels and sparse datasets due to the low accuracy of numerical differentiation. DeepMoD can handle high noise and sparse datasets, making it well suited for model discovery on actual experimental data.

@@ -16,7 +16,7 @@ More information can be found in the following two papers: , [arXiv:2011.04336](

## Dependencies and CUDA
We support Python 3.6, 3.7 and 3.8.
We rely on the following packages, they will be installed in the pip installation proces for you:
We rely on the following packages; they will be installed for you during the pip installation process:
``` numpy, torch, sklearn, pysindy, natsort, tensorboard, matplotlib```


@@ -44,13 +44,13 @@ and then install it from the cloned `DeePyMoD` directory using

# Features

* **Many example notebooks** We have implemented a varyity of examples ranging from 2D Advection Diffusion, Burgers' equation to non-linear, higher order ODE's If you miss any example, don't hesitate to give us a heads-up.
* **Many example notebooks** We have implemented a variety of examples, ranging from 2D advection-diffusion and Burgers' equation to non-linear, higher-order ODEs. If you miss any example, don't hesitate to give us a heads-up.

* **Extendable** DeePyMoD is designed to be easily extendable and modifiable. You can simply plug in your own cost function, library or training regime.

* **Automatic library** The library and coefficient vectors are automatically constructed from the maximum order of polynomial and differentiation. If that doesn't cut it for your use case, it's easy to plug in your own library function.

* **Extensive logging** We provide a simple command line logger to see how training is going and an extensive custom Tensorboard logger.

* **Fast** Depending on the size of the data-set DeepMoD, running a model search with DeepMoD takes of the order of minutes/ tens of minutes on a standard CPU. Running the code on GPU's drastically improves performence.
* **Fast** Depending on the size of the dataset, running a model search with DeepMoD takes on the order of minutes to tens of minutes on a standard CPU. Running the code on GPUs drastically improves performance.
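
To make the four-component description above concrete, here is a minimal sketch of how they might be wired together. The module paths, class names and constructor arguments (`NN`, `Library1D`, `Threshold`, `LeastSquares`, `DeepMoD`) are assumptions based on the example notebooks, not a guaranteed API; consult the notebooks listed below for the exact usage.

```python
# Hedged sketch: module paths and signatures are assumed, check the example
# notebooks for the exact API.
from deepymod import DeepMoD
from deepymod.model.func_approx import NN               # i) function approximator
from deepymod.model.library import Library1D            # ii) feature library
from deepymod.model.sparse_estimators import Threshold  # iii) sparse regression
from deepymod.model.constraint import LeastSquares      # iv) constraint

network = NN(2, [30, 30, 30, 30], 1)             # surrogate: (t, x) -> u
library = Library1D(poly_order=2, diff_order=3)  # candidate terms u^i * d^j u / dx^j
estimator = Threshold(0.1)                       # keep coefficients above the threshold
constraint = LeastSquares()                      # constrain the network to the active terms

model = DeepMoD(network, library, estimator, constraint)
# Training would follow with the training loop from the notebooks.
```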

639 changes: 639 additions & 0 deletions examples/ODE_2DOF_sparsity.ipynb
546 changes: 421 additions & 125 deletions examples/ODE_Example_coupled_nonlin.ipynb
941 changes: 941 additions & 0 deletions examples/ODE_Example_coupled_nonlin_norm.ipynb
416 changes: 416 additions & 0 deletions examples/ODE_Lotka_Voltera.ipynb
444 changes: 444 additions & 0 deletions examples/ODE_Lotka_Voltera_unnorm.ipynb
614 changes: 614 additions & 0 deletions examples/PDE_2D_Advection-Diffusio_noisy.ipynb
246 changes: 205 additions & 41 deletions examples/PDE_2D_Advection-Diffusion.ipynb
432 changes: 432 additions & 0 deletions examples/PDE_2D_Reaction_Diffusion.ipynb
408 changes: 408 additions & 0 deletions examples/PDE_Allen_Cahn.ipynb
396 changes: 396 additions & 0 deletions examples/PDE_Allen_Cahn_noisy.ipynb
173 changes: 129 additions & 44 deletions examples/PDE_Burgers.ipynb
503 changes: 503 additions & 0 deletions examples/PDE_Burgers_Sine.ipynb
511 changes: 511 additions & 0 deletions examples/PDE_Burgers_noise.ipynb
389 changes: 389 additions & 0 deletions examples/PDE_Cahn_Hilliard_Sine.ipynb
383 changes: 383 additions & 0 deletions examples/PDE_Chafee_Infante.ipynb
234 changes: 189 additions & 45 deletions examples/PDE_KdV.ipynb
541 changes: 541 additions & 0 deletions examples/PDE_KdV_noisy.ipynb
550 changes: 550 additions & 0 deletions examples/PDE_Kuramoto_Sivashinsky.ipynb
403 changes: 403 additions & 0 deletions examples/PDE_Kuramoto_Sivashinsky_Chaotic.ipynb
379 changes: 379 additions & 0 deletions examples/PDE_Kuramoto_Sivashinsky_Cos.ipynb
550 changes: 550 additions & 0 deletions examples/PDE_Kuramoto_Sivashinsky_noise.ipynb
622 changes: 622 additions & 0 deletions examples/PDE_keller_segel.ipynb
(Large notebook diffs are not rendered.)
2 changes: 1 addition & 1 deletion setup.cfg
@@ -40,7 +40,7 @@ install_requires = numpy
# The usage of test_requires is discouraged, see `Dependency Management` docs
# tests_require = pytest; pytest-cov
# Require a specific Python version, e.g. Python 2.7 or >= 3.4
python_requires = >=3.6.*, !=3.9.*
python_requires = >=3.6, !=3.9
[options.packages.find]
where = src
exclude =
2 changes: 2 additions & 0 deletions src/deepymod/analysis/__init__.py
@@ -8,6 +8,8 @@
-----
load_tensorboard converts the tensorboard files into a Pandas DataFrame.
plot_history plots the training history of the model.
"""
from .load_tensorboard import load_tensorboard
from .load_tensorboard import plot_history
150 changes: 149 additions & 1 deletion src/deepymod/analysis/load_tensorboard.py
@@ -43,6 +43,154 @@ def load_tensorboard(path: str) -> pd.DataFrame:
tags = [path[0].split("/")[-1]]

for idx, tag in enumerate(tags):
df[tag] = data[idx]
try:
df[tag] = data[idx]
except ValueError: # more debugging info
print(
f"Warning: Either the {tag = } of `df` or {idx = } of `data` do not exist! Check for pre-existing saved files. "
)
df.index = steps
return df


def plot_history(foldername: str):
"""Plots the training history of the model."""
history = load_tensorboard(foldername)
fig, axs = plt.subplots(1, 4, figsize=(20, 5))

for history_key in history.keys():
history_key_parts = history_key.split("_")
if history_key_parts[0] == "loss":
if history_key_parts[-1] == "0":
axs[0].semilogy(
history[history_key],
label=history_key_parts[1] + "_" + history_key_parts[-1],
linestyle="--",
)
elif history_key_parts[-1] == "1":
axs[0].semilogy(
history[history_key],
label=history_key_parts[1] + "_" + history_key_parts[-1],
linestyle=":",
)
else:
axs[0].semilogy(
history[history_key],
label=history_key_parts[1] + "_" + history_key_parts[-1],
linestyle="-",
)
if history_key_parts[0] == "remaining":
axs[0].semilogy(
history[history_key],
label=history_key_parts[1]
+ "_"
+ history_key_parts[3]
+ "_"
+ history_key_parts[4],
linestyle="-.",
)
if history_key_parts[0] == "coeffs":
if history_key_parts[2] == "0":
axs[1].plot(
history[history_key],
label=history_key_parts[2]
+ "_"
+ history_key_parts[3]
+ "_"
+ history_key_parts[4],
linestyle="--",
)
elif history_key_parts[2] == "1":
axs[1].plot(
history[history_key],
label=history_key_parts[2]
+ "_"
+ history_key_parts[3]
+ "_"
+ history_key_parts[4],
linestyle=":",
)
else:
axs[1].plot(
history[history_key],
label=history_key_parts[2]
+ "_"
+ history_key_parts[3]
+ "_"
+ history_key_parts[4],
linestyle="-",
)
if history_key_parts[0] == "unscaled":
if history_key_parts[3] == "0":
axs[2].plot(
history[history_key],
label=history_key_parts[3]
+ "_"
+ history_key_parts[4]
+ "_"
+ history_key_parts[5],
linestyle="--",
)
elif history_key_parts[3] == "1":
axs[2].plot(
history[history_key],
label=history_key_parts[3]
+ "_"
+ history_key_parts[4]
+ "_"
+ history_key_parts[5],
linestyle=":",
)
else:
axs[2].plot(
history[history_key],
label=history_key_parts[3]
+ "_"
+ history_key_parts[4]
+ "_"
+ history_key_parts[5],
linestyle="-",
)
if history_key_parts[0] == "estimator":
if history_key_parts[3] == "0":
axs[3].plot(
history[history_key],
label=history_key_parts[3]
+ "_"
+ history_key_parts[4]
+ "_"
+ history_key_parts[5],
linestyle="--",
)
elif history_key_parts[3] == "1":
axs[3].plot(
history[history_key],
label=history_key_parts[3]
+ "_"
+ history_key_parts[4]
+ "_"
+ history_key_parts[5],
linestyle=":",
)
else:
axs[3].plot(
history[history_key],
label=history_key_parts[3]
+ "_"
+ history_key_parts[4]
+ "_"
+ history_key_parts[5],
linestyle="-",
)

# axs[0].set_ylim([-2, 2])
axs[1].set_ylim([-2, 2])
axs[2].set_ylim([-2, 2])
axs[3].set_ylim([-2, 2])

axs[0].legend()
axs[1].legend()
axs[2].legend()
axs[3].legend()

plt.show()
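
The new helpers can be combined as in the sketch below; `"runs/burgers_experiment/"` is a hypothetical Tensorboard log folder (whatever directory your training run wrote its event files to):

```python
from deepymod.analysis import load_tensorboard, plot_history

logdir = "runs/burgers_experiment/"  # hypothetical log folder from a training run
history = load_tensorboard(logdir)   # pandas DataFrame, one column per logged tag
print(history.columns)               # loss_*, coeffs_*, unscaled_*, estimator_* tags
plot_history(logdir)                 # four panels: losses, coeffs, unscaled, estimator
```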
11 changes: 5 additions & 6 deletions src/deepymod/data/base.py
@@ -32,9 +32,9 @@ def __init__(
"""A dataset class that loads the data, preprocesses it and lastly applies subsampling to it
Args:
load_function (func):Must return torch tensors in the format coordinates, data
load_function (func): Must return torch tensors in the format: (coordinates, data)
shuffle (bool, optional): Shuffle the data. Defaults to True.
apply_normalize (func)
apply_normalize (func): if not None, apply this function to the data for normalization. Defaults to None.
subsampler (Subsampler, optional): Add some subsampling function. Defaults to None.
load_kwargs (dict, optional): kwargs to pass to the load_function. Defaults to {}.
preprocess_kwargs (dict, optional): (optional) arguments to pass to the preprocess method. Defaults to { "random_state": 42, "noise_level": 0.0, "normalize_coords": False, "normalize_data": False, }.
@@ -122,9 +122,9 @@ def preprocess(
normalize_coords (bool): apply normalization to the coordinates
normalize_data (bool): apply normalization to the data
"""

print("Preprocessing data")
# add noise
y_processed = y + self.apply_noise(y, noise_level, random_state)
y_processed = self.apply_noise(y, noise_level, random_state)
# normalize coordinates
if normalize_coords:
X_processed = self.apply_normalize(X)
@@ -133,8 +133,7 @@
# normalize data
if normalize_data:
y_processed = self.apply_normalize(y_processed)
else:
y_processed = y

return X_processed, y_processed

@staticmethod
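
The removed lines above show the bug named in the commit title: `y + self.apply_noise(...)` was reduced to `self.apply_noise(...)` (suggesting `apply_noise` returns the already-noisy signal rather than just the noise), and the old `else: y_processed = y` branch overwrote the noisy data with the clean `y` whenever `normalize_data` was `False`, so no noise ever reached training. A standalone sketch of the corrected flow follows; the Gaussian noise model here is an assumption for illustration, not the library's exact implementation.

```python
import torch


def apply_noise(y: torch.Tensor, noise_level: float, random_state: int) -> torch.Tensor:
    """Return the noise-corrupted data (assumed noise model: scaled white noise)."""
    torch.manual_seed(random_state)
    return y + noise_level * torch.std(y) * torch.randn_like(y)


def preprocess(y: torch.Tensor, noise_level: float = 0.1, normalize_data: bool = False) -> torch.Tensor:
    y_processed = apply_noise(y, noise_level, random_state=42)  # noise is always applied
    if normalize_data:
        y_processed = (y_processed - y_processed.mean()) / y_processed.std()
    # No `else: y_processed = y` here: that branch is what previously erased the noise.
    return y_processed
```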
10 changes: 5 additions & 5 deletions src/deepymod/data/burgers/burgers.py
@@ -30,7 +30,7 @@ def burgers_delta(x: torch.tensor, t: torch.tensor, v: float, A: float):

u = (
torch.sqrt(v / (pi * t))
* ((torch.exp(R) - 1) * torch.exp(-(z ** 2)))
* ((torch.exp(R) - 1) * torch.exp(-(z**2)))
/ (1 + (torch.exp(R) - 1) / 2 * torch.erfc(z))
)
coords = torch.cat((t.reshape(-1, 1), x.reshape(-1, 1)), dim=1)
@@ -57,7 +57,7 @@ def burgers_cos(
[Tensor]: solution.
"""

z = v * k ** 2 * t
z = v * k**2 * t

u = (2 * v * a * k * torch.exp(-z) * torch.sin(k * x)) / (
b + a * torch.exp(-z) * torch.cos(k * x)
@@ -85,10 +85,10 @@ def burgers_sawtooth(x: torch.tensor, t: torch.tensor, v: float) -> torch.tensor
z_right = x - 4 * t - 2 * pi
l = 4 * v * (t + 1)

phi = torch.exp(-(z_left ** 2) / l) + torch.exp(-(z_right ** 2) / l)
phi = torch.exp(-(z_left**2) / l) + torch.exp(-(z_right**2) / l)
dphi_x = -2 * z_left / l * torch.exp(
-(z_left ** 2) / l
) - 2 * z_right / l * torch.exp(-(z_right ** 2) / l)
-(z_left**2) / l
) - 2 * z_right / l * torch.exp(-(z_right**2) / l)
u = -2 * v * dphi_x / phi + 4
coords = torch.cat((t.reshape(-1, 1), x.reshape(-1, 1)), dim=1)
return coords, u.view(-1, 1)
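
A hedged sketch of sampling `burgers_delta` on a space-time grid to build a dataset; the import path, grid bounds and the values `v=0.1`, `A=1.0` are illustrative assumptions, not values taken from this commit:

```python
import torch

from deepymod.data.burgers import burgers_delta  # assumed import path

# Flat tensor-product grid; t starts above zero to avoid the 1/sqrt(t) singularity.
x = torch.linspace(-3.0, 4.0, 100)
t = torch.linspace(0.5, 5.0, 50)
x_grid = x.repeat(len(t))
t_grid = t.repeat_interleave(len(x))

coords, u = burgers_delta(x_grid, t_grid, v=0.1, A=1.0)
print(coords.shape, u.shape)  # expected: torch.Size([5000, 2]) torch.Size([5000, 1])
```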
8 changes: 4 additions & 4 deletions src/deepymod/data/diffusion/diffusion.py
@@ -24,8 +24,8 @@ def diffusion_gaussian(
Returns:
[Tensor]: solution.
"""
u = (2 * pi * sigma ** 2 + 4 * pi * D * t) ** (-1 / 2) * torch.exp(
-((x - x0) ** 2) / (2 * sigma ** 2 + 4 * D * t)
u = (2 * pi * sigma**2 + 4 * pi * D * t) ** (-1 / 2) * torch.exp(
-((x - x0) ** 2) / (2 * sigma**2 + 4 * D * t)
)
coords = torch.cat((t.reshape(-1, 1), x.reshape(-1, 1)), dim=1)
return coords, u.view(-1, 1)
@@ -54,9 +54,9 @@ def advection_diffusion_gaussian_2d(
Returns:
[Tensor]: solution.
"""
u = (2 * pi * sigma ** 2 + 4 * pi * D * t) ** (-1) * torch.exp(
u = (2 * pi * sigma**2 + 4 * pi * D * t) ** (-1) * torch.exp(
-((x[:, 0:1] - x0[0] - v[0] * t) ** 2 + (x[:, 1:2] - x0[1] - v[1] * t) ** 2)
/ (2 * sigma ** 2 + 4 * D * t)
/ (2 * sigma**2 + 4 * D * t)
)
coords = torch.cat((t.reshape(-1, 1), x.reshape(-1, 2)), dim=1)
return coords, u.view(-1, 1)
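
As a sanity check on the Gaussian profile above, the sketch below verifies with autograd that it satisfies the diffusion equation u_t = D u_xx; the values of `D`, `sigma` and `x0` are illustrative:

```python
import math

import torch

D, sigma, x0 = 0.5, 0.3, 0.0
x = torch.randn(8, dtype=torch.float64, requires_grad=True)
t = torch.rand(8, dtype=torch.float64, requires_grad=True) + 0.1

# Same closed-form solution as diffusion_gaussian above.
u = (2 * math.pi * sigma**2 + 4 * math.pi * D * t) ** (-1 / 2) * torch.exp(
    -((x - x0) ** 2) / (2 * sigma**2 + 4 * D * t)
)

u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
u_xx = torch.autograd.grad(u_x.sum(), x)[0]

print(torch.allclose(u_t, D * u_xx))  # expected: True
```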