Commit

Merge pull request #16 from mesh-adaptation/pyproject
Move to `pyproject.toml` approach
jwallwork23 authored Sep 18, 2024
2 parents 0a27351 + aff6c0d commit c959b4f
Showing 115 changed files with 5,117 additions and 6,435 deletions.
29 changes: 24 additions & 5 deletions .github/workflows/test_warpmesh.yml
@@ -10,34 +10,53 @@ on:
jobs:
test-warpmesh:
name: Test WarpMesh
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
container:
image: firedrakeproject/firedrake:latest
options: --user root
steps:
- uses: actions/checkout@v3

- name: Cleanup
if: ${{ always() }}
run: |
cd ..
rm -rf build
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: 3.8

- name: Lint check
if: ${{ always() }}
run: |
. /home/firedrake/firedrake/bin/activate
python3 -m pip install ruff
ruff check
- name: Install Movement
run: |
. /home/firedrake/firedrake/bin/activate
git clone https://github.com/mesh-adaptation/movement.git
cd movement
python3 -m pip install -e .
- name: Install PyTorch
run: |
. /home/firedrake/firedrake/bin/activate
python3 -m pip install torch --index-url https://download.pytorch.org/whl/cpu
- name: Install PyTorch3d
run: |
. /home/firedrake/firedrake/bin/activate
python3 -m pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
- name: Install other WarpMesh dependencies
run: |
. /home/firedrake/firedrake/bin/activate
python3 -m pip install -r requirements.txt
- name: Install WarpMesh
run: |
. /home/firedrake/firedrake/bin/activate
python3 -m pip install -e .
- name: Run WarpMesh test suite
run: |
. /home/firedrake/firedrake/bin/activate
4 changes: 0 additions & 4 deletions .gitmodules

This file was deleted.

7 changes: 7 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,7 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.3.7
hooks:
- id: ruff # run the linter
args: [ --fix ]
- id: ruff-format # run the formatter
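
For reference, a minimal sketch of how this configuration is typically used (assuming `pre-commit` is installed into the active Python environment):

```shell
# Install pre-commit, register the hooks above, and run them once over the repo
python3 -m pip install pre-commit
pre-commit install            # run ruff automatically on every commit
pre-commit run --all-files    # lint and format the whole repository once
```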
121 changes: 75 additions & 46 deletions README.md
@@ -6,54 +6,85 @@

## 🔎 About this project:

In the famous TV series Star Trek, the starship Enterprise is able to travel faster than light by warping space-time. In this project, we 'warp' the underlying mesh of a discretised PDE problem to win some computational time. The node of the mesh is moved to the ideal position guided by a GNN, which is supposed to be faster than numerical solvers.
In the famous TV series Star Trek, the starship Enterprise is able to travel
faster than light by warping space-time. In this project, we 'warp' the
underlying mesh of a discretised PDE problem to save computational time.
The nodes of the mesh are moved to ideal positions, guided by a Graph Neural
Network (GNN), which is expected to be faster than a numerical solver.


The latest test status:

[![WarpMesh](https://github.com/chunyang-w/WarpMesh/actions/workflows/test_warpmesh.yml/badge.svg)](https://github.com/chunyang-w/WarpMesh/actions/workflows/test_warpmesh.yml)
[![WarpMesh](https://github.com/mesh-adaptation/UM2N/actions/workflows/test_warpmesh.yml/badge.svg)](https://github.com/mesh-adaptation/UM2N/actions/workflows/test_warpmesh.yml)


## 🛠️ Installation

### The easy way:

Just navigate to **project root** folder, open terminaland execute the `install/install.sh` shell script:
### All-in-one script

Just navigate to the **project root** folder, open a terminal and execute the
`install.sh` shell script:
``` shell
sh ./install/install.sh
./install.sh
```
This will install [Firedrake](https://www.firedrakeproject.org/download.html)
and [Movement](https://github.com/mesh-adaptation/movement) under the `install`
folder, as well as the `WarpMesh` package.


This will install [firedrake](https://www.firedrakeproject.org/download.html) and [movement](https://github.com/mesh-adaptation/movement) under `install` folder, as well as the `WarpMesh` package.
### Step-by-step approach

1. The mesh generation relies on Firedrake, which is a Python package. To
install Firedrake, please follow the instructions on
[firedrakeproject.org](https://www.firedrakeproject.org/download.html).

### The hard way (in case the easy way did not went well or you want to challenge yourself):
2. Use the virtual environment provided by Firedrake to install the dependencies
of this project. The virtual environment is located at
`/path/to/firedrake/bin/activate`. To activate the virtual environment, run
`source /path/to/firedrake/bin/activate`.

1. The mesh generation relies on firedrake, which is a python package. To install firedrake, please follow the instructions on [firedrakeproject.org](https://www.firedrakeproject.org/download.html).
3. The movement of the mesh is implemented by
[mesh-adaptation/movement](https://github.com/mesh-adaptation/movement).
To install it in the Firedrake virtual environment, follow these
[instructions](https://github.com/mesh-adaptation/mesh-adaptation-docs/wiki/Installation-Instructions).

2. The movement of the mesh is implemented by `mesh-adaptation/movement`, install it in the firedrake venv. To install, run `pip install -e .` in the `mesh-adaptation/movement` directory. Here is a link to that repo: [mesh-adaptation/movement](https://github.com/mesh-adaptation/movement).
4. Install PyTorch into the virtual environment by following the instructions
on the [PyTorch webpage](https://pytorch.org/get-started/locally).

3. Use the venv provided by firedrake to install the dependencies of this project. The venv is located at `~/firedrakevenv/bin/activate`. To activate the venv, run `source ~/firedrakevenv/bin/activate`. Then run `pip install -r requirements.txt` to install the dependencies.
5. Install PyTorch3d into the virtual environment by running the command
```
python3 -m pip install "git+https://github.com/facebookresearch/pytorch3d"
```

4. Run `pip install -e .` in the root directory of this project to install the package.
6. Run `pip install .` in the root directory of this project to install the
package and its other dependencies.
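
For reference, here is a condensed sketch of steps 2-6, mirroring the CI workflow shown earlier in this commit (the Firedrake path is a placeholder and must be replaced with your actual installation):

```shell
# Activate the Firedrake virtual environment (step 2)
source /path/to/firedrake/bin/activate

# Install Movement (step 3)
git clone https://github.com/mesh-adaptation/movement.git
python3 -m pip install -e ./movement

# Install PyTorch (CPU build, step 4) and PyTorch3d (step 5)
python3 -m pip install torch --index-url https://download.pytorch.org/whl/cpu
python3 -m pip install "git+https://github.com/facebookresearch/pytorch3d"

# Install WarpMesh and its remaining dependencies from the project root (step 6)
python3 -m pip install .
```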


## 💿 Dataset generation

In case you do not wish to generate the dataset by yourself, here is a pre-generated dataset on google drive: [link](https://drive.google.com/drive/folders/1sQ-9zWbTryCXwihqaqazrQ4Vp1MRdBPK?usp=sharing) In this folder you can find all cases used to train/test the model. The naming convention of the file is 'z=<0,1>_n_dist={number_of_distribution_used}_max_dist={maximum_distribution_used}_<{number_of_grid_in_x_direction}_{number_of_grid_in_y_direction}>_n={number_of_samples}_{data_set_type}'
In case you do not wish to generate the dataset by yourself, here is a
pre-generated dataset on Google Drive:
[link](https://drive.google.com/drive/folders/1sQ-9zWbTryCXwihqaqazrQ4Vp1MRdBPK?usp=sharing).
In this folder you can find all cases used to train/test the model. The naming
convention of the files is
`z=<0,1>_n_dist={number_of_distribution_used}_max_dist={maximum_distribution_used}_<{number_of_grid_in_x_direction}_{number_of_grid_in_y_direction}>_n={number_of_samples}_{data_set_type}`.

if `n_dist = None`, then the number of gaussian distribution used will be randomly choosed from 1 to `max_dist`, otherwise, `n_dist` will be used to generate a fixed number of gaussian distribution version dataset.
If `n_dist = None`, then the number of Gaussian distributions used will be
randomly chosen from 1 to `max_dist`; otherwise, `n_dist` will be used to
generate a dataset with a fixed number of Gaussian distributions.

the {data_set_type} will be either 'smpl' or 'cmplx', indicating whether the dataset is isotropic or anisotropic.
The `{data_set_type}` will be either `'smpl'` or `'cmplx'`, indicating whether
the dataset is isotropic or anisotropic.
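
For illustration, here is a hypothetical folder name decoded against this convention (the specific values are made up):

```shell
# z=1_n_dist=None_max_dist=6_<15_15>_n=100_smpl
#   z=1          -> the z flag set to 1
#   n_dist=None  -> number of Gaussian distributions drawn randomly from 1 to max_dist
#   max_dist=6   -> at most 6 Gaussian distributions per sample
#   <15_15>      -> 15 grid cells in the x direction, 15 in the y direction
#   n=100        -> 100 samples in the dataset
#   smpl         -> isotropic dataset ('cmplx' would mean anisotropic)
```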

after download, you should put the downloaded folder `helmholtz` under `data/dataset` folder.
After downloading, you should put the downloaded `helmholtz` folder under the
`data/dataset` folder.
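
A minimal sketch of this placement step, assuming the dataset was extracted to `~/Downloads/helmholtz`:

```shell
# Create the dataset directory if needed and move the downloaded folder into it
mkdir -p data/dataset
mv ~/Downloads/helmholtz data/dataset/
```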

### Generate the dataset by yourself

```{shell}
. script/make_dataset.sh
```
This command will make following datasets by solving Monge-Ampère eq with following PDEs:
This command will generate datasets by solving the Monge-Ampère equation with
the following PDEs:

+ Burgers equation (on a square domain)
+ Helmholtz equation (on both square and random polygon domains)
@@ -66,52 +66,97 @@ n_dist_end=10
n_grid_start=15
n_grid_end=35
```

defined in `script/make_dataset.sh` to generate datasets of different sizes.

The number of samples in the dataset can be changed by modifying the variable `n_sample` in `script/build_helmholtz_dataset`.
The number of samples in the dataset can be changed by modifying the variable
`n_sample` in `script/build_helmholtz_dataset`.
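
For example, one way to adjust the grid-size range before regenerating the datasets (a sketch using GNU `sed`; the variable names come from the snippet above and the values are arbitrary):

```shell
# Edit the grid-size range in place, then rerun the dataset generation script
sed -i 's/^n_grid_start=.*/n_grid_start=20/' script/make_dataset.sh
sed -i 's/^n_grid_end=.*/n_grid_end=30/' script/make_dataset.sh
. script/make_dataset.sh
```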

## 🚀 Train the model

A training notebook is provided: `script/train_warpmesh.ipynb`. Further training details can be found in the notebook.
A training notebook is provided: `script/train_warpmesh.ipynb`. Further training
details can be found in the notebook.

Here is also a link to pretrained models: [link](https://drive.google.com/drive/folders/1P_JMpU1qmLdmbGTz8fL5VO-lEBoP3_2n?usp=sharing)
Here is also a link to pre-trained models:
[link](https://drive.google.com/drive/folders/1P_JMpU1qmLdmbGTz8fL5VO-lEBoP3_2n?usp=sharing)
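
One way to launch the training notebook (a sketch that assumes Jupyter is installed into the active Firedrake environment):

```shell
source /path/to/firedrake/bin/activate
python3 -m pip install notebook              # if Jupyter is not already available
jupyter notebook script/train_warpmesh.ipynb
```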

## 📊 Evaluate the model

There are a set of visulisation scipypt under `script/` folder. The script can be used to evaluate the model performance.
There is a set of visualisation scripts under the `script/` folder. These
scripts can be used to evaluate the model's performance.

**Bear in mind that the path to datasets/model_weight in those files need calibration**
**Bear in mind that the paths to the datasets and model weights in those files
need calibration.**
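
A hypothetical invocation, assuming the hard-coded paths in the scripts have already been pointed at your datasets and model weights:

```shell
source /path/to/firedrake/bin/activate
python3 script/evaluate.py   # evaluate a trained model
python3 script/compare.py    # compare the performance of different models
```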

## 📖 Documentation
The documentation is generated by sphinx. To build the documentation,under the `docs` folder.
The documentation is generated by Sphinx. To build the documentation, run
Sphinx under the `docs` folder.
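
A standard Sphinx invocation along these lines should work (a sketch; it assumes Sphinx is installed and uses the `docs/conf.py` and `docs/index.rst` sources listed in the project layout below):

```shell
python3 -m pip install sphinx
sphinx-build -b html docs docs/_build/html   # build the HTML docs into docs/_build/html
```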


## 🧩 Project Layout

```
├── warpmesh (impelementation of the project)
├── warpmesh (Implementation of the project)
│ ├── __init__.py
│ ├── generator (dataset generator)
│ ├── processor (data processor)
│ ├── helper (helper functions)
│ ├── loader (customized dataset & dataloader)
│ ├── generator (Dataset generator)
│ ├── processor (Data processor)
│ ├── helper (Helper functions)
│ ├── loader (Customized dataset and dataloader)
│ ├── model (MRN and M2N model implementation)
│ └── test (simpe tests for model)
├── install (installation scripts for dependencies)
│ ├── firedrake
│ ├── install.sh
│ └── movement
├── data (place gnereated dataset here)
│ └── test (Simple tests for the model)
├── data (Datasets are generated here)
│ ├── dataset
│ └── output
├── docs (documentation)
├── docs (Documentation)
│ ├── conf.py
│ └── index.rst
├── script (Utility scripts)
│ ├── make_dataset.sh (make datasets fo different sizes)
│ ├── build_helmholtz_dataset.py (build helmholtz dataset)
│ ├── compare.py (compare the performance of different models)
│ ├── make_dataset.sh (Script for making datasets of different sizes)
│ ├── build_helmholtz_dataset.py (Build the Helmholtz dataset)
│ ├── compare.py (Compare the performance of different models)
│ ├── evaluate.py
│ ├── gradual_change.py
│ ├── plot.py
@@ -122,15 +154,12 @@ The documentation is generated by sphinx. To build the documentation,under the `
│ ├── play_dataset.py
│ ├── test_import.py
│ └── ...
├── setup.py
├── requirements.txt
├── README.md
├── install.sh (Installation script for UM2N and its dependencies)
├── pyproject.toml (Top-level metadata for Python project)
└── README.md (Project summary and useful information)
```
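
As a quick post-installation sanity check, the import test listed above can be run directly (an assumption; the script may require the Firedrake environment to be active):

```shell
source /path/to/firedrake/bin/activate
python3 script/test_import.py   # should exit cleanly if the package imports correctly
```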


## Useful thing: delete plot dir

## Useful thing: delete plot directory

```
find ./ -type d -name "plot" -exec rm -rf {} +
35 changes: 18 additions & 17 deletions combine_vis.py
@@ -1,12 +1,13 @@
import glob
import os
import pickle
import glob
import yaml

import firedrake as fd
import matplotlib.pyplot as plt

# import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import firedrake as fd
import yaml

# model_names = ["M2N", "M2N", "M2T", "M2T"]
# run_ids = ["jetaq10f", "dglbbrdq", "m9fqgqnb", "boj2eks9"]
@@ -22,7 +23,7 @@


# model_names = ["M2N", "MRTransformer", "M2T"]#, "M2T"]
model_names = ["M2N", "M2N_T"]#, "M2T"]
model_names = ["M2N", "M2N_T"] # , "M2T"]
# run_ids = ["cyzk2mna", "u4uxcz1e", "99zrohiu", "ig1np6kx"]
# run_id_model_mapping = {
# "cyzk2mna": "M2N",
@@ -32,13 +33,13 @@
# }

# run_ids = ["g86hj04w", "3sicl8ny", "npouut8z"]#, "32gs384i"]
run_ids = ["g86hj04w", "n4t1fqq2"]#, "32gs384i"]
run_ids = ["g86hj04w", "n4t1fqq2"] # , "32gs384i"]
run_id_model_mapping = {
"g86hj04w": "M2N",
# "4u40se08": "M2N-en",
# "3sicl8ny": "MRN",
# "npouut8z": "M2T-w-edge",
"n4t1fqq2": "UM2N"
"n4t1fqq2": "UM2N",
}

trained_epoch = 999
@@ -92,7 +93,7 @@

num_vis = 100
rows = 3
head_cols = 3 -1
head_cols = 3 - 1
cols = head_cols + len(run_ids)
for n_v in range(num_vis):
print(f"=== Visualizing number {n_v} of {dataset_name} ===")
@@ -114,12 +115,12 @@
mesh_model = fd.UnitSquareMesh(n_grid, n_grid)
mesh_fine = fd.UnitSquareMesh(100, 100)
else:
mesh_og = fd.Mesh(os.path.join(dataset_path, "mesh", f"mesh.msh"))
mesh_MA = fd.Mesh(os.path.join(dataset_path, "mesh", f"mesh.msh"))
mesh_og = fd.Mesh(os.path.join(dataset_path, "mesh", "mesh.msh"))
mesh_MA = fd.Mesh(os.path.join(dataset_path, "mesh", "mesh.msh"))
mesh_fine = fd.Mesh(
os.path.join(dataset_path, "mesh_fine", f"mesh.msh")
os.path.join(dataset_path, "mesh_fine", "mesh.msh")
)
mesh_model = fd.Mesh(os.path.join(dataset_path, "mesh", f"mesh.msh"))
mesh_model = fd.Mesh(os.path.join(dataset_path, "mesh", "mesh.msh"))
else:
raise Exception(f"{problem_type} not implemented.")

@@ -312,10 +313,10 @@

# High resolution mesh
fd.triplot(mesh_fine, axes=ax[0, 0])
ax[0, 0].set_title(f"High resolution Mesh (100 x 100)")
ax[0, 0].set_title("High resolution Mesh (100 x 100)")
# Orginal low resolution uniform mesh
fd.triplot(mesh_og, axes=ax[0, 1])
ax[0, 1].set_title(f"Original uniform Mesh")
ax[0, 1].set_title("Original uniform Mesh")
# # Adapted mesh (MA)
# fd.triplot(mesh_MA, axes=ax[0, 2])
# ax[0, 2].set_title(f"Adapted Mesh (MA)")
@@ -324,13 +325,13 @@
cb = fd.tripcolor(
u_exact, cmap=cmap, vmax=solution_v_max, vmin=solution_v_min, axes=ax[1, 0]
)
ax[1, 0].set_title(f"Solution on High Resolution (u_exact)")
ax[1, 0].set_title("Solution on High Resolution (u_exact)")
plt.colorbar(cb)
# Solution on orginal low resolution uniform mesh
cb = fd.tripcolor(
u_og, cmap=cmap, vmax=solution_v_max, vmin=solution_v_min, axes=ax[1, 1]
)
ax[1, 1].set_title(f"Solution on uniform Mesh")
ax[1, 1].set_title("Solution on uniform Mesh")
plt.colorbar(cb)
# # Solution on adapted mesh (MA)
# cb = fd.tripcolor(
@@ -341,7 +342,7 @@

# Monitor values
cb = fd.tripcolor(monitor_values, cmap=cmap, axes=ax[2, 0])
ax[2, 0].set_title(f"Monitor values")
ax[2, 0].set_title("Monitor values")
plt.colorbar(cb)

# Error on orginal low resolution uniform mesh
1 change: 0 additions & 1 deletion combine_vis_plot.py
@@ -1,4 +1,3 @@
import matplotlib.pyplot as plt
import pickle

model_names = ["M2N", "M2N", "MRTransformer", "M2T"]