From e41c450d78e962b9dc7cdd5206e9c66c55745959 Mon Sep 17 00:00:00 2001
From: mingrui
Date: Sat, 12 Oct 2024 00:54:09 +0800
Subject: [PATCH] update readme

---
 README.md | 28 +++++-----------------------
 1 file changed, 5 insertions(+), 23 deletions(-)

diff --git a/README.md b/README.md
index 117aa4d..0953412 100644
--- a/README.md
+++ b/README.md
@@ -10,20 +10,9 @@ Additional information: [[Project page]](https://erizmr.github.io/UM2N/)

-
-
-
-
-
-
 ## πŸ”Ž Abstract

 Solving complex Partial Differential Equations (PDEs) accurately and efficiently is an essential and challenging problem in all scientific and engineering disciplines. Mesh movement methods provide the capability to improve the accuracy of the numerical solution without increasing the overall mesh degree of freedom count. Conventional sophisticated mesh movement methods are extremely expensive and struggle to handle scenarios with complex boundary geometries. However, existing learning-based methods require re-training from scratch given a different PDE type or boundary geometry, which limits their applicability, and also often suffer from robustness issues in the form of inverted elements. In this paper, we introduce the Universal Mesh Movement Network (UM2N), which -- once trained -- can be applied in a non-intrusive, zero-shot manner to move meshes with different size distributions and structures, for solvers applicable to different PDE types and boundary geometries. UM2N consists of a Graph Transformer (GT) encoder for extracting features and a Graph Attention Network (GAT) based decoder for moving the mesh. We evaluate our method on advection and Navier-Stokes based examples, as well as a real-world tsunami simulation case. Our method outperforms existing learning-based mesh movement methods in terms of the benchmarks described above. In comparison to the conventional sophisticated Monge-AmpΓ¨re PDE-solver based method, our approach not only significantly accelerates mesh movement, but also proves effective in scenarios where the conventional method fails.

-
 The latest test status:
@@ -42,7 +31,11 @@ Just navigate to **project root** folder, open terminal and execute the
 ```
 This will install [Firedrake](https://www.firedrakeproject.org/download.html)
 and [Movement](https://github.com/mesh-adaptation/movement) under the `install`
-folder, as well as the `WarpMesh` package.
+folder, as well as the `WarpMesh` package. Note that the PyTorch installed this way is a CPU-only build.
+
+- GPU (CUDA) support
+For GPU support, run `install_gpu.sh {CUDA_VERSION}`,
+e.g. `install_gpu.sh 118` for CUDA version 11.8.

 ### Step-by-step approach

@@ -172,14 +165,3 @@ The documentation is generated by Sphinx. To build the documentation, under the
 └── README.md (Project summary and useful information)
 ```
-
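The abstract in the patched README describes UM2N as a Graph Transformer (GT) encoder paired with a Graph Attention Network (GAT) based decoder that moves the mesh nodes. Purely as an illustrative sketch of that encoder/decoder split, and not the code shipped in this repository, the shape of such a model could look as follows; the use of PyTorch Geometric, the feature and hidden dimensions, and the 2-D coordinate output are all assumptions.

```python
# Illustrative sketch only: a graph-transformer encoder followed by a
# GAT decoder that predicts new 2-D coordinates for every mesh node.
# Layer widths and the input feature size are placeholder assumptions.
import torch
from torch_geometric.nn import TransformerConv, GATConv


class EncoderDecoderSketch(torch.nn.Module):
    def __init__(self, in_dim: int = 4, hidden_dim: int = 64):
        super().__init__()
        self.encoder = TransformerConv(in_dim, hidden_dim)  # GT-style encoder
        self.decoder = GATConv(hidden_dim, 2)                # GAT-based decoder

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.encoder(x, edge_index))  # per-node features
        return self.decoder(h, edge_index)            # predicted node positions (N, 2)
```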
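For the installation note added above, a quick way to see which PyTorch build ended up in the environment after running `install.sh` or `install_gpu.sh {CUDA_VERSION}` is to query CUDA availability directly; this snippet assumes nothing beyond `torch` being importable.

```python
# Post-install sanity check: reports whether the installed PyTorch build
# is CPU-only or can actually see a CUDA device.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU device:", torch.cuda.get_device_name(0))
```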