
Implement the aExp and bExp options #40

Closed
sseraj opened this issue Jan 19, 2021 · 4 comments · Fixed by #64
Labels: enhancement, good first issue, help wanted

Comments

sseraj (Collaborator) commented Jan 19, 2021

Description of feature

The aExp and bExp values are currently hard-coded in most of the Fortran code. We should complete the partial implementation to allow users to set these values in Python.
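As a rough sketch of what the user-facing change might look like, assuming the existing USMesh options dictionary is used (the import path, the other option names, and the values shown are assumptions, not the final API):

```python
# Hypothetical sketch: setting aExp and bExp from Python instead of relying
# on the hard-coded Fortran values.
from idwarp import USMesh  # assumed import path

options = {
    "gridFile": "wing_vol.cgns",  # assumed existing option
    "aExp": 3.0,  # exponent on the primary inverse-distance term (illustrative value)
    "bExp": 5.0,  # exponent on the secondary/rotation term (illustrative value)
}
mesh = USMesh(options=options)
```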

Potential solution

This will mainly involve modifying how the interpolation weights are computed in kd_tree.F90. See here for one example.
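For reference, a minimal Python sketch of the kind of inverse-distance weighting involved, with aExp and bExp passed in rather than hard-coded (this follows the standard Luke et al. inverse-distance weight form; the variable names, the alpha blending factor, and the normalization are assumptions, not a transcription of kd_tree.F90):

```python
import numpy as np

def idw_weights(dists, Ldef, alpha, aExp=3.0, bExp=5.0):
    """Inverse-distance weights with user-settable exponents (sketch).

    dists : distances from one volume node to the surface nodes
    Ldef  : reference deformation length scale
    alpha : blending factor between the two terms
    """
    # aExp and bExp control how quickly the surface influence decays with
    # distance; larger exponents localize the deformation more.
    w = (Ldef / dists) ** aExp + (alpha * Ldef / dists) ** bExp
    return w / np.sum(w)  # normalize so the weights sum to one

# Example: weights for one volume node relative to three surface nodes
print(idw_weights(np.array([0.5, 1.0, 2.0]), Ldef=1.0, alpha=0.1))
```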

@ewu63 ewu63 added the enhancement New feature or request label Jan 19, 2021
@ewu63 ewu63 added good first issue Good for newcomers help wanted Extra attention is needed labels Nov 18, 2021
@sseraj sseraj assigned sseraj and unassigned eirikurj Nov 19, 2021
sseraj (Collaborator, Author) commented Nov 19, 2021

I implemented this on my fork here. I will open a PR once I am confident that it's working.

joanibal (Collaborator)

Is there any noticeable performance hit?

sseraj (Collaborator, Author) commented Nov 22, 2021

Yes, here is an example from one of the tests:

Before changes:
(mpi) tests/test_usmesh.py:Test_USmesh_0.test_comesh ... OK (00:00:8.20, 120 MB)

After changes:
(mpi) tests/test_usmesh.py:Test_USmesh_0.test_comesh ... OK (00:00:19.65, 121 MB)

However, the tests exaggerate the difference because the increased cost is only in the computation of the weights, which is done once at the start of the optimization.

The increased cost could be annoying when running the same case repeatedly for debugging, but addressing #37 would help with that.

joanibal (Collaborator)

OK, thanks for the information.

I'm not really worried about 10 seconds every iteration. This feature is definitely very useful.
