
WIP: Iterative IC generation #66

Open · wants to merge 69 commits into base: master
Conversation

@mladenivkovic (Collaborator)

Heya

Here's a first working version of the IC-making algorithm. I extended it to work in 1D, 2D, and 3D, and with arbitrary box sizes.
It's in a reviewable state, so please have a look when you get the time, and let me know what you think and what needs improving.
I haven't finished the documentation yet, but I left a script in a new /tmp/ directory that you can run if you want to.

To-dos left:

  • finish testing
  • remove printing and plotting stuff (still needed for testing)
  • proper documentation (in progress)
  • unit tests
  • remove /tmp/ directory when we don't need it any longer

@mladenivkovic mladenivkovic requested a review from JBorrow July 27, 2020 02:08
@mladenivkovic mladenivkovic added the enhancement (New feature or request) label Jul 27, 2020
@JBorrow (Member) left a comment
I think we should move to an object-oriented approach here, as it would fit well.
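For reference, a minimal sketch of what such an object-oriented interface could look like, based on the usage that appears later in this thread; the attribute names and the total-particle-count computation are assumptions, not the PR's actual implementation:

class ParticleGenerator:
    def __init__(self, rho, boxsize, unit_system, number_of_particles, ndim):
        self.rho = rho                  # callable: rho(x, ndim) -> model density
        self.boxsize = boxsize          # unyt array, one extent per dimension
        self.unit_system = unit_system  # unyt unit system for the output ICs
        self.ndim = ndim
        # number_of_particles appears to be per dimension in the later usage
        # example, so the total is assumed to be its ndim-th power
        self.npart = number_of_particles ** ndim

    def initial_setup(self):
        """Place particles in an initial configuration inside the box."""

    def run_iteration(self):
        """Iteratively displace particles until they trace self.rho."""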

import numpy as np
from typing import Union


def W_cubic_spline(r: np.ndarray, H: Union[float, np.ndarray]):
@JBorrow (Member)
I don't feel comfortable having so many different types of kernels like this.

We should centralise them (the ones used for visualisation, and the ones used for this) somewhere.

The one problem with that is inlining them within the compiled (future) versions of these functions. We'll have to think about that later, but let's leave the kernels here for now - we will deal with that last.
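As a concrete example, a centralised kernel module could expose something like the following. This is a sketch using the standard cubic spline with compact support radius H and the Dehnen & Aly (2012) normalisation constants; it is not necessarily the exact form in IC_kernel.py:

import numpy as np

# cubic spline normalisation constants per dimension (Dehnen & Aly 2012)
CUBIC_SPLINE_SIGMA = {1: 8.0 / 3.0, 2: 80.0 / (7.0 * np.pi), 3: 16.0 / np.pi}

def W_cubic_spline(r: np.ndarray, H: float, ndim: int = 3) -> np.ndarray:
    """Cubic spline kernel at distances r, with compact support radius H."""
    q = r / H
    # (1 - q)^3 - 4 (1/2 - q)^3 on [0, 1/2), (1 - q)^3 on [1/2, 1), 0 beyond
    w = (1.0 - q) ** 3 - 4.0 * (0.5 - q) ** 3 * (q < 0.5)
    return CUBIC_SPLINE_SIGMA[ndim] * np.where(q < 1.0, w, 0.0) / H ** ndim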

Inline review threads on swiftsimio/initial_conditions/IC_kernel.py and swiftsimio/initial_conditions/generate_particles.py (resolved).
@mladenivkovic (Collaborator, Author)

The ParticleGenerator() now has an internal random number generator that doesn't affect the global numpy one outside of it.
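A quick check of that isolation, with arbitrary seeds (not code from the PR):

import numpy as np

np.random.seed(123)
a = np.random.random(3)

np.random.seed(123)
rng = np.random.default_rng(42)  # per-instance generator, as in ParticleGenerator
rng.random(1000)                 # draw heavily from the isolated generator
b = np.random.random(3)          # the global stream is unaffected

assert np.allclose(a, b)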

@JBorrow (Member) commented Aug 24, 2020

I currently get a crash with the following input:

"""
Makes Sod Shock initial conditions using the particle generator.
"""

import numpy as np
import unyt

from swiftsimio.initial_conditions import ParticleGenerator

def rho(x, ndim):
    """
    Density field in the initial SodShock.
    """

    output = np.ones(len(x), dtype=np.float32)
    output[x[:, 0] > 1.0] = 0.125

    return output

boxsize = unyt.unyt_array([2.0, 1.0, 1.0], "cm")
unit_system = unyt.unit_systems.cgs_unit_system
number_of_particles = 32
ndim = 3

generator = ParticleGenerator(
    rho,
    boxsize,
    unit_system,
    number_of_particles,
    ndim
)

generator.initial_setup()
generator.run_iteration()

What am I doing wrong?

Error:

Traceback (most recent call last):
  File "make_sod_ics.py", line 34, in <module>
    generator.run_iteration()
  File "/Users/mphf18/Documents/GitHub/swiftsimio/swiftsimio/initial_conditions/generate_particles.py", line 1359, in run_iteration
    self.iteration_step(iteration)
  File "/Users/mphf18/Documents/GitHub/swiftsimio/swiftsimio/initial_conditions/generate_particles.py", line 1111, in iteration_step
    h, rho = self.compute_h_and_rho()
  File "/Users/mphf18/Documents/GitHub/swiftsimio/swiftsimio/initial_conditions/generate_particles.py", line 1048, in compute_h_and_rho
    tree = KDTree(self.x, boxsize=boxsize)
  File "ckdtree.pyx", line 539, in scipy.spatial.ckdtree.cKDTree.__init__
ValueError: Some input data are greater than the size of the periodic box.
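For context: scipy's periodic cKDTree requires every coordinate to lie inside [0, boxsize) along each axis, so this error means at least one particle sits outside the box. A minimal reproduction with made-up coordinates:

import numpy as np
from scipy.spatial import cKDTree as KDTree

x = np.array([[2.5, 0.5, 0.5]])     # first coordinate exceeds the 2.0 cm box
KDTree(x, boxsize=[2.0, 1.0, 1.0])  # raises the ValueError quoted above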

@mladenivkovic (Collaborator, Author)

A small error snuck in over the past few commits. This should fix it; I was able to run a few steps with it.

@JBorrow (Member) commented Aug 24, 2020

Excellent, thanks - that fixes it. At the moment it's quite slow, but I have a 10x speedup in my version, which is now bounded by the tree-search speed.

@JBorrow (Member) commented Aug 25, 2020

We build the tree from self.x twice in iteration_step - once for computing h and rho, and a second time for use in the offset calculation. Is there a specific reason for this?

@mladenivkovic (Collaborator, Author)

Yes. The tree doesn't track changes in self.x, so every time you update self.x, e.g. when redistributing a handful of particles, or moving all of them, you'll get bogus results if you don't rebuild the tree.
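The pattern, as a small illustration (variable names made up for the example):

import numpy as np
from scipy.spatial import cKDTree as KDTree

boxsize = 1.0
x = np.random.random((100, 3))

tree = KDTree(x, boxsize=boxsize)   # the spatial index is built once, here
x[:10] = np.random.random((10, 3))  # e.g. redistribute a handful of particles
# queries on `tree` would still use the old positions, so rebuild it:
tree = KDTree(x, boxsize=boxsize)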

@JBorrow (Member) commented Aug 25, 2020

Yes, but on line 1117 of generate_particles.py if redistribute is False, we've just built the tree twice for no reason.

@mladenivkovic (Collaborator, Author) commented Aug 25, 2020

True, but I didn't think that would be a problem; typically the dumps happen only every ~100 iterations. If you think it'll be a bottleneck, it's a simple fix:

if not dump_now or redistribute:
    # skip this rebuild when we just dumped and did not redistribute
    tree = KDTree(...)

@JBorrow (Member) commented Aug 25, 2020

Right, but don't we only need to rebuild the tree if there is a redistribute?

@mladenivkovic (Collaborator, Author)

Yes, but the dumping needs the smoothing lengths first. So if you dumped before, don't rebuild the tree unless we also redistributed in the same step.

@JBorrow (Member) commented Aug 25, 2020

How frequent are redistributes?

@mladenivkovic (Collaborator, Author)

As frequent as you set them to be. I don't recommend redistributing more often than every 20 iterations.

@JBorrow (Member) commented Aug 25, 2020

Ok, but then we're building the tree twice when we don't need to on almost every step, right?

@JBorrow (Member) commented Aug 25, 2020

I only ask because 10% of the run-time is currently spent in compute_h_and_rho.
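For anyone reproducing this, a standard-library way to get that breakdown (the generator object follows the example script above):

import cProfile

# sorts by cumulative time, so compute_h_and_rho shows up near the top
cProfile.run("generator.run_iteration()", sort="cumtime")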

@mladenivkovic (Collaborator, Author)

What do you mean, every step? The default values are redistribution every 20th step and a dump every 50th step; is that almost every step for you?

@JBorrow (Member) commented Aug 25, 2020

Oh :). I see how it works now. It's odd that we spend so much time in that function; I will look at optimising it some more.

@mladenivkovic (Collaborator, Author)

Are we giving up on this?

@JBorrow (Member) commented Nov 29, 2020

I want to come back to it, as it will be an excellent feature to have... but it needs to be significantly faster.
