
Add spatial slice parallelization tool #507

Merged: 13 commits, Jan 11, 2019

Conversation

keflavich (Contributor)

This depends on #506. It splits the parallelization machinery and the memory-mapped function application into spatial and spectral components.

@keflavich force-pushed the spatial_parallelization branch 2 times, most recently from 7305442 to c9e412f on November 30, 2018.
@keflavich (Contributor, Author)

I believe this works as is, but we don't use it anywhere yet. More work is needed to get something like reproject working, since the parallelizer presently assumes the input and output cubes are the same shape. Smoothing should be easy to parallelize though...
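The same-shape constraint described above can be sketched as follows. This is an illustrative stand-in, not the spectral-cube API: a same-shape output array plays the role of the memory-mapped file, and a thread pool plays the role of the joblib parallelizer.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def apply_parallel_spatial(cube, function, nproc=2):
    # Same-shape output array standing in for the memory-mapped file;
    # this is exactly the assumption that blocks reproject-style operations.
    out = np.empty_like(cube)

    def _do_slice(ii):
        # Each task operates on one 2-D spatial slice (one channel)
        out[ii] = function(cube[ii])

    with ThreadPoolExecutor(max_workers=nproc) as pool:
        list(pool.map(_do_slice, range(cube.shape[0])))
    return out

cube = np.arange(24.0).reshape(4, 2, 3)
result = apply_parallel_spatial(cube, lambda plane: plane * 2)
```

Smoothing fits this pattern naturally because each output slice has the same shape as its input slice; reproject does not, since it changes the spatial grid.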

@keflavich (Contributor, Author)

@astrofrog @e-koch want to give this a quick review? I think the concept is reasonably solid. I could use tips on how to reasonably test this... we probably want some performance tests to go along with the correctness tests.

@astrofrog should this be booted in favor of dasktropyarrayquantity magic?

@e-koch (Contributor) left a comment


The parallel framework looks good to me. Apart from the tests, we also need some way to update the mask. Can we use a LazyMask on the memmap file?

For the tests, is specifying parallel=True with one core enough to exercise the joblib parts? The unfortunate part is that we probably need each of these methods tested both with and without parallelization.

What did you have in mind for performance tests?
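The with/without-parallel equivalence check raised here can be sketched in pure stdlib Python. The names (`op`, `cube`) are illustrative, not the spectral-cube test suite; a single worker still routes through the parallel code path, which is the point of the parallel=True-with-one-core suggestion.

```python
from concurrent.futures import ThreadPoolExecutor

def op(plane):
    # Stand-in for a smoothing operation applied to one spatial slice
    return [[2.0 * v for v in row] for row in plane]

cube = [[[float(i + j + k) for k in range(3)] for j in range(2)]
        for i in range(4)]

serial = [op(plane) for plane in cube]

# One worker: the parallel machinery runs, but results must match serial
with ThreadPoolExecutor(max_workers=1) as pool:
    parallel = list(pool.map(op, cube))
```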

@keflavich (Contributor, Author)

I don't have a clear plan for performance tests.

What are you thinking about in terms of modifying the mask? I don't think we necessarily need any way to update the mask on the fly within the parallelized loop. We can just update it after the fact. No?

@e-koch (Contributor) commented Jan 6, 2019

I agree with updating the mask after the operation. Much simpler. I can't remember if there was a specific operation that I thought would require updating the mask within the loop.

@keflavich (Contributor, Author)

By default, I don't think we should do anything to the mask, then. We should leave it to the user (or the specific operation) to update the mask after running apply_function_parallel_spatial. At the moment, we don't have any functions using apply_function_parallel_spatial, but we should come up with some so we have some practical use and test cases in hand.
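The update-the-mask-afterward approach agreed on here might look like the following. This is a hedged sketch with hypothetical names (`apply_then_remask`), not the spectral-cube API; it assumes the mask update is simply recomputing validity from the output (e.g. dropping NaNs), with boolean arrays standing in for LazyMask.

```python
import numpy as np

def apply_then_remask(cube, mask, function):
    # Apply the (parallelizable) operation first; the loop body never
    # touches the mask, matching the "update after the fact" plan.
    out = np.array([function(plane) for plane in cube])
    # Then update the mask in one step from the finished output
    new_mask = mask & np.isfinite(out)
    return out, new_mask

cube = np.ones((2, 3, 3))
cube[0, 0, 0] = np.nan
mask = np.ones(cube.shape, dtype=bool)
out, new_mask = apply_then_remask(cube, mask, lambda p: p)
```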

@e-koch (Contributor) commented Jan 7, 2019

Alright. It's probably worth showing the mask updating step in one of the examples, then.

How about converting convolve_to to use the new spatial function? spatial_smooth and spatial_median_smooth could be converted, too. Then we could just expand the existing test cases and docs for those functions.

@keflavich
Copy link
Contributor Author

Good idea. Next time I have a few hours to put into this, I'll do it.

Commit message (truncated): "…to completely exclude these functions from VaryingResolutionSpectralCubes, since you might want to apply stuff along the spectral axis even if the resolution varies (though… probably not if they vary more than a given threshold, so we should raise warnings)."
@coveralls commented Jan 9, 2019

Coverage Status: coverage increased (+0.2%) to 74.887% when pulling 5dcded1 on keflavich:spatial_parallelization into 28ff5c2 on radio-astro-tools:master.

@keflavich (Contributor, Author)

Since the kernel changes with slice in VaryingResolutionSpectralCube.convolve_to, it is not straightforward to convert that function, unfortunately. If you have ideas on how to do that, please chime in, but we'll have to punt that to a new issue for now.
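The difficulty described above is that for a VaryingResolutionSpectralCube, the function applied to channel i depends on i (through that channel's beam), not just on the slice data, so a single shared function can't be broadcast over all slices. A minimal sketch of the per-channel dependence, assuming Gaussian beams and hypothetical numbers:

```python
import math

beams = [1.0, 1.2, 1.5]   # hypothetical per-channel beam sizes
target = 2.0              # common target resolution for convolve_to

def kernel_width(beam, target):
    # For Gaussian beams, widths add in quadrature under convolution,
    # so the kernel taking `beam` to `target` has this width.
    return math.sqrt(target**2 - beam**2)

# A different kernel per channel -- the property that breaks the
# one-function-for-all-slices assumption of the parallelizer.
widths = [kernel_width(b, target) for b in beams]
```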

@keflavich (Contributor, Author)

@e-koch the smooth tests now actually test this. I think it's good to merge?

@e-koch (Contributor) commented Jan 11, 2019

LGTM!

@keflavich keflavich merged commit 10eca91 into radio-astro-tools:master Jan 11, 2019
@keflavich keflavich deleted the spatial_parallelization branch January 11, 2019 03:44