[BUG] Chunking of light-cone #37

Open
BradGreig opened this issue Apr 24, 2020 · 6 comments
Labels: context: mcmc, type: feature: code (New ability in the code, user-side)
Milestone: v1.1

Comments

@BradGreig (Member)

Describe the bug:
When using the Likelihood1DPowerLightcone module, chunking the light-cone with the user-defined nchunks can result in asymmetric shapes for the chunks. This occurs because the light-cone is broken up into pieces defined by round(brightness_temp.n_slices / self.nchunks).
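
For illustration, here is a minimal standalone sketch of this style of chunking. The variable names follow the issue text, but the exact slicing (in particular how any leftover slices are handled) is an assumption, not a copy of the Likelihood1DPowerLightcone implementation:

```python
# Sketch only: chunk a lightcone into pieces of depth round(n_slices / nchunks).
n_slices = 1000   # total line-of-sight slices in the lightcone
nchunks = 6       # user-defined number of chunks
HII_DIM = 100     # transverse box size in cells, for comparison

chunk_size = round(n_slices / nchunks)  # 167
starts = list(range(0, n_slices, chunk_size))
chunks = [(start, min(start + chunk_size, n_slices)) for start in starts]

print(chunks)
# [(0, 167), (167, 334), (334, 501), (501, 668), (668, 835), (835, 1000)]
# Chunk depths are 167, 167, 167, 167, 167, 165: the final chunk differs from
# the rest, and no chunk is HII_DIM slices deep, i.e. none is a co-moving cube.
```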

Expected behavior:
To me, the light-cone should be broken up into equal co-moving cubes, meaning that this should be replaced with self.user_params.HII_DIM instead.

BradGreig added the bug label on Apr 24, 2020
@BradGreig (Member, Author)

Hey @steven-murray, I feel like we have had this discussion in the past, but I can't find any record of it. Locally, my code has always created the chunk list as chunk_indices = list(range(0, LC.n_slices, user_params.HII_DIM)). Looking back through the history, it seems to have always been the way it is currently (@jaehongpark00 pointed this out to me).

Is there any reason why it is as it is currently? As I mention above, I think it should be as I suggest, to avoid many potential issues. In the past, I have always broken the chunks up into equal co-moving distance cubes (it just makes things a lot easier).
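
For comparison, a minimal sketch of the equal co-moving chunking described above; the handling of any leftover slices at the end of the light-cone is an illustrative assumption:

```python
# Sketch only: each chunk is HII_DIM slices deep, i.e. one box length
# along the line of sight.
HII_DIM = 100     # transverse box size in cells
n_slices = 1050   # total line-of-sight slices in the lightcone

chunk_indices = list(range(0, n_slices, HII_DIM))  # [0, 100, 200, ..., 1000]
chunks = [(start, min(start + HII_DIM, n_slices)) for start in chunk_indices]

print(chunks[:3], chunks[-1])
# [(0, 100), (100, 200), (200, 300)] (1000, 1050)
# Every chunk except (possibly) the last is an HII_DIM^3 cube, so a mock
# observation only needs HII_DIM to be chunked consistently with the MCMC.
```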

I am not going to make the change yet, in case I have overlooked a reason for why it is implemented as it is.

@steven-murray (Member)

Hi @BradGreig -- what are the potential issues that could arise? I think I implemented it like this just to make it as general as possible; I couldn't think of any reason off the top of my head why the chunks had to be cubes. In practice, observations themselves aren't cubes, so I didn't want to restrict ourselves in case we want to reproduce that.

I can always set the default option to be cubes? Or are there big problems with not having cubes?

@BradGreig (Member, Author)

Hi @steven-murray, technically there are no issues, for the reasons you point out. However, I feel that doing it this way leaves more room for user error. The user must construct their mock observation to be consistent with the expected chunking of the light-cone in the MCMC. If the expected chunking is not simply known beforehand (i.e. it is a function of the user's setup), then each time the user will need to take care to construct the mock according to how the MCMC is going to chunk the light-cone.

If the default is to be equal comoving size, then the user only needs to chunk the mock according to the HII_DIM of the proposed MCMC setup.

Chunking incorrectly would result in cumulative offsets in the redshifts of the chunked light-cones, meaning the chunks could potentially sample different evolution in the 21-cm signal. This is more likely to occur with the current setup than with chunks fixed to an equal co-moving box size.

@steven-murray (Member)

Thanks @BradGreig.

This is partially mitigated by the LikelihoodComputationChain having a simulate_mock method, which will compute a mock exactly as the input likelihood model requires. Nevertheless, I see your point -- we do want to make it as easy as possible for the user to align the mock with the likelihood. Also, the current setup does not allow chunking that omits some of the lightcone, which may be useful in a real analysis where non-adjacent spectral windows are used.

The clearest solution may be to use equal comoving-space chunks by default, but also accept a parameter specifying the exact redshift range of each chunk as a list of 2-tuples. This makes the chunking completely unambiguous and forces the user to think about what they're doing.
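
As a rough illustration of what such an interface could look like (the parameter name chunk_redshifts, the helper function, and the per-slice redshift array lc_redshifts are all hypothetical, not the existing py21cmmc API):

```python
import numpy as np

def chunk_slices(n_slices, HII_DIM, lc_redshifts=None, chunk_redshifts=None):
    """Return (start, stop) slice indices for each chunk along the line of sight.

    By default, each chunk is one co-moving box length (HII_DIM slices) deep.
    If chunk_redshifts is given as a list of (z_min, z_max) 2-tuples, each tuple
    selects the slices whose redshift falls in that range, so chunks may be
    non-adjacent and may omit parts of the lightcone.
    """
    if chunk_redshifts is None:
        starts = range(0, n_slices, HII_DIM)
        return [(s, min(s + HII_DIM, n_slices)) for s in starts]

    chunks = []
    for z_min, z_max in chunk_redshifts:
        idx = np.where((lc_redshifts >= z_min) & (lc_redshifts < z_max))[0]
        chunks.append((int(idx[0]), int(idx[-1]) + 1))
    return chunks

# Toy usage: a lightcone whose slice redshifts run from z = 6 to z = 12.
z = np.linspace(6.0, 12.0, 1050)
print(chunk_slices(1050, 100))                                        # equal co-moving chunks
print(chunk_slices(1050, 100, z, chunk_redshifts=[(6, 7), (9, 10)]))  # explicit spectral windows
```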

@BradGreig (Member, Author)

Hi @steven-murray, yes, I agree entirely. The default should be equal comoving-space chunks, with an option to enable alternatives. The 2-tuple approach should cover any other variation the user might require, which would be a nice feature.

@steven-murray (Member)

Great -- since this is API-level, we'll try to get it in before v1.

steven-murray added the type: feature: code (New ability in the code, user-side) label and removed the enhancement label on Sep 30, 2020
steven-murray added this to the v1.1 milestone on Jan 28, 2021