
Gaussian source modeling #143

Merged 22 commits on Jun 1, 2020

Conversation

rlbyrne
Contributor

@rlbyrne rlbyrne commented Jan 3, 2019

Allows model catalogs to include Gaussian shape information for sources or source components. These changes do not affect runs unless the keyword gaussian_source_models is set.

@rlbyrne rlbyrne force-pushed the gaussian_source_models branch from 80db84e to eed34d7 Compare January 11, 2019 20:46
@nicholebarry nicholebarry force-pushed the gaussian_source_models branch 2 times, most recently from ce89446 to afb6dc9 Compare February 11, 2020 23:15
@nicholebarry
Contributor

The gaussian source models branch is in a reasonable state for merging. However, I expected the results to be better, so there may be something missing. Comments welcome.

The general setup of implementing the Gaussian sources is to simulate a point source of the integrated flux, transform it to uv space, and then convolve the plane wave of the point source with the Fourier transform of the image-space Gaussian (which is analytically calculable). There are some nuances with projection effects on the orthoslant plane.
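The uv-space step described here can be sketched in a few lines. This is an editorial illustration, not FHD code (the function name and argument conventions are mine); as the later comments in this thread clarify, the uv-plane operation is a multiplication of the plane wave by the analytic Fourier transform of the image-space Gaussian:

```python
import numpy as np

def gaussian_source_uv(u, v, flux, l0, m0, sigma_l, sigma_m):
    """Visibility of a Gaussian source: the plane wave of a point source
    at direction cosines (l0, m0), windowed by the analytic Fourier
    transform of an image-space Gaussian with widths sigma_l, sigma_m.
    The envelope is the FT of a unit-integral Gaussian, so the full
    integrated flux is recovered at u = v = 0."""
    plane_wave = flux * np.exp(-2j * np.pi * (u * l0 + v * m0))
    envelope = np.exp(-2.0 * np.pi**2 * (sigma_l**2 * u**2 + sigma_m**2 * v**2))
    return plane_wave * envelope
```

At the uv origin the envelope is 1, so the integrated flux is preserved; at long baselines the envelope damps the plane wave, which is exactly the resolved-source behavior being modeled.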

Since the sky is curved, some pixels in the orthoslant projection contain more of the sky. Here is a calculation of the degrees per pixel in the orthoslant projection.
[Screenshot: calculation of the degrees per pixel in the orthoslant projection]
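As an illustration of that geometry (my reading of the orthoslant/orthographic projection, not code from the repository): pixels sample equal steps in l = sin(za), so the sky angle covered per pixel grows as 1/cos(za) away from zenith:

```python
import numpy as np

def degrees_per_pixel(degpix_zenith, zenith_angle_deg):
    """Sky angle covered by one orthoslant pixel.  In an orthographic
    projection the pixel grid is uniform in l = sin(za), so the angular
    extent per pixel at zenith angle za is the zenith pixel scale
    divided by cos(za): pixels far from zenith contain more sky."""
    za = np.radians(zenith_angle_deg)
    return degpix_zenith / np.cos(za)
```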

Size
In order to get the shape of the convolving Gaussian right, I had to calculate the extent of the Gaussian as if it were at zenith. I'm not quite sure why. To achieve this, I calculated the standard deviations in degrees and then converted to orthoslant pixels via obs.degpix (= !RaDeg/(kbinsize*dimension)). The Gaussians I get in the final image space were checked against Aegean and are correct (up to the accuracy of Aegean's source-finding, so it's more of an order-of-magnitude check).
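A minimal sketch of that unit conversion, assuming the stated pixel-scale formula obs.degpix = !RaDeg/(kbinsize*dimension); the function name is mine:

```python
import numpy as np

RADEG = 180.0 / np.pi  # IDL's !RaDeg constant, degrees per radian

def sigma_to_pixels(sigma_deg, kbinsize, dimension):
    """Convert a Gaussian standard deviation from degrees to orthoslant
    pixels, using the FHD pixel scale degpix = !RaDeg / (kbinsize * dimension),
    where kbinsize is the uv cell size in wavelengths and dimension is
    the image size in pixels."""
    degpix = RADEG / (kbinsize * dimension)
    return sigma_deg / degpix
```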

Flux
The orthoslant projection also modifies the flux. I have to decrease the point-source flux by the area ratio between the projected Gaussian and the Gaussian at zenith, (sigma_projected_y * sigma_projected_x) / (sigma_zenith_y * sigma_zenith_x). Again, I'm not entirely sure why. This is hard to test on the final FITS images, since those are in apparent flux. If I sum up the Gaussians and apply a beam^2 value at the center of each Gaussian, it seems as though the flux is slightly suppressed everywhere except zenith. Again, it's hard to tell whether that is a product of the assumptions made to get from apparent flux to flux, or a real result.
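A sketch of the flux correction as described (hypothetical function name; whether the area ratio divides or multiplies the flux depends on convention, so treat this as my reading of the comment, not the implementation):

```python
def deproject_flux(flux, sigma_proj_x, sigma_proj_y, sigma_zen_x, sigma_zen_y):
    """Decrease the point-source flux by the area ratio between the
    projected Gaussian and the Gaussian at zenith.  Away from zenith the
    projected sigmas grow, the ratio exceeds 1, and the flux is scaled
    down accordingly."""
    area_ratio = (sigma_proj_y * sigma_proj_x) / (sigma_zen_y * sigma_zen_x)
    return flux / area_ratio
```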

Results
Here are the final results, tested on Fornax, where Fornax is below the half power point of the beam. This is definitely a stress test.

Top left is calibrated data, top right is the model, and bottom left is the residual
[Screenshot: calibrated data (top left), model (top right), and residual (bottom left)]

This is the same, but with the color bar in saturation to highlight the residuals.
[Screenshot: the same images with the color bar saturated to highlight the residuals]

The residuals are on the order of 10%, which is worse than I expected. In particular, the model oversubtracts on the edges of Fornax and undersubtracts in the interior, which indicates that the size of the Gaussians could be a little tighter. I did investigate the effects of refraction on the size/flux, but they are quite minimal.

Normalizations of the Gaussian DFT follow Jack's paper.

@nicholebarry
Contributor

Issue #211 has been opened to track that this is still off by ~10%. I suggest merging for now until someone can come along and figure out the issue. This does not affect point-source models.

@mwilensky768

@nicholebarry

Just checking my understanding, since I have not kept up with this discussion. Basically, the setup confuses me. If I take a point source and smear it out with a Gaussian in image space, this is equivalent to convolving a Gaussian centered at zenith with a Dirac delta function located at the position of the point source. That would indicate to me that in the Fourier domain, I should multiply the plane wave of the point source by the Fourier transform of the image-space Gaussian. Based on your March 25 comment, the reverse is being done, i.e. convolving in uv rather than multiplying.

Perhaps my premise about what the Gaussian source modeling setup ought to be is incorrect. Is it not supposed to represent an extended source as a sum of Gaussians in image space? I scratched some math out and found that if you convolve a plane wave in uv with a Gaussian and then IFT back, you get a point source in image space, but the amplitude is damped by a Gaussian (and this Gaussian is actually centered at zenith, so the damping is strong far from zenith) rather than smeared by a Gaussian located at the point source. If there were lots of components, I could see how a sum of the damped components might look like a convolution, but be slightly off. Does this change anything?
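A quick 1-D numpy check of the point being debated here: multiplying a point source's spectrum by a centered Gaussian in the Fourier domain produces, after inverse transforming, a Gaussian smeared at the source position with the total flux preserved, not a damped delta at the origin. This is just the convolution theorem, sketched for illustration:

```python
import numpy as np

# 1-D toy model: a unit point source offset from the field center
n = 256
x0, sigma = 40, 6.0          # source offset and Gaussian width, in pixels
image = np.zeros(n)
image[n // 2 + x0] = 1.0

# Multiply the source's plane wave by the FT of a *centered* Gaussian
freqs = np.fft.fftfreq(n)
uv = np.fft.fft(np.fft.ifftshift(image))
uv *= np.exp(-2.0 * np.pi**2 * sigma**2 * freqs**2)
smeared = np.fft.fftshift(np.fft.ifft(uv)).real

peak_index = int(np.argmax(smeared))   # the Gaussian lands at the source position
total_flux = smeared.sum()             # the integrated flux is preserved
```

The peak ends up at the source pixel (n // 2 + x0), the peak height drops from 1 to roughly 1/(sigma * sqrt(2 pi)), and the sum over the image stays 1: smearing by a shifted Gaussian, not zenith-centered damping.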

@rlbyrne
Contributor Author

rlbyrne commented May 26, 2020

@mwilensky768 Not to speak for Nichole here, but I think she misspoke when she said "convolve" in the March 25 comment. The code calculates a plane wave in UV space and then windows (multiplies) by a Gaussian.

@mwilensky768

@rlbyrne

Thanks! I think I see the multiply happening in source_dft which clears up my confusion.

@mwilensky768

OK, if people are happy to merge this now given the very reasonable performance, I think there should just be a small addition to the documentation that permalinks to this issue for as long as it is relevant. That way users can understand what to expect if they try to use this part of the code.

Just thinking on the side, no need to investigate this before merging:

Units of area in image space can be a source of confusion when imaging. Is it possible that there are two units (say, pixel size and beam) that are very similar in size, and that the normalization is following one unit when it should follow the other?

@rlbyrne
Contributor Author

rlbyrne commented May 26, 2020

We've thought carefully about normalization. One confusing point on normalization: the Gaussian models presented here have an RA extent that is given in what I would call "delta RA" units, corresponding to the change of RA coordinates (in degrees) across the extent of the source. This is not equivalent to the true size of the source in degrees. This is documented in the code.
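For illustration only (the exact convention is documented in the code itself): the usual spherical correction between an RA-coordinate interval and a true angular size on the sky is a factor of cos(Dec):

```python
import numpy as np

def delta_ra_to_true_extent(delta_ra_deg, dec_deg):
    """Convert an extent expressed in "delta RA" units (the change in RA
    coordinate, in degrees, across the source) into a true angular size.
    At declination dec, a fixed RA interval subtends an angle smaller by
    a factor of cos(dec), so the two measures agree only on the
    celestial equator.  This is the standard spherical correction,
    sketched as an illustration of why the two units differ."""
    return delta_ra_deg * np.cos(np.radians(dec_deg))
```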

@mwilensky768

Great. Sounds like a mystery for future work.

If we can just get some docs (maybe in the keyword dictionary, or examples) that point to this PR, I'd be happy to approve.

@mwilensky768 mwilensky768 left a comment

Just leaving the official review comment here. Requests are mentioned in previous comments.

@nicholebarry nicholebarry requested a review from mwilensky768 May 31, 2020 00:55
mwilensky768 previously approved these changes Jun 1, 2020
@mwilensky768 mwilensky768 left a comment

Excellent, thank you!

@nicholebarry nicholebarry merged commit f395dea into master Jun 1, 2020
@nicholebarry nicholebarry deleted the gaussian_source_models branch June 1, 2020 03:30