Commit

Ran bib cleaning script
Siddharth Mishra-Sharma committed Sep 4, 2019
1 parent 1b7ff60 commit 5593af2
Showing 2 changed files with 11 additions and 8 deletions.
12 changes: 7 additions & 5 deletions draft/lensing-lfi.bib
@@ -1760,10 +1760,10 @@ @article{1908.06983
{Treu}, Tommaso and {Du}, Xiaolong and {Benson}, Andrew},
eid = {arXiv:1908.06983},
eprint = {1908.06983},
-journal = {arXiv e-prints},
+journal = {},
keywords = {Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Astrophysics of Galaxies},
month = {Aug},
-pages = {arXiv:1908.06983},
+pages = {},
primaryclass = {astro-ph.CO},
title = {{Warm dark matter chills out: constraints on the halo mass function and the free-streaming length of dark matter with 8 quadruple-image strong gravitational lenses}},
year = {2019}
@@ -2789,16 +2789,18 @@ @article{2019arXiv190507488G
}

@article{Alsing:2017var,
+adsnote = {Provided by the SAO/NASA Astrophysics Data System},
+adsurl = {https://ui.adsabs.harvard.edu/abs/2018MNRAS.476L..60A},
archiveprefix = {arXiv},
-author = {Alsing, Justin and Wandelt, Benjamin},
-bdsk-url-1 = {https://doi.org/10.1093/mnrasl/sly029},
+author = {{Alsing}, Justin and {Wandelt}, Benjamin},
doi = {10.1093/mnrasl/sly029},
eprint = {1712.00012},
journal = {\mnras},
keywords = {methods: data analysis, Astrophysics - Cosmology and Nongalactic Astrophysics},
+month = {May},
+number = {1},
pages = {L60-L64},
primaryclass = {astro-ph.CO},
-slaccitation = {%%CITATION = ARXIV:1712.00012;%%},
title = {{Generalized massive optimal data compression}},
volume = {476},
year = {2018}
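The commit message says a bib cleaning script produced these changes, but the script itself is not part of the commit. As an illustration only, a minimal Python sketch that reproduces the kinds of edits visible in the diff above (blanking the ADS arXiv-placeholder `journal`/`pages` fields and dropping reference-manager fields such as `bdsk-url-1` and `slaccitation`) could look like this; the rule set is inferred from the diff, not taken from the actual script:

```python
import re

# Fields the diff shows being dropped; inferred, not from the real script.
DROP_FIELDS = ("bdsk-url-1", "slaccitation")

def clean_entry(entry):
    """Blank ADS arXiv-placeholder fields and drop tool-specific ones."""
    # ADS exports unpublished preprints with journal = {arXiv e-prints}
    # and pages = {arXiv:NNNN.NNNNN}; blank both placeholders.
    entry = re.sub(r"(journal\s*=\s*)\{arXiv e-prints\}", r"\1{}", entry)
    entry = re.sub(r"(pages\s*=\s*)\{arXiv:[\d.]+\}", r"\1{}", entry)
    # Remove reference-manager bookkeeping fields entirely.
    for field in DROP_FIELDS:
        entry = re.sub(
            rf"^\s*{re.escape(field)}\s*=\s*\{{.*\}},?\s*\n",
            "", entry, flags=re.MULTILINE)
    return entry
```

A real cleaning pass would also re-sort fields alphabetically and pull canonical metadata (`adsurl`, `month`, `number`) from ADS, as the `Alsing:2017var` hunk suggests.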
7 changes: 4 additions & 3 deletions draft/lensing-lfi.tex
@@ -364,7 +364,6 @@ \subsection{Calibration}

We will show results both without and with calibration. Where calibration is used, it is based on histograms with 50 bins, with bin boundaries determined automatically to match the distribution of likelihood ratios.
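The paper specifies 50 histogram bins with boundaries matched to the distribution of likelihood ratios. One natural way to realize that is quantile-based (equal-probability) bin edges; the helper below is an illustrative sketch under that assumption, not the paper's actual implementation:

```python
import numpy as np

def quantile_bin_edges(log_ratios, n_bins=50):
    """Histogram bin edges at equal-probability quantiles, so the bin
    boundaries follow the empirical distribution of the estimated
    likelihood ratios rather than being uniformly spaced."""
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)
    return np.quantile(np.asarray(log_ratios), quantiles)
```

Each bin then receives roughly the same number of samples, which keeps the per-bin calibration statistics comparable across the whole range of ratios.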

-
\subsection{Inference}
\label{sec:lfi-inference}

@@ -396,7 +395,7 @@ \subsection{Inference}
\label{eq:bayesian_post}
\end{align}
%
-where $\{x_i\}$ is the set of observed lens images and $p(\stattheta)$ is the prior on the parameters of interest, which may be different from the proposal distribution $\pi(\stattheta)$ used during the generation of training data. The posterior can thus be directly calculated given an estimator $\hat{r}$, provided that the space of the parameters of interest is low-dimensional enough to calculate the integral, or with MCMC~\cite{Hermans:2019ioj} or variational inference techniques otherwise.
+where $\{x_i\}$ is the set of observed lens images and $p(\stattheta)$ is the prior on the parameters of interest, which may be different from the proposal distribution $\pi(\stattheta)$ used during the generation of training data. The posterior can thus be directly calculated given an estimator $\hat{r}$, provided that the space of the parameters of interest is low-dimensional enough to calculate the integral, or with MCMC~\citep{Hermans:2019ioj} or variational inference techniques otherwise.

\bigskip
While our approach to inference is strongly based on the ideas in \citet{1805.00013, 1805.00020, 1805.12244, Stoye:2018ovl}, there are some novel features in our analysis that we would like to highlight briefly. Unlike in those earlier papers, we use a marginal model based on the proposal distribution $\pi(\stattheta)$ as reference model in the denominator of the likelihood ratio, which substantially improves the numerical stability of the algorithm. This choice also allows us to include the ``flipped'' terms with $s'$ and $g'$ in the loss function in Equation~\eqref{eq:alices_loss}; we found that this new, improved version of the ALICES loss improves the sample efficiency of our algorithms. Both of these improvements are inspired by \citet{Hermans:2019ioj}. Finally, this is the first application of the ``gold mining'' idea to image data, the first combination with a convolutional network architecture, and the first use for Bayesian inference. Although machine learning-based methods have previously been proposed for inferring strong lensing host parameters~\citep{1708.08842,1708.08843,1808.00011} and for lensed source reconstruction~\citep{1901.01359}, this paper represents the first proposed application of machine learning for dark matter substructure inference in strong lenses and, as far as we are aware, for substructure inference in general.
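For a low-dimensional parameter space, the posterior described above, $p(\stattheta \mid \{x_i\}) \propto p(\stattheta) \prod_i \hat{r}(x_i \mid \stattheta)$, can be evaluated directly on a grid. The sketch below illustrates this step; `log_r_hat` stands in for the trained (convolutional) ratio estimator and is a toy callable here, not the paper's network:

```python
import numpy as np

def log_posterior_grid(log_r_hat, images, theta_grid, log_prior):
    """Posterior over a grid of parameter points, combining per-image
    likelihood-ratio estimates:
        p(theta | {x_i}) ∝ p(theta) * prod_i r_hat(x_i | theta)."""
    log_post = np.array([
        log_prior(theta) + sum(log_r_hat(x, theta) for x in images)
        for theta in theta_grid
    ])
    log_post -= log_post.max()   # subtract the max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()     # normalize over the grid
```

With a flat prior and a toy Gaussian log-ratio $-\tfrac{1}{2}(x-\theta)^2$, the grid posterior peaks at the sample mean of the observations, as expected analytically.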
@@ -485,7 +484,7 @@ \section{Conclusions}

We are currently at the dawn of a new era in observational cosmology, when ongoing and upcoming surveys---\eg, DES, LSST, \Euclid, and WFIRST---are expected to discover and deliver images of thousands of strong lensing systems. These will harbor the subtle imprint of dark matter substructure, whose characterization could hold the key to unveiling the particle nature of dark matter. In this paper, we have introduced a powerful machine learning-based method that can be used to uncover the properties of small-scale structure within these lenses and in the Universe at large. The techniques presented have the potential to maximize the information that can be extracted from a complex lens sample and zero in on signatures of new physics.

-The code used to obtain the results in this paper is available at \url{https://github.com/smsharma/StrongLensing-Inference}\href{https://github.com/smsharma/StrongLensing-Inference}~\githubmaster.
+The code used to obtain the results in this paper is available at \url{https://github.com/smsharma/mining-for-substructure-lens}\href{https://github.com/smsharma/mining-for-substructure-lens}~\githubmaster.

\acknowledgments

@@ -504,6 +503,8 @@ \section{Conclusions}
\package{SciPy} \citep{Jones:2001ab}.
}

+\newpage
+
\appendix

\section{Minimum of the loss functional}
