
Commit

Merge branch 'main' into change_readme
haticekaratay authored Dec 10, 2024
2 parents ea7e4d6 + a4be97b commit 015ee69
Showing 52 changed files with 14,580 additions and 14,410 deletions.
17 changes: 16 additions & 1 deletion .github/workflows/weekly_html_accessibility_check.yml
Original file line number Diff line number Diff line change
@@ -3,11 +3,26 @@ on:
schedule:
- cron: '0 4 * * 0' # 0400 UTC every Sunday
workflow_dispatch:
inputs:
total_error_limit:
required: false
description: 'The maximum total allowed number of HTML accessibility errors. To skip this testing requirement, enter the value "-1". If not explicitly specified, the default value is "0".'
default: 0
type: string
total_warning_limit:
required: false
description: 'The maximum total allowed number of HTML accessibility warnings. To skip this testing requirement, enter the value "-1". If not explicitly specified, the default value is "0".'
default: 0
type: string

jobs:
Scheduled:
uses: spacetelescope/notebook-ci-actions/.github/workflows/html_accessibility_check.yml@main
uses: spacetelescope/notebook-ci-actions/.github/workflows/html_accessibility_check.yml@v3
with:
target_url: https://spacetelescope.github.io/${{ github.event.repository.name }}/
python-version: ${{ vars.PYTHON_VERSION }}
total_error_limit: ${{ inputs.total_error_limit || 0 }}
total_warning_limit: ${{ inputs.total_warning_limit || 0 }}

secrets:
A11YWATCH_TOKEN: ${{ secrets.A11YWATCH_TOKEN }}
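The two new `workflow_dispatch` inputs above can be overridden when triggering the check by hand. A minimal sketch using the GitHub CLI — this assumes `gh` is installed and authenticated against the repository; per the input descriptions, `-1` disables the corresponding limit:

```shell
# Hypothetical manual dispatch of the weekly accessibility check with both
# limits disabled. The workflow file name is taken from the diff header above.
CMD='gh workflow run weekly_html_accessibility_check.yml -f total_error_limit=-1 -f total_warning_limit=-1'
echo "$CMD"
# Uncomment to actually trigger the run (requires repo write access):
# eval "$CMD"
```

If the inputs are omitted, the `|| 0` fallback in the `with:` block keeps both limits at the default of zero.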
3 changes: 0 additions & 3 deletions README.md
@@ -9,6 +9,3 @@
```

The repository is currently under construction and not yet complete. As a result, it is recommended that the [older version](https://github.com/spacetelescope/notebooks) be reviewed and referenced instead. While this may cause inconvenience, it is important to ensure that the final version is of the highest quality possible.

# notebook_ci_template
Structural template for notebook CI system
1 change: 1 addition & 0 deletions _toc.yml
@@ -63,6 +63,7 @@ parts:
- file: notebooks/STIS/view_data/view_data.ipynb
- file: notebooks/STIS/extraction/1D_Extraction.ipynb
- file: notebooks/STIS/low_count_uncertainties/Low_Count_Uncertainties.ipynb
- file: notebooks/STIS/contrast_sensitivity/STIS_Coronagraphic_Observation_Feasibility.ipynb
- caption: WFC3
chapters:
- file: notebooks/WFC3/README.md
2 changes: 2 additions & 0 deletions notebooks/.gitignore
@@ -36,3 +36,5 @@ MAST/Kepler/Kepler_TPF/mastDownload/
!**/skyfile.txt
!**/exclusions.txt
!**/inclusions*.txt
!STIS/contrast_sensitivity/ETC_example.jpg
!STIS/contrast_sensitivity/TWHya.txt
@@ -89,7 +89,6 @@
"- *os* for setting environment variables\n",
"- *shutil* for managing directories\n",
"- *numpy* for math and array calculations\n",
"- *collections OrderedDict* for making dictionaries easily\n",
"- *matplotlib pyplot* for plotting\n",
"- *matplotlib.colors LogNorm* for scaling images\n",
"- *astropy.io fits* for working with FITS files\n",
@@ -110,10 +109,10 @@
"import os\n",
"import shutil\n",
"import numpy as np\n",
"from collections import OrderedDict\n",
"import matplotlib.pyplot as plt\n",
"from astropy.io import fits\n",
"from photutils import datasets\n",
"from astropy.modeling.models import Gaussian2D\n",
"from photutils.datasets import make_noise_image, make_model_params, make_model_image\n",
"from astroquery.mast import Observations\n",
"from acstools import acsccd\n",
"from acstools import acscte\n",
@@ -298,7 +297,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"First, we generate a table of random Gaussian sources of typical brightness for our 47 Tuc field with $\\mathrm{FWHM}\\sim2.5$ pixels. Because $\\mathrm{FWHM} = 2.355\\sigma$, we will generate Gaussian sources with $\\sigma \\sim 1.06$ pixels in both $x$ and $y$. "
"First, we generate a table of random Gaussian sources of typical brightness for our 47 Tuc field with $\\mathrm{FWHM}\\sim2.5$ pixels. Because $\\mathrm{FWHM} = 2.355\\sigma$, we will generate Gaussian sources with $\\sigma \\sim 1.06$ pixels in both $x$ and $y$. We use the shape of one of the `flc` image `SCI` extensions to create the $(x, y)$ coordinates of the sources."
]
},
{
@@ -307,27 +306,31 @@
"metadata": {},
"outputs": [],
"source": [
"n_sources = 300\n",
"param_ranges = [('amplitude', [500, 30000]),\n",
" ('x_mean', [0, 4095]),\n",
" ('y_mean', [0, 2047]),\n",
" ('x_stddev', [1.05, 1.07]),\n",
" ('y_stddev', [1.05, 1.07]),\n",
" ('theta', [0, np.pi])]\n",
"wfc2 = fits.getdata('jd0q14ctq_flc.fits', ext=1)\n",
"\n",
"param_ranges = OrderedDict(param_ranges)\n",
"shape = wfc2.shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"n_sources = 300\n",
"\n",
"sources = datasets.make_random_gaussians_table(n_sources, param_ranges, \n",
" seed=12345)\n",
"sources = make_model_params(shape, n_sources, x_name='x_mean', y_name='y_mean',\n",
" amplitude=(500, 30000), x_stddev=(1.05, 1.07), \n",
" y_stddev=(1.05, 1.07), theta=(0, np.pi), seed=12345)\n",
"\n",
"print(sources)"
"sources"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we get the shape of one of the `flc` image `SCI` extensions and make an image from the table of Gaussian sources. Note that this step may take a few minutes. Finally, we run the synthetic image through a Poisson sampler in order to simulate the Poisson noise of the scene."
"Next, we make an image from the table of Gaussian sources. Note that this step may take a few minutes. Then we run the synthetic image through a Poisson sampler in order to simulate the Poisson noise of the scene."
]
},
{
@@ -336,11 +339,9 @@
"metadata": {},
"outputs": [],
"source": [
"wfc2 = fits.getdata('jd0q14ctq_flc.fits', ext=1)\n",
"\n",
"shape = wfc2.shape\n",
"\n",
"synth_stars_image = datasets.make_gaussian_sources_image(shape, sources)\n",
"model = Gaussian2D()\n",
"synth_stars_image = make_model_image(shape, model, sources, \n",
" x_name='x_mean', y_name='y_mean', progress_bar=True)\n",
"\n",
"synth_stars_image = np.random.poisson(synth_stars_image)"
]
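The migration above replaces the removed `make_random_gaussians_table`/`make_gaussian_sources_image` functions with the photutils 2.x `make_model_params`/`make_model_image` API. A numpy-only sketch of what those calls do conceptually, using a smaller hypothetical array in place of the real `SCI` extension; the parameter ranges mirror the notebook, with $\sigma = 2.5/2.355 \approx 1.06$ pixels:

```python
# Numpy-only sketch of the photutils 2.x pipeline above: draw random Gaussian
# source parameters, render them onto an image, then Poisson-sample the scene.
# This is an illustration, not the photutils implementation.
import numpy as np

rng = np.random.default_rng(12345)
shape = (128, 256)      # stand-in for the (2048, 4096) WFC SCI extension
n_sources = 20          # the notebook uses 300

amp = rng.uniform(500, 30000, n_sources)
x_mean = rng.uniform(0, shape[1] - 1, n_sources)
y_mean = rng.uniform(0, shape[0] - 1, n_sources)
sigma = rng.uniform(1.05, 1.07, n_sources)

yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
synth = np.zeros(shape)
for a, x0, y0, s in zip(amp, x_mean, y_mean, sigma):
    # Circular Gaussian; the notebook also draws a random position angle,
    # which matters little when x_stddev and y_stddev are nearly equal.
    synth += a * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * s ** 2))

# Poisson-sample the noiseless scene, as the notebook does
synth_noisy = rng.poisson(synth)
print(synth_noisy.shape, synth_noisy.min() >= 0)
```

The real calls delegate the parameter table to `make_model_params` and the rendering loop to `make_model_image` with an astropy `Gaussian2D` model.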
@@ -382,7 +383,7 @@
"ax.imshow(flc, vmin=0, vmax=200, interpolation='nearest', cmap='Greys_r', origin='lower')\n",
"\n",
"ax.set_xlim(2000, 2800)\n",
"ax.set_ylim(1200, 1700)"
"ax.set_ylim(800, 1300)"
]
},
{
@@ -442,7 +443,7 @@
" markerfacecolor='none', markeredgecolor='red', linestyle='none')\n",
"\n",
"ax.set_xlim(2000, 2800)\n",
"ax.set_ylim(1200, 1700)"
"ax.set_ylim(800, 1300)"
]
},
{
@@ -722,9 +723,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"metadata": {},
"outputs": [],
"source": [
"flt_stars = fits.getdata('jd0q14ctq_stars_ctefmod_flt.fits', ext=1)\n",
@@ -737,15 +736,13 @@
" markerfacecolor='none', markeredgecolor='red', linestyle='none')\n",
"\n",
"ax.set_xlim(2000, 2800)\n",
"ax.set_ylim(1200, 1700)"
"ax.set_ylim(800, 1300)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"metadata": {},
"outputs": [],
"source": [
"flc_stars = fits.getdata('jd0q14ctq_stars_ctefmod_flc.fits', ext=1)\n",
Expand All @@ -758,7 +755,7 @@
" markerfacecolor='none', markeredgecolor='red', linestyle='none')\n",
"\n",
"ax.set_xlim(2000, 2800)\n",
"ax.set_ylim(1200, 1700)"
"ax.set_ylim(800, 1300)"
]
},
{
@@ -829,7 +826,7 @@
"metadata": {},
"outputs": [],
"source": [
"noise_image = datasets.make_noise_image(shape, distribution='poisson', mean=40, seed=12345)\n",
"noise_image = make_noise_image(shape, distribution='poisson', mean=40, seed=12345)\n",
"\n",
"wfc1 += noise_image + synth_stars_image\n",
"wfc2 += noise_image + synth_stars_image\n",
@@ -860,7 +857,7 @@
" markerfacecolor='none', markeredgecolor='red', linestyle='none')\n",
"\n",
"ax.set_xlim(2000, 2800)\n",
"ax.set_ylim(1200, 1700)"
"ax.set_ylim(800, 1300)"
]
},
{
@@ -986,14 +983,14 @@
"rn_C = hdr['READNSEC']\n",
"rn_D = hdr['READNSED']\n",
"\n",
"img_rn_A = datasets.make_noise_image((shape[0], int(shape[1]/2)), distribution='gaussian', \n",
" mean=0., stddev=rn_A)\n",
"img_rn_B = datasets.make_noise_image((shape[0], int(shape[1]/2)), distribution='gaussian', \n",
" mean=0., stddev=rn_B)\n",
"img_rn_C = datasets.make_noise_image((shape[0], int(shape[1]/2)), distribution='gaussian', \n",
" mean=0., stddev=rn_C)\n",
"img_rn_D = datasets.make_noise_image((shape[0], int(shape[1]/2)), distribution='gaussian', \n",
" mean=0., stddev=rn_D)\n",
"img_rn_A = make_noise_image((shape[0], int(shape[1]/2)), distribution='gaussian', \n",
" mean=0., stddev=rn_A)\n",
"img_rn_B = make_noise_image((shape[0], int(shape[1]/2)), distribution='gaussian', \n",
" mean=0., stddev=rn_B)\n",
"img_rn_C = make_noise_image((shape[0], int(shape[1]/2)), distribution='gaussian', \n",
" mean=0., stddev=rn_C)\n",
"img_rn_D = make_noise_image((shape[0], int(shape[1]/2)), distribution='gaussian', \n",
" mean=0., stddev=rn_D)\n",
"\n",
"wfc1_rn = np.hstack((img_rn_A, img_rn_B))\n",
"wfc2_rn = np.hstack((img_rn_C, img_rn_D))"
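The read-noise cells above build one Gaussian noise image per amplifier and stitch the two halves of each chip together. A self-contained numpy sketch of the same construction, with hypothetical read-noise values standing in for the `READNSE*` header keywords:

```python
# Per-amplifier read-noise sketch: each half of a (ny, nx) chip gets Gaussian
# noise with that amplifier's read-noise sigma, then the halves are stitched
# with np.hstack. READNSE* values here are illustrative, not from a real header.
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 128)            # stand-in for the full-frame SCI array
rn_A, rn_B = 4.35, 3.75      # hypothetical read noise per amplifier (electrons)

half = (shape[0], shape[1] // 2)
img_rn_A = rng.normal(0.0, rn_A, half)
img_rn_B = rng.normal(0.0, rn_B, half)

# Stitch the two amplifier halves back into one chip-sized noise image
wfc1_rn = np.hstack((img_rn_A, img_rn_B))
print(wfc1_rn.shape)
```

The notebook's `make_noise_image(..., distribution='gaussian', mean=0., stddev=rn_X)` calls are the photutils equivalent of the `rng.normal` lines here.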
@@ -1290,7 +1287,7 @@
" markerfacecolor='none', markeredgecolor='red', linestyle='none')\n",
"\n",
"ax.set_xlim(2000, 2800)\n",
"ax.set_ylim(1200, 1700)"
"ax.set_ylim(800, 1300)"
]
},
{
@@ -1309,7 +1306,7 @@
" markerfacecolor='none', markeredgecolor='red', linestyle='none')\n",
"\n",
"ax.set_xlim(2000, 2800)\n",
"ax.set_ylim(1200, 1700)"
"ax.set_ylim(800, 1300)"
]
},
{
@@ -1365,7 +1362,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.12"
"version": "3.12.7"
}
},
"nbformat": 4,
4 changes: 2 additions & 2 deletions notebooks/ACS/acs_cte_forward_model/requirements.txt
@@ -3,6 +3,6 @@ astropy>=5.3.3
astroquery>=0.4.6
matplotlib>=3.7.0
numpy>=1.23.4
photutils>=1.6.0
photutils>=2.0.2
crds>=11.17.7
stsci.tools>=4.1.0
stsci.tools>=4.1.0
@@ -40,7 +40,7 @@
"import matplotlib.colors as colors\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# import the focus_diverse_psfs functions from acstools \n",
"# import the focus_diverse_psfs functions from acstools\n",
"from acstools.focus_diverse_epsfs import psf_retriever, multi_psf_retriever, interp_epsf"
]
},
@@ -57,7 +57,7 @@
"id": "c5ce3a38",
"metadata": {},
"source": [
"Let's begin with downloading the focus-diverse ePSF FITS file that matches a single observation of our choosing. For this example, we will aim to retrieve the ePSF file for the observation rootname \"jds408jsq\", from GO-15445 (PI W. Keel). \n",
"Let's begin with downloading the focus-diverse ePSF FITS file that matches a single observation of our choosing. For this example, we will aim to retrieve the ePSF file for the observation rootname \"jds408jsq\", from GO-15445 (PI W. Keel).\n",
"\n",
"Please note that only IPPPSSOOT formats will work (e.g. jds408jsq), and the tool does not support inputs in the form of association IDs or product names (e.g. jds408010 or jds408011).\n",
"\n",
@@ -140,7 +140,7 @@
"\n",
"# open the file with astropy.io\n",
"with fits.open(retrieved_filepath) as hdu:\n",
" hdu.info() # Display basic information about the file"
" hdu.info() # Display basic information about the file"
]
},
{
@@ -165,11 +165,12 @@
"\n",
"\n",
"def show_ePSF(grid_index):\n",
" plt.imshow(ePSFs[grid_index], cmap='viridis', norm=colors.LogNorm(vmin=1e-4), origin='lower')\n",
" plt.imshow(ePSFs[grid_index], cmap='viridis',\n",
" norm=colors.LogNorm(vmin=1e-4), origin='lower')\n",
" cbar = plt.colorbar()\n",
" cbar.set_label('Fractional Energy')\n",
" \n",
" \n",
"\n",
"\n",
"widgets.interact(show_ePSF, grid_index=(0, 89, 1))"
]
},
@@ -277,7 +278,7 @@
"outputs": [],
"source": [
"# # use astroquery to grab ACS/WFC observations from GO-13376 (PI K. McQuinn)\n",
"obsTable = Observations.query_criteria(obs_collection='HST', proposal_id=\"13376\", \n",
"obsTable = Observations.query_criteria(obs_collection='HST', proposal_id=\"13376\",\n",
" instrument_name=\"ACS/WFC\", provenance_name=\"CALACS\")\n",
"\n",
"# retrieve the data products for the above observations\n",
@@ -400,7 +401,7 @@
"source": [
"# # get interpolated ePSF in detector space with specified sub-pixel shifts\n",
"P = interp_epsf(ePSFs, x, y, chip,\n",
" pixel_space=True, \n",
" pixel_space=True,\n",
" subpixel_x=0.77, subpixel_y=0.33)\n",
"\n",
"plt.imshow(P, cmap='viridis', norm=colors.LogNorm(vmin=1e-4), origin='lower')\n",
@@ -762,7 +762,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.12.7"
}
},
"nbformat": 4,
1 change: 0 additions & 1 deletion notebooks/ACS/acs_sbc_dark_analysis/requirements.txt
@@ -3,4 +3,3 @@ astroquery>=0.4.6
drizzlepac>=3.5.1
matplotlib>=3.7.0
numpy>=1.23.4
photutils>=1.12.0 # The drizzlepac needs deprecated methods such as DAOGroup.
4 changes: 2 additions & 2 deletions notebooks/DrizzlePac/align_mosaics/align_mosaics.ipynb
@@ -17,7 +17,7 @@
"\n",
"## Learning Goals\n",
"\n",
"By the end of this notebook tutorial, you will:\n",
"By the end of this notebook tutorial, you will: \n",
"\n",
"- Download WFC3 UVIS & IR images with `astroquery`\n",
"- Check the active WCS (world coordinate system) solution in the FITS images\n",
@@ -1244,7 +1244,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
"version": "3.12.5"
}
},
"nbformat": 4,
1 change: 0 additions & 1 deletion notebooks/DrizzlePac/align_mosaics/requirements.txt
Original file line number Diff line number Diff line change
@@ -4,4 +4,3 @@ drizzlepac>=3.5.1
ipython>=8.11.0
matplotlib>=3.7.0
numpy>=1.23.4
photutils==1.12.0 # The drizzlepac needs deprecated methods such as DAOGroup.
@@ -6,4 +6,3 @@ ipython
matplotlib
numpy
jupyter
photutils==1.12.0
@@ -63,7 +63,7 @@
"## Introduction <a id=\"intro\"></a>\n",
"[Table of Contents](#toc)\n",
"\n",
"This notebook demonstrates aligning long exposures which have relatively few stars and a large number of cosmic rays. It is based on the example described in the ISR linked here ([ACS ISR 2015-04: Basic Use of SExtractor Catalogs With TweakReg - I](https://ui.adsabs.harvard.edu/abs/2015acs..rept....4L/abstract)), but uses a much simpler methodology.\n",
"This notebook demonstrates aligning long exposures which have relatively few stars and a large number of cosmic rays. It is based on the example described in the ISR linked here ([ACS ISR 2015-04: Basic Use of SExtractor Catalogs With TweakReg - I](https://ui.adsabs.harvard.edu/abs/2015acs..rept....4L/abstract)), but uses a much simpler methodology. \n",
"\n",
"Rather than making use of external software (e.g. [SExtractor](http://www.astromatic.net/software/sextractor)) and going through the extra steps to create 'cosmic-ray cleaned' images for each visit, this notebook demonstrates new features in `TweakReg` designed to mitigate false detections.\n",
"\n",
@@ -1013,7 +1013,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.6"
"version": "3.12.5"
}
},
"nbformat": 4,
