Discussion about exposure level basic reduction using LAGER data #1

dr-guangtou opened this issue Dec 9, 2020 · 0 comments
dr-guangtou commented Dec 9, 2020

  • Prepared by Song Huang: 2020-12-09

  • Data structure on tiger and lux:

    • Uploading the LAGER data to tiger [On-going]
    • Downloading Gaia DR2 catalogs from cori to tiger. [Done]
    • How to sync data between tiger and lux?
    • Propose a data structure
      • merian/raw
      • merian/hsc
      • merian/coadd
      • merian/results
      • merian/astrometry
      • merian/external/decals
      • merian/external/lager
  • Exposure level

    • caterpillar: start an Exposure class

      • Organize all the metadata in the primary header
      • Visualize the CCDs: organize each CCD into a Polygon shape. This polygon can be used to test whether the CCD covers a coordinate, or whether the polygon touches one brick or patch.
      • Summarize the metadata from each image HDU into a catalog; needs to be combined with the necessary information from the primary HDU
      • Visualize each exposure: color-coded by properties of each CCD.
      • Visualize the whole focal plane [TODO]
      • Can "expand" the image HDUs into individual files:
        • How should the files be named? _img, _msk, _inv
        • The inverse variance image needs to be cleaned: there are pixels with negative values
        • The image is in ADU units, without background subtraction
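The polygon idea in the Exposure notes above (organizing each CCD into a shape that can be tested against a coordinate) can be sketched with a plain ray-casting test. `point_in_polygon` and the corner list are illustrative names only; a library such as shapely would be the sturdier choice for brick/patch overlap tests:

```python
def point_in_polygon(ra, dec, corners):
    """Ray-casting test: does (ra, dec) fall inside the CCD polygon?

    `corners` is a list of (ra, dec) vertices in order, e.g. the four
    CCD corners read from the image-HDU WCS.  Small-footprint flat-sky
    approximation; fine for a single DECam CCD away from the pole.
    """
    inside = False
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        # Count edges that the horizontal ray from (ra, dec) crosses.
        if (y1 > dec) != (y2 > dec):
            x_cross = x1 + (dec - y1) * (x2 - x1) / (y2 - y1)
            if ra < x_cross:
                inside = not inside
    return inside
```

The same polygon can be intersected with brick or patch boundaries by testing their corners in both directions.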
    • caterpillar: start a Dataset class to hold the necessary information about a dataset. A dataset can be defined as all the exposures from one survey, like LAGER, or exposures from one night of Merian observation.

      • For a dataset like LAGER, can output a summary catalog for the whole observation.
      • Should indicate where the calibration data are: astrometric calibration data, filter response curves, photometric calibration related information (color terms; template spectra), and even ancillary data.
      • Organize the metadata into a CCD table for the whole dataset
      • Organize all the polygon regions for each CCD into a data structure
      • Design a way to query CCDs in a certain area, or to retrieve a specific CCD.
      • Visualize the exposures: check the footprint, check for gaps, evaluate "uniformity".
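The Dataset ideas above (a CCD table plus a queryable collection of CCD footprints) can be sketched with simple bounding boxes; the class and method names here are hypothetical, and a production version would use proper polygon tests or a spatial index:

```python
class Dataset:
    """Minimal sketch of a dataset-level CCD footprint table.

    Each CCD is stored with its bounding box in (ra, dec); query()
    returns every CCD whose box covers a coordinate.  A real version
    would keep full polygons and the per-CCD metadata catalog too.
    """

    def __init__(self):
        # (ccd_id, ra_min, ra_max, dec_min, dec_max)
        self.ccds = []

    def add_ccd(self, ccd_id, corners):
        ras = [c[0] for c in corners]
        decs = [c[1] for c in corners]
        self.ccds.append((ccd_id, min(ras), max(ras), min(decs), max(decs)))

    def query(self, ra, dec):
        """IDs of all CCDs whose bounding box covers (ra, dec)."""
        return [cid for cid, r0, r1, d0, d1 in self.ccds
                if r0 <= ra <= r1 and d0 <= dec <= d1]
```

The same table drives the footprint and gap visualizations: plot every box and count overlaps per sky position to evaluate "uniformity".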
    • caterpillar: start an Image class:

      • Each image class should hold the image, data quality mask, and weight map
      • Can generate QA plot to show the three images.
      • This is the place where all the basic actions to the CCD data can be applied.
      • Gather the astrometric and photometric calibration datasets:
        • PS-1 catalog stars: photometry
        • Gaia DR2 catalog stars: for astrometric calibration and bright star mask
          • TODO: Do we need to do something about the galaxies in the Gaia data?
          • TODO: Correct for the epoch difference; estimate the RA, Dec at the time of observation.
        • Gaia EDR3: we have the simple FITS catalog, not in the astrometry.net format.
        • Point sources from HSC data release.
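The epoch-difference correction mentioned above (estimating the Gaia star positions at the time of observation) is, to first order, a linear proper-motion shift. This is a hand-rolled sketch with hypothetical argument names; astropy's `SkyCoord.apply_space_motion` is the full treatment (it also handles parallax and radial velocity):

```python
import math

def correct_epoch(ra, dec, pm_ra_cosdec, pm_dec, dt_yr):
    """Shift a Gaia (ra, dec) [deg] from the catalog epoch to the
    observation epoch, dt_yr years later.

    Proper motions are in mas/yr; pm_ra_cosdec already includes the
    cos(dec) factor (the Gaia convention), so it is divided back out
    to get the shift in raw RA degrees.  Linear approximation only.
    """
    mas_to_deg = 1.0 / 3.6e6
    ra_new = ra + pm_ra_cosdec * dt_yr * mas_to_deg / math.cos(math.radians(dec))
    dec_new = dec + pm_dec * dt_yr * mas_to_deg
    return ra_new, dec_new
```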
    • CCD basic reduction actions:

      • Background subtraction: using metadata; using simple median or mean; using sep or other astropy methods.
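The "simple median or mean" option above can be sketched as a global sigma-clipped median; `subtract_background` is an illustrative name, and `sep.Background` is the richer alternative (a full 2-D background mesh rather than one global level):

```python
import numpy as np

def subtract_background(img, n_iter=5, sigma=3.0):
    """Global sigma-clipped median background estimate.

    Iteratively clips pixels more than `sigma` standard deviations from
    the median, so bright sources do not bias the sky level.  Returns
    the background-subtracted image and the background value.
    """
    data = img.ravel().astype(float)
    mask = np.ones(data.size, dtype=bool)
    for _ in range(n_iter):
        med = np.median(data[mask])
        std = np.std(data[mask])
        mask = np.abs(data - med) <= sigma * std
    return img - med, med
```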
      • Cosmic ray removal: astroscrappy seems to work fine
        • Still need to find the balance between "remove all possible CRs" and "don't affect real galaxies". Need more tests and more visual inspection (VI). (This is not just for LAGER; we need to do the same for Merian.) [Need help. Everyone?]
        • Need to update the CR mask to the data quality mask. [TODO]
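As a toy illustration of the two steps above, here is a crude "sharp outlier" flagger plus the mask-plane update; this is only a stand-in for `astroscrappy.detect_cosmics` (L.A.Cosmic), which properly handles CR trails, edges, and the fine-structure test, and the `CR_BIT` mask bit is a made-up name:

```python
import numpy as np

CR_BIT = 8  # hypothetical bit value for the new cosmic-ray mask plane

def flag_cosmic_rays(img, k=10.0):
    """Flag pixels exceeding their local 3x3 median by k * sky sigma.

    The local median leaves real, extended galaxy profiles mostly
    untouched while single-pixel CR hits stand far above it.  Sky sigma
    comes from the median absolute deviation (robust to sources).
    """
    pad = np.pad(img, 1, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    local_med = np.median(win, axis=(2, 3))
    sky_sigma = 1.4826 * np.median(np.abs(img - np.median(img)))
    return (img - local_med) > k * sky_sigma

def update_dq_mask(dq, crmask):
    """OR the CR detections into the data quality mask."""
    return dq | (crmask.astype(dq.dtype) * CR_BIT)
```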
      • Identify satellite trails: [Need help. Joseph?]
        • Some are already present in the data quality mask, but we don't know which ones specifically. Need more VI.
        • Can try a Hough transform algorithm.
        • Also need to think about the ways to "correct" them.
        • Add a new mask plane and mask bit for it.
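The Hough transform idea above can be sketched in a few lines of numpy: vote each flagged pixel into (theta, rho) line-parameter bins and take the strongest peak as the trail. `hough_peak` is an illustrative name; `skimage.transform.hough_line` / `hough_line_peaks` would be the production route:

```python
import numpy as np

def hough_peak(mask, n_theta=180):
    """Find the dominant straight line in a binary detection mask.

    Each nonzero pixel (x, y) votes for all lines through it, binned by
    rho = x*cos(theta) + y*sin(theta).  A satellite trail produces a
    sharp peak in the accumulator.  Returns (theta [rad], rho [pix]).
    """
    ys, xs = np.nonzero(mask)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*mask.shape)))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for i, t in enumerate(thetas):
        rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc[i], rho, 1)
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[i], j - diag
```

Pixels within a buffer of the recovered line would then be written into the new satellite-trail mask plane.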
      • Detection: sep detection:
        • Detect all objects on the CR and satellite trail removed images
        • Match to Gaia and PS-1 stars.
        • Perform simple aperture photometry: very initial check of astrometric calibration and photometric calibration using color terms
          • Visualize the center shift between DECam and Gaia, PS-1, and HSC
          • Using the initial photometric zeropoint provided in the metadata, establish the basic relation between DECam and PS-1, Gaia, and HSC photometry [Need help. Yi-Fei?]
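The aperture-photometry and zeropoint checks above can be sketched with numpy; both function names are illustrative, and in practice `sep.sum_circle` (exact pixel weighting) would replace the boolean-mask aperture:

```python
import numpy as np

def aperture_flux(img, x, y, r=5.0):
    """Sum of pixels within radius r of (x, y) on a background-subtracted
    image.  Crude pixel-center aperture; sep.sum_circle does this with
    exact sub-pixel overlap."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return img[(xx - x) ** 2 + (yy - y) ** 2 <= r ** 2].sum()

def zeropoint(instr_flux, ref_mag):
    """Photometric zeropoint from stars matched to a reference catalog
    (PS-1, Gaia, or HSC): ZP = median(ref_mag + 2.5 * log10(flux)).
    Comparing this to the header zeropoint, per color term, gives the
    basic DECam-to-reference relation."""
    return np.median(ref_mag + 2.5 * np.log10(instr_flux))
```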
      • Initial point source photometry and PSF model [TODO]
        • Depends on whether we can model stars on single exposure.
        • There is DIMM seeing information in the primary header (most of the time); can it be used as a prior? Or use the FWHM provided in the header of each HDU.
        • Use PSFEx or something simpler?
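On the "something simpler" end, a Gaussian-equivalent FWHM per star can come straight from intensity-weighted second moments; this sketch (hypothetical `moment_fwhm`) would give a quick per-exposure seeing check to compare against the header DIMM/FWHM values, with PSFEx as the full PSF-model option:

```python
import numpy as np

def moment_fwhm(cutout):
    """Gaussian-equivalent FWHM from the second moments of a
    background-subtracted star cutout.

    sigma^2 is the mean of the x and y central second moments, and
    FWHM = 2.3548 * sigma for a Gaussian profile.
    """
    yy, xx = np.mgrid[0:cutout.shape[0], 0:cutout.shape[1]]
    total = cutout.sum()
    xbar = (xx * cutout).sum() / total
    ybar = (yy * cutout).sum() / total
    var = (((xx - xbar) ** 2 + (yy - ybar) ** 2) * cutout).sum() / (2 * total)
    return 2.3548 * np.sqrt(var)
```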
    • Visual Inspection [TODO]

      • We can do this using the LAGER data.
      • I will generate QA plots for each CCD:
        • Can toggle mask on and off.
        • Look for optical defects that are not reflected in the data quality mask or the weight map
        • Look for cosmic rays and satellite trails that are not masked out.
        • Can provide a fiducial cosmic ray removed image to check.
        • Can toggle Gaia or PS-1 stars on and off (?)
        • Provide the DECaLS or HSC images to compare with (?)
        • How to share data and keep notes? Would be great to have an interactive website (QA plots can be hosted at tiger or lux) [Need help? Maybe a student with this experience]
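A minimal version of the per-CCD QA figure described above (image, data quality mask, and weight map side by side, with the mask toggleable) could look like this; `ccd_qa_plot` is a hypothetical name, and the panel titles follow the proposed `_img`/`_msk`/`_inv` file suffixes:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for batch QA generation
import matplotlib.pyplot as plt
import numpy as np

def ccd_qa_plot(img, msk, inv, show_mask=True):
    """Three-panel QA figure for one CCD: image, DQ mask, inverse variance.

    `show_mask=False` leaves the middle panel blank, the batch analogue
    of toggling the mask overlay off during visual inspection.
    """
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    vmin, vmax = np.percentile(img, [1, 99])
    axes[0].imshow(img, vmin=vmin, vmax=vmax, origin="lower", cmap="gray")
    axes[0].set_title("_img")
    if show_mask:
        axes[1].imshow(msk, origin="lower", cmap="viridis")
    axes[1].set_title("_msk")
    axes[2].imshow(inv, origin="lower", cmap="gray")
    axes[2].set_title("_inv")
    for ax in axes:
        ax.set_xticks([])
        ax.set_yticks([])
    return fig
```

Overplotting Gaia/PS-1 stars or a DECaLS/HSC comparison panel would be straightforward extensions; the interactive toggling itself needs the website layer discussed above.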
    • Points for discussion:

      • An important question to answer: can we calibrate individual exposures, or do we have to stack them first?
        • This affects our observing strategy: if it is better to calibrate the coadd, then we need to make sure we can cover a useful area to full depth in one night (minimizing the change in observing conditions and ensuring we have calibrated coadd data to work on).
          - We should be able to do that.
      • The use of HSC data:
        • If we need to calibrate each CCD, we can gather the HSC coadd in that specific CCD area and make direct comparison.
        • We can homogenize the seeing, rebin the HSC images onto the Merian pixel grid, and check the "differential image".
        • We can also get the hscPipe detection catalogs.
        • Right now I am doing this using unagi and NAOJ server. But on tiger, we can take advantage of the local HSC data. However, this option is not available on lux.
        • Do we want to trust the HSC pipeline catalog, or do we want to do our own detection (e.g. iterative detection and tractor modeling, as in the DECaLS DR9 reduction)? We need to figure this out: find a sample of dwarfs and check the catalog results of different HSC pipeline outputs. [Need help. Diana?]
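The seeing-homogenization step mentioned above (degrading the sharper HSC image to Merian seeing before forming the differential image) reduces, under a Gaussian-PSF assumption, to convolving with a Gaussian whose FWHM adds in quadrature: FWHM_kernel² = FWHM_merian² − FWHM_hsc². This is a minimal sketch with a hypothetical function name; a Moffat kernel and proper resampling would be more realistic:

```python
import numpy as np

def homogenize_seeing(hsc_img, fwhm_hsc, fwhm_merian):
    """Convolve the HSC image with a Gaussian so its effective seeing
    matches the Merian exposure (both FWHM values in pixels).

    Uses a separable 1-D kernel applied along rows then columns, which
    conserves flux away from the image edges.
    """
    sig = np.sqrt(fwhm_merian ** 2 - fwhm_hsc ** 2) / 2.3548
    n = int(4 * sig) * 2 + 1          # kernel out to ~4 sigma each side
    x = np.arange(n) - n // 2
    k = np.exp(-0.5 * (x / sig) ** 2)
    k /= k.sum()
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, hsc_img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, rows)
```

Subtracting the homogenized, rebinned HSC image from the Merian CCD then gives the differential image to inspect.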
@dr-guangtou dr-guangtou added documentation Improvements or additions to documentation help wanted Extra attention is needed labels Dec 9, 2020
@dr-guangtou dr-guangtou self-assigned this Dec 9, 2020