Merge pull request #105 from geminiplanetimager/unit-testing
Unit testing PR to merge
mperrin authored Oct 12, 2016
2 parents 9245e06 + 3c702a5 commit 696eee3
Showing 15 changed files with 425 additions and 2 deletions.
4 changes: 2 additions & 2 deletions backbone/gpi_launch_pipeline.pro
Original file line number Diff line number Diff line change
@@ -9,10 +9,10 @@
; KEYWORDS:
;
; /noexit Don't exit IDL after the pipeline is closed
; /rescanDB Create a new calibrations DB by reading all files in a given directory
; /rescanDB Rescan & recreate the calibrations DB by reading all files in
; the given directory on startup
; /flushqueue DELETE any DRFs present in the queue on startup (dangerous,
; mostly for debugging purposes)
; /rescan Rescan & recreate the calibration files DB on startup
; /verbose Display more output than usual, mostly for debugging
; purposes
; /ignoreconflict Don't stop running if another instance is already running.
28 changes: 28 additions & 0 deletions documentation/faq.rst
@@ -197,6 +197,34 @@ licensing issues before continuing (or install the compiled version of the
pipeline!).


I'm seeing a cryptic error about "ERROR: could not get a lock on the inter-IDL queue semaphore after 10 tries." What does this mean and how can I fix it?
---------------------------------------------------------------------------------------------------------------------------------------------------------------


The full error message looks like::

ERROR: could not get a lock on the inter-IDL queue semaphore after 10 tries.
Failed to queue command gpitv, 'something_or_other.fits'
If the lock on the inter-IDL queue is not being released properly,
use the pipeline setting launcher_force_semaphore_name to pick a different lock.


The GPI pipeline uses two separate IDL sessions: one for the actual execution of reduction recipes, and one for running the GUIs like GPITV. These send messages back and forth to each other using shared
memory and a Unix semaphore for interprocess communication. Sometimes, when you restart the pipeline, it cannot get a write lock on the semaphore with the default name. Exactly why this
happens is frustratingly unclear, but we think it has to do with some past IDL session not properly releasing the handle even after the process has exited. The exact details remain murky, hidden deep under layers
of Unix arcana.

In any case there is an easy workaround: just tell the pipeline to use some other semaphore name for communicating between the two IDL sessions. Edit your :ref:`config-textfiles` user config file (probably named ``~/.gpi_pipeline_settings`` in your home directory) to specify another semaphore name via the setting mentioned in the error message::

launcher_force_semaphore_name Type_pretty_much_any_arbitrary_string_here

Type, well, pretty much anything you want there for the second part. Then restart the pipeline and the error should be cleared.

For some reason, this problem seems to crop up more often on the Gemini summit computers than anywhere else. (?!?)




.. _faq_gpitv:

GPItv
1 change: 1 addition & 0 deletions primitives/gpi_flexure_crosscorrelation_with_polcal.pro
@@ -106,6 +106,7 @@ function gpi_flexure_crosscorrelation_with_polcal, DataSet, Modules, Backbone

if (badpix eq 1) then begin
badpix_file = (backbone_comm->getgpicaldb())->get_best_cal_from_header('badpix',*(dataset.headersphu)[numfile],*(dataset.headersext)[numfile])
if strc(badpix_file) eq '-1' then return, error('FAILURE ('+functionName+'): no bad pixel file available but badpix flag is set')
badpix = gpi_READFITS(badpix_file)
ones = bytarr(2048,2048)+1
badpix = ones-badpix
21 changes: 21 additions & 0 deletions tests/.gitignore
@@ -0,0 +1,21 @@
# Copied from OSIRIS DRP tests - needs update for GPI still
test_*/*.fits
test_*/contrast

# DRP Log files.
test_*/*.log
test_*/*.log.txt

# DRFs which are used by an active queue.
test_*/*.done
test_*/*.waiting
test_*/*.failed

# Master DRP logs
logs/*

# Need to solve the big-data in repo problem.
test_*/expected/*

*.pyc
*/__pycache__/*
114 changes: 114 additions & 0 deletions tests/README.md
@@ -0,0 +1,114 @@

# GPI DRP automated testing

This directory implements automated tests for the GPI Data Reduction Pipeline



## Running tests

This test setup is a work in progress, aimed at members of the GPIES team. It assumes you have access to the
GPIES team Dropbox area, in which we store data files for tests. Anyone outside the GPI team who is interested in
helping with test development or running these tests themselves should contact Kate Follette or Marshall Perrin.

1. Ensure your copy of the GPIES dropbox is syncing the top-level folder "Pipeline_Releases"

2. Define an environment variable ``$GPIES_DROPBOX_ROOT`` to point to the file
path of your installed copy of the Dropbox (GPI) folder. This will probably be
something like "/Users/yourusername/Dropbox (GPI)" on macOS, but may differ
depending on your installation choices. (I.e. this variable should point to the
parent directory containing the "Pipeline_Releases" directory mentioned just
above.)
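For example, on a Unix-like system you might set this in your shell startup file. This is just a sketch; the exact path is an assumption and depends on where Dropbox was installed on your machine:

```shell
# Hypothetical default Dropbox location -- adjust to your own setup.
export GPIES_DROPBOX_ROOT="$HOME/Dropbox (GPI)"

# Sanity check: this should print the path to the synced test data folder.
echo "$GPIES_DROPBOX_ROOT/Pipeline_Releases"
```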

3. Turn on file overwriting in the pipeline settings so that the pipeline silently overwrites files instead of prompting you first, since interactive overwrite prompts do not work in /nogui mode. In the ``config/pipeline_settings.txt`` configuration file, enable overwriting of duplicate files with the following line: ``file_overwrite_handling overwrite``.

4. Run the test suite by changing to the tests directory and invoking py.test from the command line:

```
cd pipeline/tests
py.test
```

5. After all tests run (which will take a few minutes), pytest will output the
number of passed (or failed) tests. Test output files (FITS files, contrast
curves, and pipeline log files) will be written into each of the ``test_n``
subdirectories. You can manually examine them if desired.




## Writing a new test

Tests are defined by a directory containing a GPI pipeline reduction recipe and a python test script
compatible with the pytest framework, plus a separate matching directory containing the relevant FITS files.

To write a new test:

1. Make a new directory with a unique test name. For this example, we
will call it "mytest".
```
%> mkdir test_mytest
```

2. Copy ``test_spec/test_spec.py`` as a template.
```
%> cp test_spec/test_spec.py test_mytest/test_mytest.py
```

3. Make an XML Recipe file to process raw data into the cube(s) you will use
for testing. See ``test_spec/test_spec.xml`` as an example. You can make these
in the usual way with the GPI Recipe Editor. Note, you should try to name the
recipe file to match the test case name.

You should then manually edit that recipe to customize a few directory paths.
In particular, set the input directory using the ``$GPIES_DROPBOX_ROOT`` environment
variable, and set the output directory to ``"."``:
```
InputDir="${GPIES_DROPBOX_ROOT}/Pipeline_Releases/pipeline_testing_dataset/test_pol" OutputDir="."
```

4a. Please **do not** check any FITS files into the git repository. Instead,
place them inside ``Dropbox (GPI)/Pipeline_Releases/pipeline_testing_dataset/test_mytest``.
The directory name here on Dropbox *must* exactly match the name of the directory created in
the source code tests in step 1 above.

4b. You should also create a subdirectory ``cal_files`` inside that directory on Dropbox. Place inside it
ALL calibration files required to reduce your test dataset. Note that no other files in
your usual pipeline calibrations directory will be visible or accessible to the test suite! This
is necessary to ensure precise repeatability of tests.

5. Edit your new file ``test_mytest.py`` to set the ``recipename`` variable to
the name of the XML file you just created. If desired, modify the test function to
add any additional test assertions which you want to apply to the output files.

6. Try running your test:
```
%> cd pipeline/tests
%> py.test test_mytest
```
Iterate, adjusting the test setup as needed until the test passes as desired.

7. Check the recipe XML file and the Python test file into git.
```
%> git add test_mytest
%> git commit -m "Adding a new test: mytest"
```


## Requirements

Python packages:
* pytest
* astropy
* gpipy (not on PyPI; install from https://github.com/geminiplanetimager/gpipy/)

Using the test data assumes you have access to the GPI Exoplanet Survey shared Dropbox area.

You will also need a working copy of the latest GPI data reduction pipeline, of course.


## Credits

By Kate Follette and Marshall Perrin

Inspired by and partially adapted from the Keck OSIRIS DRP
unit tests: https://github.com/Keck-DataReductionPipelines/OsirisDRP/tree/develop



4 changes: 4 additions & 0 deletions tests/conftest.py
@@ -0,0 +1,4 @@
# -*- coding: utf-8 -*-
# Pytest configuration file.

from drptests.fixtures import *
1 change: 1 addition & 0 deletions tests/drptests/__init__.py
@@ -0,0 +1 @@
# GPI DRP tests infrastructure
36 changes: 36 additions & 0 deletions tests/drptests/diff.py
@@ -0,0 +1,36 @@
# -*- coding: utf-8 -*-
# Forked from diff.py in github.com/Keck-DataReductionPipelines/OsirisDRP/

from io import StringIO

from astropy.io import fits
from astropy.io.fits.diff import FITSDiff

__all__ = ['fits_gpi_allclose']

def fits_gpi_allclose(a, b):
    """Assert that two GPI fits files are close."""

    a = fits.open(a)
    b = fits.open(b)

    try:
        del a[0].header['COMMENT']
        del b[0].header['COMMENT']

        report = StringIO()
        diff = FITSDiff(
            a, b,
            ignore_keywords=["COMMENT"],
            ignore_comments=["SIMPLE"],
            ignore_fields=[],
            ignore_blanks=True,
            ignore_blank_cards=True,
            tolerance=1e-5)
        diff.report(fileobj=report)
        assert diff.identical, report.getvalue()

    finally:
        a.close()
        b.close()
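As a standalone usage sketch (not part of the committed code), the same astropy comparison machinery can be exercised directly on in-memory HDU lists, since ``FITSDiff`` accepts HDULists as well as filenames; the data here are invented purely for illustration:

```python
# Sketch: compare two FITS structures the same way fits_gpi_allclose does.
import numpy as np
from astropy.io import fits
from astropy.io.fits.diff import FITSDiff

a = fits.HDUList([fits.PrimaryHDU(data=np.zeros((4, 4)))])
b = fits.HDUList([fits.PrimaryHDU(data=np.zeros((4, 4)))])

# Identical data and headers (modulo ignored COMMENT cards) compare equal
diff = FITSDiff(a, b, ignore_keywords=["COMMENT"], ignore_blanks=True)
assert diff.identical, diff.report()
```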
55 changes: 55 additions & 0 deletions tests/drptests/fixtures.py
@@ -0,0 +1,55 @@
import inspect
import os

import gpipy

import pytest


# Test fixtures for GPI pipeline testing.
# This file contains common test infrastructure functions which should
# be made available to all of the individual tests, via the py.test
# Fixtures functionality.

__all__ = ['pipeline', 'test_dir', 'patch_pipeline_dir_env_vars']

@pytest.fixture(scope="module")
def pipeline(request):
    """Fixture for preparing a handle to the pipeline. This returns an
    instance that can be used to run a given recipe."""
    pipeline = gpipy.PipelineDriver()
    return pipeline


@pytest.fixture
def test_dir(request):
    """Fixture for returning the directory name of a given test,
    and making that the current working directory.
    This is used to ensure that pipeline outputs are written
    into the test's own directory.
    """
    dirname = os.path.dirname(str(request.fspath))
    os.chdir(dirname)
    return dirname

@pytest.fixture(autouse=True)
def patch_pipeline_dir_env_vars(monkeypatch, request):
    """ Override default directories with test directories"""

    # Write pipeline log files to the local directory for each test
    monkeypatch.setenv("GPI_DRP_LOG_DIR", os.path.dirname(str(request.fspath)))

    # Use a special directory for calibrations, set up to support these tests
    gpiesroot = os.getenv("GPIES_DROPBOX_ROOT")
    if gpiesroot is not None:
        # Each test directory set up here in git has a corresponding directory
        # on Dropbox, which contains the relevant input data file and calibration
        # files. Figure out the directory name used for the python test file:
        mydirname = os.path.basename(os.path.dirname(str(request.fspath)))
        # and from that set GPI_CALIBRATIONS_DIR to the corresponding one
        my_cal_dir = os.path.join(gpiesroot, "Pipeline_Releases",
                                  "pipeline_testing_dataset", mydirname)
        monkeypatch.setenv("GPI_CALIBRATIONS_DIR", my_cal_dir)
3 changes: 3 additions & 0 deletions tests/requirements.txt
@@ -0,0 +1,3 @@
astropy
pytest
six
36 changes: 36 additions & 0 deletions tests/test_pol/HR4796_PolTest.xml
@@ -0,0 +1,36 @@
<?xml version="1.0" encoding="UTF-8"?>
<recipe Name="Basic Polarization Sequence (From Raw Data)" ReductionType="PolarimetricScience" ShortName="basicpolsequence">
<!-- recipe written by kbf on subaru.mobile.asu.edu at 2016-09-14T21:57:27 UTC -->
<!-- created with the Recipe Editor GUI -->
<dataset InputDir="${GPIES_DROPBOX_ROOT}/Pipeline_Releases/pipeline_testing_dataset/test_pol" OutputDir=".">
<fits FileName="S20140422S0280.fits.gz" />
<fits FileName="S20140422S0281.fits.gz" />
<fits FileName="S20140422S0282.fits.gz" />
<fits FileName="S20140422S0283.fits.gz" />
<fits FileName="S20140422S0284.fits.gz" />
<fits FileName="S20140422S0285.fits.gz" />
<fits FileName="S20140422S0286.fits.gz" />
<fits FileName="S20140422S0287.fits.gz" />
<fits FileName="S20140422S0288.fits.gz" />
<fits FileName="S20140422S0289.fits.gz" />
<fits FileName="S20140422S0290.fits.gz" />
<fits FileName="S20140422S0291.fits.gz" />
</dataset>
<primitive name="Load Polarimetry Spot Calibration" CalibrationFile="AUTOMATIC" />
<primitive name="Smooth polarization calibration" Boxsize="10" />
<primitive name="Subtract Dark Background" CalibrationFile="AUTOMATIC" RequireExactMatch="0" Interpolate="0" Save="0" gpitv="0" />
<primitive name="Flexure 2D x correlation with polcal" method="Auto" range="0.3" resolution="0.01" psf_sep="0.01" stopidl="0" configuration="tight" x_off="0" y_off="0" badpix="1" iterate="1" max_iter="15" manual_dx="0." manual_dy="0." />
<primitive name="Destripe science image" method="calfile" abort_fraction="0.9" chan_offset_correction="1" readnoise_floor="0.0" Save_stripes="0" Display="-1" remove_microphonics="1" method_microphonics="1" CalibrationFile="AUTOMATIC" Plot_micro_peaks="no" save_microphonics="no" micro_threshold="0.01" write_mask="0" fraction="0.7" Save="0" gpitv="0" />
<primitive name="Interpolate bad pixels in 2D frame" CalibrationFile="AUTOMATIC" method="all8" Save="0" gpitv="0" negative_bad_thresh="-50" before_and_after="0" usedq="0" />
<primitive name="Assemble Polarization Cube" Save="0" gpitv="0" Method="PSF" />
<primitive name="Divide by Low Spatial Freq. Polarized Flat Field" CalibrationFile="AUTOMATIC" Save="1" gpitv="2" />
<primitive name="Interpolate bad pixels in cube" method="NEW" threshold="1.2" Save="0" gpitv="2" before_and_after="0" />
<primitive name="Measure Star Position for Polarimetry" x0="140" y0="140" search_window="5" mask_radius="50" highpass="1" lower_threshold="-100" Save="0" gpitv="0" />
<primitive name="Measure Satellite Spot Flux in Polarimetry" Save="1" gpitv="0" Aperture="4" Inskyrad="6" Outskyrad="9" ShowAperture="0" FindPSFCENT="0" STARXCEN="145" STARYCEN="148" Companion="0" StarXPos="98" StarYPos="121" StarAperture="8" StarInnerSkyRad="12" StarOuterSkyRad="16" Verbose="0" />
<primitive name="Accumulate Images" Method="OnDisk" />
<primitive name="Clean Polarization Pairs via Double Difference" fix_badpix="1" Save_diffbias="0" gpitv_diffbias="10" Save="0" debug="0" />
<primitive name="Subtract Mean Stellar Polarization" Method="Auto" InnerRadius="-1" OuterRadius="20" Fraction="1" WriteToFile="0" Filename="Stellar_Pol_Stokes.txt" Save="1" gpitv="2" />
<primitive name="Rotate North Up" Rot_Method="CUBIC" Center_Method="HEADERS" centerx="140" centery="140" pivot="0" Save="0" gpitv="0" />
<primitive name="Combine Polarization Sequence" HWPoffset="-29.14" IncludeSystemMueller="0" IncludeSkyRotation="1" PerfectHWP="0" Save="1" gpitv="10" />
</recipe>

24 changes: 24 additions & 0 deletions tests/test_pol/template_recipe_pol_basicpolsequence.xml
@@ -0,0 +1,24 @@
<?xml version="1.0" encoding="UTF-8"?>
<recipe Name="Basic Polarization Sequence (From Raw Data)" ReductionType="PolarimetricScience" ShortName="basicpolsequence">
<!-- recipe written by max on at 2015-05-03T05:24:30 UTC -->
<!-- created with the Recipe Editor GUI -->
<dataset InputDir="${GPI_REDUCED_DATA_DIR}150423" OutputDir="${GPI_REDUCED_DATA_DIR}150423">
</dataset>
<primitive name="Load Polarimetry Spot Calibration" CalibrationFile="AUTOMATIC" />
<primitive name="Smooth polarization calibration" Boxsize="10" />
<primitive name="Subtract Dark Background" CalibrationFile="AUTOMATIC" RequireExactMatch="0" Interpolate="0" Save="0" gpitv="0" />
<primitive name="Flexure 2D x correlation with polcal" method="Auto" range="0.3" resolution="0.01" psf_sep="0.01" stopidl="0" configuration="tight" x_off="0" y_off="0" badpix="1" iterate="1" max_iter="15" manual_dx="0." manual_dy="0." />
<primitive name="Destripe science image" method="calfile" abort_fraction="0.9" chan_offset_correction="1" readnoise_floor="0.0" Save_stripes="0" Display="-1" remove_microphonics="1" method_microphonics="1" CalibrationFile="AUTOMATIC" Plot_micro_peaks="no" save_microphonics="no" micro_threshold="0.01" write_mask="0" fraction="0.7" Save="0" gpitv="0" />
<primitive name="Interpolate bad pixels in 2D frame" CalibrationFile="AUTOMATIC" method="all8" Save="0" gpitv="0" negative_bad_thresh="-50" before_and_after="0" />
<primitive name="Assemble Polarization Cube" Save="0" gpitv="0" Method="PSF" />
<primitive name="Divide by Low Spatial Freq. Polarized Flat Field" CalibrationFile="AUTOMATIC" Save="1" gpitv="2" />
<primitive name="Interpolate bad pixels in cube" Save="0" gpitv="2" before_and_after="0" />
<primitive name="Measure Star Position for Polarimetry" x0="140" y0="140" search_window="5" mask_radius="50" highpass="1" lower_threshold="-100" Save="0" gpitv="0" />
<primitive name="Measure Satellite Spot Flux in Polarimetry" Save="1" gpitv="0" Aperture="4" Inskyrad="6" Outskyrad="9" ShowAperture="0" FindPSFCENT="0" STARXCEN="145" STARYCEN="148" Companion="0" StarXPos="98" StarYPos="121" StarAperture="8" StarInnerSkyRad="12" StarOuterSkyRad="16" Verbose="0" />
<primitive name="Accumulate Images" Method="OnDisk" />
<primitive name="Clean Polarization Pairs via Double Difference" fix_badpix="1" Save_diffbias="0" gpitv_diffbias="10" Save="0" debug="0" />
<primitive name="Subtract Mean Stellar Polarization" Method="Auto" InnerRadius="-1" OuterRadius="20" Fraction="1" WriteToFile="0" Filename="Stellar_Pol_Stokes.txt" Save="1" gpitv="2" />
<primitive name="Rotate North Up" Rot_Method="CUBIC" Center_Method="HEADERS" centerx="140" centery="140" pivot="0" Save="0" gpitv="0" />
<primitive name="Combine Polarization Sequence" HWPoffset="-29.14" IncludeSystemMueller="0" IncludeSkyRotation="1" PerfectHWP="0" Save="1" gpitv="10" />
</recipe>

31 changes: 31 additions & 0 deletions tests/test_pol/test_pol.py
@@ -0,0 +1,31 @@
import os
import gpipy

recipename = 'HR4796_PolTest.xml'
nfiles_expected = 14


def test_pol_hr4796(pipeline, test_dir):
    """
    End-to-end test for pol mode cube reduction.
    Reduce GPI commissioning HR 4796A dataset,
    combine individual pol cubes into a Stokes cube,
    and detect the disk in polarized light.
    """

    status, outrecipe, outfiles = pipeline.run_recipe(
        os.path.join(test_dir, recipename), rescanDB=True)

    # Did the pipeline run without error?
    assert status == 'Success', "Recipe {} failed to execute".format(recipename)

    # Did we get the output files we expected?
    assert len(outfiles) == nfiles_expected, "Number of output files does not match expected value."
    assert "./S20140422S0291_stokesdc.fits" in outfiles, "Output files didn't contain the expected Stokes cube"

    # Are the contents of that file what we expected?
    cube = gpipy.read("./S20140422S0291_stokesdc.fits")
    assert cube.filetype == 'Stokes Cube', "Wrong output file type"
    assert cube.shape[0] == 4, "Wrong cube dimensions"

    # TODO write more tests here looking at actual pixel values
