
Commit

Fix missing uncertainty bug (#67)
* Fix missing uncertainty bug

* Automatically update integration test validation results

---------

Co-authored-by: robbibt <[email protected]>
robbibt and robbibt authored Mar 4, 2024
1 parent cbdc270 commit afb94dd
Showing 4 changed files with 30 additions and 12 deletions.
39 changes: 28 additions & 11 deletions intertidal/elevation.py
@@ -493,7 +493,13 @@ def pixel_dem_debug(
 
 
 def pixel_uncertainty(
-    flat_ds, flat_dem, ndwi_thresh=0.1, method="mad", min_q=0.25, max_q=0.75
+    flat_ds,
+    flat_dem,
+    ndwi_thresh=0.1,
+    method="mad",
+    min_misclassified=3,
+    min_q=0.25,
+    max_q=0.75,
 ):
     """
     Calculate uncertainty bounds around a modelled elevation based on
@@ -524,8 +530,14 @@ def pixel_uncertainty(
         taking upper/lower tide height quantiles of misclassified points.
         Defaults to "mad" for Median Absolute Deviation; use "quantile"
         to use quantile calculation instead.
+    min_misclassified : int, optional
+        If `method == "mad"`: this sets the minimum number of misclassified
+        observations required to calculate a valid MAD uncertainty. Pixels
+        with fewer misclassified observations will be assigned an output
+        uncertainty of 0 metres (reflecting how successfully the provided
+        elevation and NDWI threshold divide observations into dry and wet).
     min_q, max_q : float, optional
-        If `method == "quantile": the minimum and maximum quantiles used
+        If `method == "quantile"`: the minimum and maximum quantiles used
         to estimate uncertainty bounds based on misclassified points.
         Defaults to interquartile range, or 0.25, 0.75. This provides a
         balance between capturing the range of uncertainty at each
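The two `method` options documented above can be illustrated with a single-pixel, NumPy-only sketch. All numbers are made up, and the real function operates on full xarray datasets (its quantile logic is only partly visible in this diff), so treat this purely as an illustration of the idea:

```python
import numpy as np

elevation = 0.5                      # modelled elevation for one pixel (m)
tide_m = np.array([0.2, 0.9, 1.1])   # tide heights of misclassified obs (m)
min_misclassified = 3

# "mad": Median Absolute Deviation of the misclassified tide heights from
# the modelled elevation, zeroed when there are too few misclassified obs
mad = np.median(np.abs(tide_m - elevation))
if tide_m.size < min_misclassified:
    mad = 0.0
uncertainty_low, uncertainty_high = elevation - mad, elevation + mad  # 0.1, 0.9

# "quantile": lower/upper tide-height quantiles of the misclassified obs
quantile_low, quantile_high = np.quantile(tide_m, [0.25, 0.75])  # 0.55, 1.0
```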
@@ -554,14 +566,26 @@ def pixel_uncertainty(
     misclassified_all = misclassified_wet | misclassified_dry
     misclassified_ds = flat_ds.where(misclassified_all)
 
+    # Calculate sum of misclassified points
+    misclassified_sum = (
+        misclassified_all.sum(dim="time")
+        .rename("misclassified_px_count")
+        .where(~flat_dem.elevation.isnull())
+    )
+
     # Calculate uncertainty by taking the Median Absolute Deviation of
     # all misclassified points.
     if method == "mad":
         # Calculate median of absolute deviations
-        # TODO: Account for large MAD on pixels with very few
-        # misclassified points. Set < n misclassified points to 0 MAD?
         mad = abs(misclassified_ds.tide_m - flat_dem.elevation).median(dim="time")
 
+        # Set any pixels with < n misclassified points to 0 MAD. This
+        # avoids extreme MAD values being calculated when we have only
+        # a small set of misclassified observations, as well as missing
+        # data caused by being unable to calculate MAD on zero
+        # misclassified observations.
+        mad = mad.where(misclassified_sum >= min_misclassified, 0)
+
         # Calculate low and high bounds
         uncertainty_low = flat_dem.elevation - mad
         uncertainty_high = flat_dem.elevation + mad
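The `mad.where(..., 0)` guard added above is the substance of the fix. A minimal, self-contained xarray sketch of the same pattern, using synthetic arrays in place of the real `misclassified_ds.tide_m - flat_dem.elevation` deviations:

```python
import xarray as xr

min_misclassified = 3
misclassified_all = xr.DataArray(
    [[True, True, True, False], [True, False, False, False]],
    dims=("z", "time"),
)
deviation = xr.DataArray(
    [[0.3, 0.4, 0.6, 2.0], [1.8, 0.2, 0.1, 0.4]],
    dims=("z", "time"),
)

# MAD over time, restricted to misclassified observations
mad = abs(deviation.where(misclassified_all)).median(dim="time")

# Zero out pixels with too few misclassified observations, avoiding
# extreme MADs (or all-NaN results) from tiny samples
misclassified_sum = misclassified_all.sum(dim="time")
mad = mad.where(misclassified_sum >= min_misclassified, 0)
print(mad.values)  # pixel 0 -> 0.4, pixel 1 -> 0.0
```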
@@ -589,13 +613,6 @@ def pixel_uncertainty(
     # Subtract low from high DEM to summarise uncertainty range
     dem_flat_uncertainty = dem_flat_high - dem_flat_low
 
-    # Calculate sum of misclassified points
-    misclassified_sum = (
-        misclassified_all.sum(dim="time")
-        .rename("misclassified_px_count")
-        .where(~flat_dem.elevation.isnull())
-    )
-
     return (
         dem_flat_low,
         dem_flat_high,
2 changes: 1 addition & 1 deletion tests/README.md
@@ -10,7 +10,7 @@ Integration tests
 This directory contains tests that are run to verify that DEA Intertidal code runs correctly. The ``test_intertidal.py`` file runs a small-scale full workflow analysis over an intertidal flat in the Gulf of Carpentaria using the DEA Intertidal [Command Line Interface (CLI) tools](../notebooks/Intertidal_CLI.ipynb), and compares these results against a LiDAR validation DEM to produce some simple accuracy metrics.
 
-The latest integration test completed at **2024-03-04 13:57**. Compared to the previous run, it had an:
+The latest integration test completed at **2024-03-04 15:35**. Compared to the previous run, it had an:
 - RMSE accuracy of **0.14 m ( :heavy_minus_sign: no change)**
 - MAE accuracy of **0.12 m ( :heavy_minus_sign: no change)**
 - Bias of **0.12 m ( :heavy_minus_sign: no change)**
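The RMSE, MAE and Bias figures above are assumed to follow their standard definitions; a minimal sketch with synthetic stand-ins for the modelled and LiDAR elevations (the integration test's own metric code is not shown in this diff):

```python
import numpy as np

modelled = np.array([0.10, 0.35, -0.20, 0.55])     # modelled intertidal DEM (m), synthetic
validation = np.array([0.05, 0.30, -0.25, 0.40])   # LiDAR validation DEM (m), synthetic

error = modelled - validation
rmse = np.sqrt(np.mean(error**2))
mae = np.mean(np.abs(error))
bias = np.mean(error)
print(f"RMSE {rmse:.2f} m, MAE {mae:.2f} m, Bias {bias:.2f} m")
```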
1 change: 1 addition & 0 deletions tests/validation.csv
@@ -37,3 +37,4 @@ time,Correlation,RMSE,MAE,R-squared,Bias,Regression slope
 2024-03-04 00:51:34.386727+00:00,0.976,0.141,0.121,0.952,0.117,1.109
 2024-03-04 02:24:13.813543+00:00,0.976,0.141,0.121,0.952,0.117,1.109
 2024-03-04 02:57:33.436682+00:00,0.976,0.141,0.121,0.952,0.117,1.109
+2024-03-04 04:35:38.183061+00:00,0.976,0.141,0.121,0.952,0.117,1.109
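Each integration-test run appends one row to this log. A hypothetical snippet for inspecting the latest entry (column names come from the CSV header above; the path assumes the repository layout):

```python
import pandas as pd

# Read the running validation log and pull out the most recent run
results = pd.read_csv("tests/validation.csv", parse_dates=["time"])
latest = results.sort_values("time").iloc[-1]
print(latest[["RMSE", "MAE", "Bias"]])
```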
Binary file modified tests/validation.jpg
