In this repo I continue work started at the 2018 Geophysics Sprint (organized by Agile Scientific ahead of the SEG Annual Meeting), which resulted in the error flag demo notebook in the in-bruges repo.
The inspiration came after watching How to be a statistical detective, a Stanford University webinar.
I presented this work as a lightning talk at the Transform 2020 virtual conference organized by the Software Underground.
The notebook demonstrates how to assess the quality of multiple predictions (from inversion and/or machine learning models) against one another, and against ground truth (for example, geophysical logs), by:
- Calculating the difference between prediction and ground truth at each sample
- Flagging statistically significant differences based on a user-defined distance (in deviation units) from either the mean difference or the median difference
- Further characterizing the error as a percentage within a specific zone, with a confidence interval estimated via bootstrapping
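The three steps above can be sketched roughly as follows. This is a minimal illustration with synthetic data, not the notebook's actual code: the variable names, the choice of the median as the center, the flag threshold `k`, and the zone boundaries are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a ground-truth log and one model prediction,
# sampled at the same depths (both hypothetical).
truth = rng.normal(2500.0, 200.0, size=500)
prediction = truth + rng.normal(0.0, 50.0, size=500)

# 1. Difference between prediction and ground truth at each sample.
diff = prediction - truth

# 2. Flag samples farther than k deviation units from the median difference.
k = 2.0                      # user-defined distance, in deviation units
center = np.median(diff)
spread = np.std(diff)
flags = np.abs(diff - center) > k * spread

# 3. Error rate within a zone of interest, with a bootstrap confidence interval.
zone = slice(100, 300)       # hypothetical zone boundaries (sample indices)
zone_flags = flags[zone]
pct_flagged = 100.0 * zone_flags.mean()

boot = [rng.choice(zone_flags, size=zone_flags.size, replace=True).mean()
        for _ in range(1000)]
ci_low, ci_high = 100.0 * np.percentile(boot, [2.5, 97.5])
print(f"{pct_flagged:.1f}% flagged in zone; 95% CI [{ci_low:.1f}, {ci_high:.1f}]%")
```

Swapping `np.median(diff)` for `np.mean(diff)` gives the mean-based flagging variant mentioned above.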
To create the conda environment for this tutorial, run:

```
conda env create -f environment.yml
```
The figure used is copyright of the Canadian Society of Exploration Geophysicists.
The rest is open source, and copyright of Matteo Niccoli, 2020:
Text is licensed under a CC BY Creative Commons License.
Code is licensed under the Apache License, Version 2.0.