Commit 6f368fe

Updating pyCSEP docs for commit c13a77e from refs/heads/main by fabiolsilva

fabiolsilva committed Oct 23, 2024

Showing 457 changed files with 69,277 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 66e26ed9a37ab31bfcec705443fb245a
tags: 645f666f9bcd5a90fca523b33c5a78b7
Empty file added .nojekyll
Empty file.
1 change: 1 addition & 0 deletions CNAME
@@ -0,0 +1 @@
docs.cseptesting.org
1 change: 1 addition & 0 deletions README.md
@@ -0,0 +1 @@
Empty README.md for documentation cache.
@@ -0,0 +1,230 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Grid-based Forecast Evaluation\n\nThis example demonstrates how to evaluate a grid-based and time-independent forecast. Grid-based\nforecasts assume the variability of the forecasts is Poissonian. Therefore, Poisson-based evaluations\nshould be used to evaluate grid-based forecasts.\n\nOverview:\n 1. Define forecast properties (time horizon, spatial region, etc).\n 2. Obtain evaluation catalog\n 3. Apply Poissonian evaluations for grid-based forecasts\n 4. Store evaluation results using JSON format\n 5. Visualize evaluation results\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load required libraries\n\nMost of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the\n:mod:`csep.utils` subpackage.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import csep\nfrom csep.core import poisson_evaluations as poisson\nfrom csep.utils import datasets, time_utils, plots\n\n# Needed to show plots from the terminal\nimport matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define forecast properties\n\nWe choose a `time-independent-forecast` to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note,\nthe start and end date should be chosen based on the creation of the forecast. This is important for time-independent forecasts\nbecause they can be rescale to any arbitrary time period.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from csep.utils.stats import get_Kagan_I1_score\n\nstart_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0')\nend_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0')"
]
},
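{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next cell is an illustrative aside rather than part of the original example: because the expected counts of a time-independent forecast scale linearly with the forecast duration, an annual rate can be converted to the five-year window defined above. The `annual_rate` value is a hypothetical placeholder.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Illustration only: time-independent expected counts scale linearly with duration.\nduration_years = (end_date - start_date).days / 365.25\nannual_rate = 0.02  # hypothetical expected count for a single cell over one year\nexpected_count = annual_rate * duration_years\nprint(\"Duration: {:.2f} yr, expected count over the window: {:.3f}\".format(duration_years, expected_count))"
]
},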
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load forecast\n\nFor this example, we provide the example forecast data set along with the main repository. The filepath is relative\nto the root directory of the package. You can specify any file location for your forecasts.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"forecast = csep.load_gridded_forecast(datasets.helmstetter_aftershock_fname,\n start_date=start_date,\n end_date=end_date,\n name='helmstetter_aftershock')"
]
},
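{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick, optional sanity check (not part of the original example), the cell below assumes the gridded rates are exposed through the forecast's `data` attribute as a NumPy array of expected rates per spatial/magnitude bin.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hedged sketch: assumes the rates are available as a NumPy array on `forecast.data`.\nprint(\"Rate array shape:\", forecast.data.shape)\nprint(\"Total expected number of events:\", forecast.data.sum())"
]
},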
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load evaluation catalog\n\nWe will download the evaluation catalog from ComCat (this step requires an internet connection). We can use the ComCat API\nto filter the catalog in both time and magnitude. See the catalog filtering example, for more information on how to\nfilter the catalog in space and time manually.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Querying comcat catalog\")\ncatalog = csep.query_comcat(forecast.start_time, forecast.end_time, min_magnitude=forecast.min_magnitude)\nprint(catalog)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filter evaluation catalog in space\n\nWe need to remove events in the evaluation catalog outside the valid region specified by the forecast.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"catalog = catalog.filter_spatial(forecast.region)\nprint(catalog)"
]
},
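{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional aside (not part of the original example), the cell below summarizes the filtered catalog. It assumes the catalog exposes `get_number_of_events()` and `get_magnitudes()` accessors; treat it as an illustrative sketch.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n\n# Hedged sketch: assumes get_number_of_events() and get_magnitudes() are available.\nmags = catalog.get_magnitudes()\nprint(\"Events kept after spatial filtering:\", catalog.get_number_of_events())\nprint(\"Magnitude range: {:.2f} to {:.2f}\".format(np.min(mags), np.max(mags)))"
]
},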
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compute Poisson spatial test\n\nSimply call the :func:`csep.core.poisson_evaluations.spatial_test` function to evaluate the forecast using the specified\nevaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose\noption prints the status of the simulations to the standard output.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"spatial_test_result = poisson.spatial_test(forecast, catalog)"
]
},
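{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before storing the result, it can help to inspect its summary values. The cell below assumes the returned evaluation result exposes `quantile` and `observed_statistic` attributes; it is an illustrative sketch only.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hedged sketch: assumes the result object carries `quantile` and `observed_statistic`.\nprint(\"Spatial test quantile:\", spatial_test_result.quantile)\nprint(\"Observed spatial log-likelihood:\", spatial_test_result.observed_statistic)"
]
},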
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Store evaluation results\n\nPyCSEP provides easy ways of storing objects to a JSON format using :func:`csep.write_json`. The evaluations can be read\nback into the program for plotting using :func:`csep.load_evaluation_result`.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"csep.write_json(spatial_test_result, 'example_spatial_test.json')"
]
},
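{
"cell_type": "markdown",
"metadata": {},
"source": [
"To illustrate the round trip described above, the stored result can be read back with :func:`csep.load_evaluation_result` and plotted just like the in-memory result.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Read the stored evaluation back from disk (function referenced in the text above).\nloaded_result = csep.load_evaluation_result('example_spatial_test.json')\nprint(loaded_result)"
]
},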
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plot spatial test results\n\nWe provide the function :func:`csep.utils.plotting.plot_poisson_consistency_test` to visualize the evaluation results from\nconsistency tests.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"ax = plots.plot_poisson_consistency_test(spatial_test_result,\n plot_args={'xlabel': 'Spatial likelihood'})\nplt.show()"
]
},
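{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to keep the figure, it can be written to disk with standard Matplotlib calls. This assumes the plotting function returns a Matplotlib `Axes`; the filename is arbitrary.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Save the consistency-test figure using standard Matplotlib (filename is arbitrary).\nax.get_figure().savefig('example_spatial_test.png', dpi=300, bbox_inches='tight')"
]
},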
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plot ROC Curves\n\nWe can also plot the Receiver operating characteristic (ROC) Curves based on forecast and testing-catalog.\nIn the figure below, True Positive Rate is the normalized cumulative forecast rate, after sorting cells in decreasing order of rate.\nThe \u201cFalse Positive Rate\u201d is the normalized cumulative area.\nThe dashed line is the ROC curve for a uniform forecast, meaning the likelihood for an earthquake to occur at any position is the same.\nThe further the ROC curve of a forecast is to the uniform forecast, the specific the forecast is.\nWhen comparing the forecast ROC curve against a catalog, one can evaluate if the forecast is more or less specific (or smooth) at different level or seismic rate.\n\nNote: This figure just shows an example of plotting an ROC curve with a catalog forecast.\n If \"linear=True\" the diagram is represented using a linear x-axis.\n If \"linear=False\" the diagram is represented using a logarithmic x-axis.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Plotting concentration ROC curve\")\n_= plots.plot_concentration_ROC_diagram(forecast, catalog, linear=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Plot ROC and Molchan curves using the alarm-based approach\n -----------------------\nIn this script, we generate ROC diagrams and Molchan diagrams using the alarm-based approach to evaluate the predictive\nperformance of models. This method exploits contingency table analysis to evaluate the predictive capabilities of\nforecasting models. By analysing the contingency table data, we determine the ROC curve and Molchan trajectory and\nestimate the Area Skill Score to assess the accuracy and reliability of the prediction models. The generated graphs\nvisually represent the prediction performance.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Note: If \"linear=True\" the diagram is represented using a linear x-axis.\n# If \"linear=False\" the diagram is represented using a logarithmic x-axis.\n\nprint(\"Plotting ROC curve from the contingency table\")\n# Set linear True to obtain a linear x-axis, False to obtain a logical x-axis.\n_ = plots.plot_ROC_diagram(forecast, catalog, linear=True)\n\nprint(\"Plotting Molchan curve from the contingency table and the Area Skill Score\")\n# Set linear True to obtain a linear x-axis, False to obtain a logical x-axis.\n_ = plots.plot_Molchan_diagram(forecast, catalog, linear=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Calculate Kagan's I_1 score\n\nWe can also get the Kagan's I_1 score for a gridded forecast\n(see Kagan, YanY. [2009] Testing long-term earthquake forecasts: likelihood methods and error diagrams, Geophys. J. Int., v.177, pages 532-542).\n\n"
]
},
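{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged aside (not part of the original text), Kagan's information score is commonly written as\n\n$$I_1 = \\frac{1}{N} \\sum_{i=1}^{N} \\log_2 \\frac{\\lambda_i}{\\bar{\\lambda}},$$\n\nwhere $\\lambda_i$ is the forecast rate density in the cell containing the $i$-th observed event, $\\bar{\\lambda}$ is the rate density of a spatially uniform forecast with the same total rate, and $N$ is the number of observed events; larger values indicate a more informative forecast. Refer to Kagan (2009) for the exact definition used by the implementation.\n\n"
]
},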
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"I_1 = get_Kagan_I1_score(forecast, catalog)\nprint(\"I_1score is: \", I_1)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.20"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
Binary file not shown.