diff --git a/.buildinfo b/.buildinfo new file mode 100644 index 00000000..a570e8d2 --- /dev/null +++ b/.buildinfo @@ -0,0 +1,4 @@ +# Sphinx build info version 1 +# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. +config: 66e26ed9a37ab31bfcec705443fb245a +tags: 645f666f9bcd5a90fca523b33c5a78b7 diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 00000000..e69de29b diff --git a/CNAME b/CNAME new file mode 100644 index 00000000..26acf4d9 --- /dev/null +++ b/CNAME @@ -0,0 +1 @@ +docs.cseptesting.org diff --git a/README.md b/README.md new file mode 100644 index 00000000..f63467aa --- /dev/null +++ b/README.md @@ -0,0 +1 @@ +Empty README.md for documentation cache. diff --git a/_downloads/008ccb012945e14d1d427ac6d41dd460/gridded_forecast_evaluation.ipynb b/_downloads/008ccb012945e14d1d427ac6d41dd460/gridded_forecast_evaluation.ipynb new file mode 100644 index 00000000..3a7d922c --- /dev/null +++ b/_downloads/008ccb012945e14d1d427ac6d41dd460/gridded_forecast_evaluation.ipynb @@ -0,0 +1,230 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n\n# Grid-based Forecast Evaluation\n\nThis example demonstrates how to evaluate a grid-based and time-independent forecast. Grid-based\nforecasts assume the variability of the forecasts is Poissonian. Therefore, Poisson-based evaluations\nshould be used to evaluate grid-based forecasts.\n\nOverview:\n 1. Define forecast properties (time horizon, spatial region, etc).\n 2. Obtain evaluation catalog\n 3. Apply Poissonian evaluations for grid-based forecasts\n 4. Store evaluation results using JSON format\n 5. Visualize evaluation results\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load required libraries\n\nMost of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the\n:mod:`csep.utils` subpackage.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import csep\nfrom csep.core import poisson_evaluations as poisson\nfrom csep.utils import datasets, time_utils, plots\n\n# Needed to show plots from the terminal\nimport matplotlib.pyplot as plt" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Define forecast properties\n\nWe choose a `time-independent-forecast` to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note,\nthe start and end date should be chosen based on the creation of the forecast. This is important for time-independent forecasts\nbecause they can be rescale to any arbitrary time period.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "from csep.utils.stats import get_Kagan_I1_score\n\nstart_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0')\nend_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load forecast\n\nFor this example, we provide the example forecast data set along with the main repository. The filepath is relative\nto the root directory of the package. 
You can specify any file location for your forecasts.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "forecast = csep.load_gridded_forecast(datasets.helmstetter_aftershock_fname,\n start_date=start_date,\n end_date=end_date,\n name='helmstetter_aftershock')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load evaluation catalog\n\nWe will download the evaluation catalog from ComCat (this step requires an internet connection). We can use the ComCat API\nto filter the catalog in both time and magnitude. See the catalog filtering example, for more information on how to\nfilter the catalog in space and time manually.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print(\"Querying comcat catalog\")\ncatalog = csep.query_comcat(forecast.start_time, forecast.end_time, min_magnitude=forecast.min_magnitude)\nprint(catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Filter evaluation catalog in space\n\nWe need to remove events in the evaluation catalog outside the valid region specified by the forecast.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "catalog = catalog.filter_spatial(forecast.region)\nprint(catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute Poisson spatial test\n\nSimply call the :func:`csep.core.poisson_evaluations.spatial_test` function to evaluate the forecast using the specified\nevaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose\noption prints the status of the simulations to the standard output.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "spatial_test_result = poisson.spatial_test(forecast, catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Store evaluation results\n\nPyCSEP provides easy ways of storing objects to a JSON format using :func:`csep.write_json`. 
The evaluations can be read\nback into the program for plotting using :func:`csep.load_evaluation_result`.\n\n" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "csep.write_json(spatial_test_result, 'example_spatial_test.json')" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Plot spatial test results\n\nWe provide the function :func:`csep.utils.plots.plot_poisson_consistency_test` to visualize the evaluation results from\nconsistency tests.\n\n" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "ax = plots.plot_poisson_consistency_test(spatial_test_result,\n plot_args={'xlabel': 'Spatial likelihood'})\nplt.show()" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Plot ROC Curves\n\nWe can also plot the Receiver operating characteristic (ROC) curves based on the forecast and the testing catalog.\nIn the figure below, the True Positive Rate is the normalized cumulative forecast rate, after sorting cells in decreasing order of rate.\nThe \u201cFalse Positive Rate\u201d is the normalized cumulative area.\nThe dashed line is the ROC curve for a uniform forecast, meaning the likelihood for an earthquake to occur at any position is the same.\nThe further the ROC curve of a forecast is from that of the uniform forecast, the more specific the forecast is.\nWhen comparing the forecast ROC curve against a catalog, one can evaluate if the forecast is more or less specific (or smooth) at different levels of seismic rate.\n\nNote: This figure just shows an example of plotting an ROC curve with a catalog forecast.\n If \"linear=True\" the diagram is represented using a linear x-axis.\n If \"linear=False\" the diagram is represented using a logarithmic x-axis.\n\n" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print(\"Plotting concentration ROC curve\")\n_ = plots.plot_concentration_ROC_diagram(forecast, catalog, linear=True)" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Plot ROC and Molchan curves using the alarm-based approach\n\nIn this script, we generate ROC diagrams and Molchan diagrams using the alarm-based approach to evaluate the predictive\nperformance of models. This method exploits contingency table analysis to evaluate the predictive capabilities of\nforecasting models. By analysing the contingency table data, we determine the ROC curve and Molchan trajectory and\nestimate the Area Skill Score to assess the accuracy and reliability of the prediction models.
The generated graphs\nvisually represent the prediction performance.\n\n" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# Note: If \"linear=True\" the diagram is represented using a linear x-axis.\n# If \"linear=False\" the diagram is represented using a logarithmic x-axis.\n\nprint(\"Plotting ROC curve from the contingency table\")\n# Set linear True to obtain a linear x-axis, False to obtain a logarithmic x-axis.\n_ = plots.plot_ROC_diagram(forecast, catalog, linear=True)\n\nprint(\"Plotting Molchan curve from the contingency table and the Area Skill Score\")\n# Set linear True to obtain a linear x-axis, False to obtain a logarithmic x-axis.\n_ = plots.plot_Molchan_diagram(forecast, catalog, linear=True)" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Calculate Kagan's I_1 score\n\nWe can also get Kagan's I_1 score for a gridded forecast\n(see Kagan, Y. Y. [2009] Testing long-term earthquake forecasts: likelihood methods and error diagrams, Geophys. J. Int., v.177, pages 532-542).\n\n" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "I_1 = get_Kagan_I1_score(forecast, catalog)\nprint(\"I_1 score is: \", I_1)" ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.20" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file diff --git a/_downloads/0978f64941f864a1837eeea2933b21e1/working_with_catalog_forecasts.zip b/_downloads/0978f64941f864a1837eeea2933b21e1/working_with_catalog_forecasts.zip new file mode 100644 index 00000000..15afea03 Binary files /dev/null and b/_downloads/0978f64941f864a1837eeea2933b21e1/working_with_catalog_forecasts.zip differ diff --git a/_downloads/0bc4fb5c821feba53f8fc0efc0a6cd74/gridded_forecast_evaluation.py b/_downloads/0bc4fb5c821feba53f8fc0efc0a6cd74/gridded_forecast_evaluation.py new file mode 100644 index 00000000..4e38ad2d --- /dev/null +++ b/_downloads/0bc4fb5c821feba53f8fc0efc0a6cd74/gridded_forecast_evaluation.py @@ -0,0 +1,162 @@ +""" + +.. _grid-forecast-evaluation: + +Grid-based Forecast Evaluation +============================== + +This example demonstrates how to evaluate a grid-based and time-independent forecast. Grid-based +forecasts assume the variability of the forecasts is Poissonian. Therefore, Poisson-based evaluations +should be used to evaluate grid-based forecasts. + +Overview: + 1. Define forecast properties (time horizon, spatial region, etc). + 2. Obtain evaluation catalog + 3. Apply Poissonian evaluations for grid-based forecasts + 4. Store evaluation results using JSON format + 5. Visualize evaluation results +""" + +#################################################################################################################################### +# Load required libraries +# ----------------------- +# +# Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the +# :mod:`csep.utils` subpackage.
+ +import csep +from csep.core import poisson_evaluations as poisson +from csep.utils import datasets, time_utils, plots + +# Needed to show plots from the terminal +import matplotlib.pyplot as plt + +#################################################################################################################################### +# Define forecast properties +# -------------------------- +# +# We choose a :ref:`time-independent-forecast` to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note that +# the start and end dates should be chosen based on the creation of the forecast. This is important for time-independent forecasts +# because they can be rescaled to any arbitrary time period. +from csep.utils.stats import get_Kagan_I1_score + +start_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0') +end_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0') + +#################################################################################################################################### +# Load forecast +# ------------- +# +# For this example, we provide the example forecast data set along with the main repository. The filepath is relative +# to the root directory of the package. You can specify any file location for your forecasts. + +forecast = csep.load_gridded_forecast(datasets.helmstetter_aftershock_fname, + start_date=start_date, + end_date=end_date, + name='helmstetter_aftershock') + +#################################################################################################################################### +# Load evaluation catalog +# ----------------------- +# +# We will download the evaluation catalog from ComCat (this step requires an internet connection). We can use the ComCat API +# to filter the catalog in both time and magnitude. See the catalog filtering example for more information on how to +# filter the catalog in space and time manually. + +print("Querying comcat catalog") +catalog = csep.query_comcat(forecast.start_time, forecast.end_time, min_magnitude=forecast.min_magnitude) +print(catalog) + +#################################################################################################################################### +# Filter evaluation catalog in space +# ---------------------------------- +# +# We need to remove events in the evaluation catalog outside the valid region specified by the forecast. + +catalog = catalog.filter_spatial(forecast.region) +print(catalog) + +#################################################################################################################################### +# Compute Poisson spatial test +# ---------------------------- +# +# Simply call the :func:`csep.core.poisson_evaluations.spatial_test` function to evaluate the forecast using the specified +# evaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose +# option prints the status of the simulations to the standard output. + +spatial_test_result = poisson.spatial_test(forecast, catalog) + +#################################################################################################################################### +# Store evaluation results +# ------------------------ +# +# PyCSEP provides easy ways of storing objects to a JSON format using :func:`csep.write_json`. The evaluations can be read +# back into the program for plotting using :func:`csep.load_evaluation_result`.
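+ +#################################################################################################################################### +# Before writing the result to disk, the result object can also be inspected directly. A minimal sketch, assuming the +# ``quantile`` and ``observed_statistic`` attributes of the evaluation result returned by the Poisson tests: + +print(f"S-test quantile: {spatial_test_result.quantile}") +print(f"Observed spatial likelihood: {spatial_test_result.observed_statistic}")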
+ +csep.write_json(spatial_test_result, 'example_spatial_test.json') + +#################################################################################################################################### +# Plot spatial test results +# ------------------------- +# +# We provide the function :func:`csep.utils.plots.plot_poisson_consistency_test` to visualize the evaluation results from +# consistency tests. + +ax = plots.plot_poisson_consistency_test(spatial_test_result, + plot_args={'xlabel': 'Spatial likelihood'}) +plt.show() + +#################################################################################################################################### +# Plot ROC Curves +# --------------- +# +# We can also plot the Receiver operating characteristic (ROC) curves based on the forecast and the testing catalog. +# In the figure below, the True Positive Rate is the normalized cumulative forecast rate, after sorting cells in decreasing order of rate. +# The “False Positive Rate” is the normalized cumulative area. +# The dashed line is the ROC curve for a uniform forecast, meaning the likelihood for an earthquake to occur at any position is the same. +# The further the ROC curve of a forecast is from that of the uniform forecast, the more specific the forecast is. +# When comparing the forecast ROC curve against a catalog, one can evaluate if the forecast is more or less specific (or smooth) at different levels of seismic rate. +# +# Note: This figure just shows an example of plotting an ROC curve with a catalog forecast. +# If "linear=True" the diagram is represented using a linear x-axis. +# If "linear=False" the diagram is represented using a logarithmic x-axis. + + +print("Plotting concentration ROC curve") +_ = plots.plot_concentration_ROC_diagram(forecast, catalog, linear=True) + + + + +#################################################################################################################################### +# Plot ROC and Molchan curves using the alarm-based approach +# ---------------------------------------------------------- +# In this script, we generate ROC diagrams and Molchan diagrams using the alarm-based approach to evaluate the predictive +# performance of models. This method exploits contingency table analysis to evaluate the predictive capabilities of +# forecasting models. By analysing the contingency table data, we determine the ROC curve and Molchan trajectory and +# estimate the Area Skill Score to assess the accuracy and reliability of the prediction models. The generated graphs +# visually represent the prediction performance. + +# Note: If "linear=True" the diagram is represented using a linear x-axis. +# If "linear=False" the diagram is represented using a logarithmic x-axis. + +print("Plotting ROC curve from the contingency table") +# Set linear True to obtain a linear x-axis, False to obtain a logarithmic x-axis. +_ = plots.plot_ROC_diagram(forecast, catalog, linear=True) + +print("Plotting Molchan curve from the contingency table and the Area Skill Score") +# Set linear True to obtain a linear x-axis, False to obtain a logarithmic x-axis. +_ = plots.plot_Molchan_diagram(forecast, catalog, linear=True) + + + + +#################################################################################################################################### +# Calculate Kagan's I_1 score +# --------------------------- +# +# We can also get Kagan's I_1 score for a gridded forecast +# (see Kagan, Y. Y. [2009] Testing long-term earthquake forecasts: likelihood methods and error diagrams, Geophys. J. Int., v.177, pages 532-542).
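+# The I_1 score can be read as an information gain per earthquake (in bits) of the forecast relative to a spatially +# uniform reference model; larger values indicate a more informative spatial forecast. (A brief gloss following +# Kagan [2009], added here for context rather than taken from the PyCSEP source.)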
+ +I_1 = get_Kagan_I1_score(forecast, catalog) +print("I_1 score is: ", I_1) diff --git a/_downloads/0eb6ac76c59fc7312b00d5ba3019490d/plot_customizations.py b/_downloads/0eb6ac76c59fc7312b00d5ba3019490d/plot_customizations.py new file mode 100644 index 00000000..87e4de2c --- /dev/null +++ b/_downloads/0eb6ac76c59fc7312b00d5ba3019490d/plot_customizations.py @@ -0,0 +1,199 @@ +""" +Plot customizations +=================== + +This example shows how to include some advanced options in the spatial visualization +of Gridded Forecasts and Evaluation Results + +Overview: + 1. Define optional plotting arguments + 2. Set extent of maps + 3. Visualizing selected magnitude bins + 4. Plot global maps + 5. Plot multiple Evaluation Results + +""" + +################################################################################################################ +# Example 1: Spatial dataset plot arguments +# ----------------------------------------- + +#################################################################################################################################### +# **Load required libraries** + +import csep +import cartopy +import numpy +from csep.utils import datasets, plots + +import matplotlib.pyplot as plt + +#################################################################################################################################### +# **Load a Grid Forecast from the datasets** +# +forecast = csep.load_gridded_forecast(datasets.hires_ssm_italy_fname, + name='Werner, et al (2010) Italy') +#################################################################################################################################### +# **Selecting plotting arguments** +# +# Create a dictionary containing the plot arguments +args_dict = {'title': 'Italy 10 year forecast', + 'grid_labels': True, + 'borders': True, + 'feature_lw': 0.5, + 'basemap': 'ESRI_imagery', + 'cmap': 'rainbow', + 'alpha_exp': 0.8, + 'projection': cartopy.crs.Mercator()} +#################################################################################################################################### +# These arguments are, in order: +# +# * Assign a title +# * Set labels to the geographic axes +# * Draw country borders +# * Set a linewidth of 0.5 to country borders +# * Select ESRI Imagery as a basemap. +# * Assign ``'rainbow'`` as colormap. Possible values are taken from the ``matplotlib.cm`` library +# * Define 0.8 as the exponent of the transparency function (the default is 0 for constant alpha, whereas 1 is linear). +# * Pass a cartopy.crs.Projection() object as the projection of the map +# +# The complete description of plot arguments can be found in :func:`csep.utils.plots.plot_spatial_dataset` + +#################################################################################################################################### +# **Plotting the dataset** +# +# The map `extent` can be defined; otherwise, the extent of the data is used. The dictionary defined above must be passed as an argument + +ax = forecast.plot(extent=[3, 22, 35, 48], + show=True, + plot_args=args_dict) + +#################################################################################################################################### +# Example 2: Plot a global forecast and a selected magnitude bin range +# -------------------------------------------------------------------- +# +# +# **Load a Global Forecast from the datasets** +# +# A downsampled version of the `GEAR1 <http://peterbird.name/publications/2015_GEAR1/2015_GEAR1.htm>`_ forecast can be found in datasets.
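+# Once the forecast is loaded below, its basic properties can be inspected, for example with ``forecast.get_magnitudes()`` +# for the magnitude bin edges and ``forecast.region.dh`` for the spatial cell size in degrees (a short sketch using +# attributes that appear elsewhere in this example).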
+ +forecast = csep.load_gridded_forecast(datasets.gear1_downsampled_fname, + name='GEAR1 Forecast (downsampled)') + +#################################################################################################################################### +# **Filter by magnitudes** +# +# We get the rate of events of 6.15<=M_w<=7.55, matching the bounds set below + +low_bound = 6.15 +upper_bound = 7.55 +mw_bins = forecast.get_magnitudes() +mw_ind = numpy.where(numpy.logical_and( mw_bins >= low_bound, mw_bins <= upper_bound))[0] +rates_mw = forecast.data[:, mw_ind] + +#################################################################################################################################### +# We get the total rate between these magnitudes + +rate_sum = rates_mw.sum(axis=1) + +#################################################################################################################################### +# The data is stored in a 1D array, so it should be projected into the `region` 2D cartesian grid. + +rate_sum = forecast.region.get_cartesian(rate_sum) + +#################################################################################################################################### +# **Define plot arguments** +# +# We define the arguments and a global projection, centered at $lon=180$ + +plot_args = {'figsize': (10,6), 'coastline':True, 'feature_color':'black', + 'projection': cartopy.crs.Robinson(central_longitude=180.0), + 'title': forecast.name, 'grid_labels': False, + 'cmap': 'magma', + 'clabel': r'$\log_{10}\lambda\left(M_w \in [{%.2f},\,{%.2f}]\right)$ per ' + r'${%.1f}^\circ\times {%.1f}^\circ $ per forecast period' % + (low_bound, upper_bound, forecast.region.dh, forecast.region.dh)} + +#################################################################################################################################### +# **Plotting the dataset** +# To plot a global forecast, we must assign the option ``set_global=True``, which is required by cartopy to handle +# the extent of the plot internally + +ax = plots.plot_spatial_dataset(numpy.log10(rate_sum), forecast.region, + show=True, set_global=True, + plot_args=plot_args) + +#################################################################################################################################### + + + +#################################################################################################################################### +# Example 3: Plot a catalog +# ------------------------------------------- + +#################################################################################################################################### +# **Load a Catalog from ComCat** + +start_time = csep.utils.time_utils.strptime_to_utc_datetime('1995-01-01 00:00:00.0') +end_time = csep.utils.time_utils.strptime_to_utc_datetime('2015-01-01 00:00:00.0') +min_mag = 3.95 +catalog = csep.query_comcat(start_time, end_time, min_magnitude=min_mag, verbose=False) + +# **Define plotting arguments** +plot_args = {'basemap': 'ESRI_terrain', + 'markersize': 2, + 'markercolor': 'red', + 'alpha': 0.3, + 'mag_scale': 7, + 'legend': True, + 'legend_loc': 3, + 'mag_ticks': [4.0, 5.0, 6.0, 7.0]} + +#################################################################################################################################### +# These arguments are, in order: +# +# * Assign as basemap the ESRI_terrain webservice +# * Set minimum markersize of 2 with red color +# * Set a 0.3 transparency +# * mag_scale is used to exponentially scale the size with respect to
magnitude. Recommended values are 1-8 +# * Set legend to True with location 3 (lower-left corner) +# * Set a list of magnitude ticks to display in the legend +# +# The complete description of plot arguments can be found in :func:`csep.utils.plots.plot_catalog` + +#################################################################################################################################### + +# **Plot the catalog** +ax = catalog.plot(show=False, plot_args=plot_args) + + +#################################################################################################################################### +# Example 4: Plot multiple evaluation results +# ------------------------------------------- + +#################################################################################################################################### +# Load L-test results from example .json files (See +# :doc:`gridded_forecast_evaluation` for information on calculating and storing evaluation results) + +L_results = [csep.load_evaluation_result(i) for i in datasets.l_test_examples] +args = {'figsize': (6,5), + 'title': r'$\mathcal{L}-\mathrm{test}$', + 'title_fontsize': 18, + 'xlabel': 'Log-likelihood', + 'xticks_fontsize': 9, + 'ylabel_fontsize': 9, + 'linewidth': 0.8, + 'capsize': 3, + 'hbars':True, + 'tight_layout': True} + +#################################################################################################################################### +# Description of plot arguments can be found in :func:`csep.utils.plots.plot_poisson_consistency_test`. +# We set ``one_sided_lower=True`` as usual for an L-test, where the model is rejected if the observed statistic +# is located within the lower tail of the simulated distribution. +ax = plots.plot_poisson_consistency_test(L_results, one_sided_lower=True, plot_args=args) + +# Needed to show plots if running as script +plt.show() + + diff --git a/_downloads/10322ad2d70b95dda77c59916edb5071/plot_gridded_forecast.zip b/_downloads/10322ad2d70b95dda77c59916edb5071/plot_gridded_forecast.zip new file mode 100644 index 00000000..b16f82d6 Binary files /dev/null and b/_downloads/10322ad2d70b95dda77c59916edb5071/plot_gridded_forecast.zip differ diff --git a/_downloads/2515b7400434bdc3a7f98dcd18d5029f/catalog_forecast_evaluation.py b/_downloads/2515b7400434bdc3a7f98dcd18d5029f/catalog_forecast_evaluation.py new file mode 100644 index 00000000..9677f9d2 --- /dev/null +++ b/_downloads/2515b7400434bdc3a7f98dcd18d5029f/catalog_forecast_evaluation.py @@ -0,0 +1,116 @@ +""" +.. _catalog-forecast-evaluation: + +Catalog-based Forecast Evaluation +================================= + +This example shows how to evaluate a catalog-based forecast using the Number test. This test is the simplest of the +evaluations. + +Overview: + 1. Define forecast properties (time horizon, spatial region, etc). + 2. Access catalog from ComCat + 3. Filter catalog to be consistent with the forecast properties + 4. Apply catalog-based number test to catalog + 5. Visualize results for catalog-based forecast +""" + +#################################################################################################################################### +# Load required libraries +# ----------------------- +# +# Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the +# :mod:`csep.utils` subpackage.
+ +import csep +from csep.core import regions, catalog_evaluations +from csep.utils import datasets, time_utils + +#################################################################################################################################### +# Define start and end times of forecast +# -------------------------------------- +# +# Forecasts should define a time horizon in which they are valid. The choice is flexible for catalog-based forecasts, because +# the catalogs can be filtered to accommodate multiple end-times. Conceptually, these should be separate forecasts. + +start_time = time_utils.strptime_to_utc_datetime("1992-06-28 11:57:35.0") +end_time = time_utils.strptime_to_utc_datetime("1992-07-28 11:57:35.0") + +#################################################################################################################################### +# Define spatial and magnitude regions +# ------------------------------------ +# +# Before we can compute the bin-wise rates we need to define a spatial region and a set of magnitude bin edges. The magnitude +# bin edges are the lower bound (inclusive) except for the last bin, which is treated as extending to infinity. We can +# bind these to the forecast object. This can also be done by passing them as keyword arguments +# into :func:`csep.load_catalog_forecast`. + +# Magnitude bins properties +min_mw = 4.95 +max_mw = 8.95 +dmw = 0.1 + +# Create space and magnitude regions. The forecast is already filtered in space and magnitude +magnitudes = regions.magnitude_bins(min_mw, max_mw, dmw) +region = regions.california_relm_region() + +# Bind region information to the forecast (this will be used for binning of the catalogs) +space_magnitude_region = regions.create_space_magnitude_region(region, magnitudes) + +#################################################################################################################################### +# Load catalog forecast +# --------------------- +# +# To reduce the file size of this example, we've already filtered the catalogs to the appropriate magnitudes and +# spatial locations. The original forecast was computed for 1 year following the start date, so we still need to filter the +# catalog in time. We can do this by passing a list of filtering arguments to the forecast or updating the class. +# +# By default, the forecast loads catalogs on-demand, so the filters are applied as the catalog loads. On-demand means that +# until we loop over the forecast in some capacity, none of the catalogs are actually loaded. +# +# More fine-grained control and optimizations can be achieved by creating a :class:`csep.core.forecasts.CatalogForecast` directly. + +forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname, + start_time = start_time, end_time = end_time, + region = space_magnitude_region, + apply_filters = True) + +# Assign filters to forecast +forecast.filters = [f'origin_time >= {forecast.start_epoch}', f'origin_time < {forecast.end_epoch}'] + +#################################################################################################################################### +# Obtain evaluation catalog from ComCat +# ------------------------------------- +# +# The :class:`csep.core.forecasts.CatalogForecast` provides a method to compute the expected number of events in spatial cells. This +# requires a region with magnitude information. +# +# We need to filter the ComCat catalog to be consistent with the forecast.
This can be done either through the ComCat API +# or using catalog filtering strings. Here we'll use the ComCat API to make the data access quicker for this example. We +# still need to filter the observed catalog in space though. + +# Obtain Comcat catalog and filter to region. +comcat_catalog = csep.query_comcat(start_time, end_time, min_magnitude=forecast.min_magnitude) + +# Filter observed catalog using the same region as the forecast +comcat_catalog = comcat_catalog.filter_spatial(forecast.region) +print(comcat_catalog) + +# Plot the catalog +comcat_catalog.plot() + +#################################################################################################################################### +# Perform number test +# ------------------- +# +# We can perform the Number test on the catalog-based forecast using the observed catalog we obtained from Comcat. + +number_test_result = catalog_evaluations.number_test(forecast, comcat_catalog) + +#################################################################################################################################### +# Plot number test result +# ----------------------- +# +# We can create a simple visualization of the number test from the evaluation result class. + +ax = number_test_result.plot(show=True) \ No newline at end of file diff --git a/_downloads/27a58d2ad1be9430f0a41fcd4d1b96c8/working_with_catalog_forecasts.py b/_downloads/27a58d2ad1be9430f0a41fcd4d1b96c8/working_with_catalog_forecasts.py new file mode 100644 index 00000000..4b0320cc --- /dev/null +++ b/_downloads/27a58d2ad1be9430f0a41fcd4d1b96c8/working_with_catalog_forecasts.py @@ -0,0 +1,90 @@ +""" +Working with catalog-based forecasts +==================================== + +This example shows some basic interactions with catalog-based forecasts. We will load in a forecast stored in the CSEP +data format, and compute the expected rates on a 0.1° x 0.1° grid covering the state of California. We will plot the +expected rates in the spatial cells. + +Overview: + 1. Define forecast properties (time horizon, spatial region, etc). + 2. Compute the expected rates in space and magnitude bins + 3. Plot expected rates in the spatial cells +""" + +#################################################################################################################################### +# Load required libraries +# ----------------------- +# +# Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the +# :mod:`csep.utils` subpackage. + +import numpy + +import csep +from csep.core import regions +from csep.utils import datasets + +#################################################################################################################################### +# Load data forecast +# --------------------- +# +# PyCSEP contains some basic forecasts that can be used to test the functionality of the package. This forecast has already +# been filtered to the California RELM region. + +forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname) + +#################################################################################################################################### +# Define spatial and magnitude regions +# ------------------------------------ +# +# Before we can compute the bin-wise rates we need to define a spatial region and a set of magnitude bin edges. The magnitude +# bin edges are the lower bound (inclusive) except for the last bin, which is treated as extending to infinity.
We can +# bind these to the forecast object. This can also be done by passing them as keyword arguments +# into :func:`csep.load_catalog_forecast`. + +# Magnitude bins properties +min_mw = 4.95 +max_mw = 8.95 +dmw = 0.1 + +# Create space and magnitude regions +magnitudes = regions.magnitude_bins(min_mw, max_mw, dmw) +region = regions.california_relm_region() + +# Bind region information to the forecast (this will be used for binning of the catalogs) +forecast.region = regions.create_space_magnitude_region(region, magnitudes) + +#################################################################################################################################### +# Compute spatial event counts +# ---------------------------- +# +# The :class:`csep.core.forecasts.CatalogForecast` provides a method to compute the expected number of events in spatial cells. This +# requires a region with magnitude information. + +_ = forecast.get_expected_rates(verbose=True) + + +#################################################################################################################################### +# Plot expected event counts +# -------------------------- +# +# We can plot the expected event counts the same way that we plot a :class:`csep.core.forecasts.GriddedForecast` + +ax = forecast.expected_rates.plot(plot_args={'clim': [-3.5, 0]}, show=True) + +#################################################################################################################################### +# The holes in the image are due to under-sampling from the forecast. + +#################################################################################################################################### +# Quick sanity check +# ------------------ +# +# The forecasts were filtered to the spatial region so all events should be binned. We loop through each catalog in the forecast and +# count the number of events and compare that with the expected rates. The expected rate is an average in each space-magnitude bin, so +# we have to multiply this value by the number of catalogs in the forecast. + +total_events = 0 +for catalog in forecast: + total_events += catalog.event_count +numpy.testing.assert_allclose(total_events, forecast.expected_rates.sum() * forecast.n_cat) diff --git a/_downloads/3407447997731565b431a92b17054ced/quadtree_gridded_forecast_evaluation.py b/_downloads/3407447997731565b431a92b17054ced/quadtree_gridded_forecast_evaluation.py new file mode 100644 index 00000000..d9f58688 --- /dev/null +++ b/_downloads/3407447997731565b431a92b17054ced/quadtree_gridded_forecast_evaluation.py @@ -0,0 +1,200 @@ +""" + +.. _quadtree_gridded-forecast-evaluation: + +Quadtree Grid-based Forecast Evaluation +======================================= + +This example demonstrates how to create a quadtree-based single-resolution grid and a multi-resolution grid. +The multi-resolution grid is created using an earthquake catalog, in which seismic density determines the size of a grid cell. +In creating a multi-resolution grid we select a threshold (:math:`N_{max}`) as the maximum number of earthquakes in each cell. +For a single-resolution grid, we simply select a zoom-level (L) that determines the resolution of the grid. +The number of cells in a single-resolution grid is equal to :math:`4^L`. The zoom-level L=11 leads to 4.2 million cells, closest to a 0.1° x 0.1° grid. + +We use these grids to create and evaluate a time-independent forecast. Grid-based +forecasts assume the variability of the forecasts is Poissonian.
Therefore, Poisson-based evaluations +should be used to evaluate grid-based forecasts defined using quadtree regions. + +Overview: + 1. Define spatial grids + - Multi-resolution grid + - Single-resolution grid + 2. Load forecasts + - Multi-resolution forecast + - Single-resolution forecast + 3. Load evaluation catalog + 4. Apply Poissonian evaluations for both grid-based forecasts + 5. Visualize evaluation results +""" + +#################################################################################################################################### +# Load required libraries +# ----------------------- +# +# Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the +# :mod:`csep.utils` subpackage. +import numpy +import pandas +from csep.core import poisson_evaluations as poisson +from csep.utils import time_utils, plots +from csep.core.regions import QuadtreeGrid2D +from csep.core.forecasts import GriddedForecast +from csep.utils.time_utils import decimal_year_to_utc_epoch +from csep.core.catalogs import CSEPCatalog + +#################################################################################################################################### +# Load Training Catalog for Multi-resolution grid +# ----------------------------------------------- +# +# We define a multi-resolution quadtree using an earthquake catalog. We load a training catalog into PyCSEP and use that catalog to create a multi-resolution grid. +# Sometimes we do not have the catalog in the exact format required by PyCSEP, so we can read the catalog using Pandas and convert it +# into a format acceptable to PyCSEP. Then we instantiate an object of class CSEPCatalog by calling the function :func:`csep.core.catalogs.CSEPCatalog.from_dataframe` + +dfcat = pandas.read_csv('cat_train_2013.csv') +column_name_mapper = { + 'lon': 'longitude', + 'lat': 'latitude', + 'mag': 'magnitude', + 'index': 'id' + } + +# maps the column names to the dtype expected by the catalog class +dfcat = dfcat.reset_index().rename(columns=column_name_mapper) + +# create the origin_times from decimal years +dfcat['origin_time'] = dfcat.apply(lambda row: decimal_year_to_utc_epoch(row.year), axis=1) + +# create catalog from dataframe +catalog_train = CSEPCatalog.from_dataframe(dfcat) +print(catalog_train) + +#################################################################################################################################### +# Define Multi-resolution Gridded Region +# ------------------------------------------------ +# Now we define a threshold for the maximum number of earthquakes allowed per cell, i.e. Nmax, +# and call :func:`csep.core.regions.QuadtreeGrid2D.from_catalog` to create a multi-resolution grid. +# For simplicity we assume only a single magnitude bin, i.e. all the earthquakes greater than or equal to 5.95 + +mbins = numpy.array([5.95]) +Nmax = 25 +r_multi = QuadtreeGrid2D.from_catalog(catalog_train, Nmax, magnitudes=mbins) +print('Number of cells in Multi-resolution grid :', r_multi.num_nodes) + + +#################################################################################################################################### +# Define Single-resolution Gridded Region +# ---------------------------------------- +# +# Here as an example we define a single-resolution grid at zoom-level L=6. For this purpose +# we call :func:`csep.core.regions.QuadtreeGrid2D.from_single_resolution` to create a single-resolution grid.
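+# Since each quadtree cell at zoom-level L splits into four children, a single-resolution grid at level L contains +# exactly 4**L cells; for L=6 that is 4**6 = 4096, which the cell count printed below should match (a quick sanity +# check added for clarity, not part of the original example).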
+ +# For simplicity of example, we assume only a single magnitude bin, +# i.e. all the earthquakes greater than or equal to 5.95 + +mbins = numpy.array([5.95]) +r_single = QuadtreeGrid2D.from_single_resolution(6, magnitudes=mbins) +print('Number of cells in Single-Resolution grid :', r_single.num_nodes) + + +#################################################################################################################################### +# Load forecast of multi-resolution grid +# -------------------------------------- +# An example time-independent forecast has been created for this grid and is provided in the example forecast data set along with the main repository. +# We load the time-independent global forecast, which has a time horizon of 1 year. +# The filepath is relative to the root directory of the package. You can specify any file location for your forecasts. + +forecast_data = numpy.loadtxt('example_rate_zoom=EQ10L11.csv') +# Reshape forecast as an Nx1 array +forecast_data = forecast_data.reshape(-1,1) + +forecast_multi_grid = GriddedForecast(data = forecast_data, region = r_multi, magnitudes = mbins, name = 'Example Multi-res Forecast') + +# The loaded forecast is for 1 year. The test catalog we will use to evaluate is for 6 years. So we can rescale the forecast. +print(f"expected event count before scaling: {forecast_multi_grid.event_count}") +forecast_multi_grid.scale(6) +print(f"expected event count after scaling: {forecast_multi_grid.event_count}") + + + +#################################################################################################################################### +# Load forecast of single-resolution grid +# --------------------------------------- +# We have already created a time-independent global forecast with a time horizon of 1 year and provided it with the repository. +# The filepath is relative to the root directory of the package. You can specify any file location for your forecasts. + +forecast_data = numpy.loadtxt('example_rate_zoom=6.csv') +# Reshape forecast as an Nx1 array +forecast_data = forecast_data.reshape(-1,1) + +forecast_single_grid = GriddedForecast(data = forecast_data, region = r_single, + magnitudes = mbins, name = 'Example Single-res Forecast') + +# The loaded forecast is for 1 year. The test catalog we will use is for 6 years. So we can rescale the forecast. +print(f"expected event count before scaling: {forecast_single_grid.event_count}") +forecast_single_grid.scale(6) +print(f"expected event count after scaling: {forecast_single_grid.event_count}") + + + +#################################################################################################################################### +# Load evaluation catalog +# ----------------------- +# +# We have a test catalog stored here.
We can read the test catalog as a pandas dataframe and convert it into a format that is acceptable to PyCSEP. +# Then we instantiate a catalog object. + +dfcat = pandas.read_csv('cat_test.csv') + +column_name_mapper = { + 'lon': 'longitude', + 'lat': 'latitude', + 'mag': 'magnitude' + } + +# maps the column names to the dtype expected by the catalog class +dfcat = dfcat.reset_index().rename(columns=column_name_mapper) +# create the origin_times from decimal years +dfcat['origin_time'] = dfcat.apply(lambda row: decimal_year_to_utc_epoch(row.year), axis=1) + +# create catalog from dataframe +catalog = CSEPCatalog.from_dataframe(dfcat) +print(catalog) + +#################################################################################################################################### +# Compute Poisson spatial test and Number test +# ------------------------------------------------------ +# +# Simply call the :func:`csep.core.poisson_evaluations.spatial_test` and :func:`csep.core.poisson_evaluations.number_test` functions to evaluate the forecast using the specified +# evaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose +# option prints the status of the simulations to the standard output. +# +# Note: before we use the evaluation catalog, we need to link the gridded region with the observed catalog. +# Since we have two different grids here, we do it separately for each grid. + +# For the multi-resolution grid, link the region to the catalog. +catalog.region = forecast_multi_grid.region +spatial_test_multi_res_result = poisson.spatial_test(forecast_multi_grid, catalog) +number_test_multi_res_result = poisson.number_test(forecast_multi_grid, catalog) + + +# For the single-resolution grid, link the region to the catalog. +catalog.region = forecast_single_grid.region +spatial_test_single_res_result = poisson.spatial_test(forecast_single_grid, catalog) +number_test_single_res_result = poisson.number_test(forecast_single_grid, catalog) + + +#################################################################################################################################### +# Plot spatial test results +# ------------------------- +# +# We provide the function :func:`csep.utils.plots.plot_poisson_consistency_test` to visualize the evaluation results from +# consistency tests. + + +stest_result = [spatial_test_single_res_result, spatial_test_multi_res_result] +ax_spatial = plots.plot_poisson_consistency_test(stest_result, + plot_args={'xlabel': 'Spatial likelihood'}) + +ntest_result = [number_test_single_res_result, number_test_multi_res_result] +ax_number = plots.plot_poisson_consistency_test(ntest_result, + plot_args={'xlabel': 'Number of Earthquakes'}) diff --git a/_downloads/3f98bf5f06991e5f8a8631aac7b760d6/plot_customizations.ipynb b/_downloads/3f98bf5f06991e5f8a8631aac7b760d6/plot_customizations.ipynb new file mode 100644 index 00000000..782f643a --- /dev/null +++ b/_downloads/3f98bf5f06991e5f8a8631aac7b760d6/plot_customizations.ipynb @@ -0,0 +1,312 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n# Plot customizations\n\nThis example shows how to include some advanced options in the spatial visualization\nof Gridded Forecasts and Evaluation Results\n\nOverview:\n 1. Define optional plotting arguments\n 2. Set extent of maps\n 3. Visualizing selected magnitude bins\n 4. Plot global maps\n 5. 
Plot multiple Evaluation Results\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example 1: Spatial dataset plot arguments\n\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Load required libraries**\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import csep\nimport cartopy\nimport numpy\nfrom csep.utils import datasets, plots\n\nimport matplotlib.pyplot as plt" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Load a Grid Forecast from the datasets**\n\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "forecast = csep.load_gridded_forecast(datasets.hires_ssm_italy_fname,\n name='Werner, et al (2010) Italy')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Selecting plotting arguments**\n\nCreate a dictionary containing the plot arguments\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "args_dict = {'title': 'Italy 10 year forecast',\n 'grid_labels': True,\n 'borders': True,\n 'feature_lw': 0.5,\n 'basemap': 'ESRI_imagery',\n 'cmap': 'rainbow',\n 'alpha_exp': 0.8,\n 'projection': cartopy.crs.Mercator()}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "These arguments are, in order:\n\n* Assign a title\n* Set labels to the geographic axes\n* Draw country borders\n* Set a linewidth of 0.5 to country borders\n* Select ESRI Imagery as a basemap.\n* Assign ``'rainbow'`` as colormap. Possible values from from ``matplotlib.cm`` library\n* Defines 0.8 for an exponential transparency function (default is 0 for constant alpha, whereas 1 for linear).\n* An object cartopy.crs.Projection() is passed as Projection to the map\n\nThe complete description of plot arguments can be found in :func:`csep.utils.plots.plot_spatial_dataset`\n\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Plotting the dataset**\n\nThe map `extent` can be defined. Otherwise, the extent of the data would be used. 
The dictionary defined must be passed as argument\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "ax = forecast.plot(extent=[3, 22, 35, 48],\n show=True,\n plot_args=args_dict)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example 2: Plot a global forecast and a selected magnitude bin range\n\n\n**Load a Global Forecast from the datasets**\n\nA downsampled version of the [GEAR1](http://peterbird.name/publications/2015_GEAR1/2015_GEAR1.htm) forecast can be found in datasets.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "forecast = csep.load_gridded_forecast(datasets.gear1_downsampled_fname,\n name='GEAR1 Forecast (downsampled)')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Filter by magnitudes**\n\nWe get the rate of events of 5.95<=M_w<=7.5\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "low_bound = 6.15\nupper_bound = 7.55\nmw_bins = forecast.get_magnitudes()\nmw_ind = numpy.where(numpy.logical_and( mw_bins >= low_bound, mw_bins <= upper_bound))[0]\nrates_mw = forecast.data[:, mw_ind]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We get the total rate between these magnitudes\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "rate_sum = rates_mw.sum(axis=1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The data is stored in a 1D array, so it should be projected into `region` 2D cartesian grid.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "rate_sum = forecast.region.get_cartesian(rate_sum)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Define plot arguments**\n\nWe define the arguments and a global projection, centered at $lon=-180$\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "plot_args = {'figsize': (10,6), 'coastline':True, 'feature_color':'black',\n 'projection': cartopy.crs.Robinson(central_longitude=180.0),\n 'title': forecast.name, 'grid_labels': False,\n 'cmap': 'magma',\n 'clabel': r'$\\log_{10}\\lambda\\left(M_w \\in [{%.2f},\\,{%.2f}]\\right)$ per '\n r'${%.1f}^\\circ\\times {%.1f}^\\circ $ per forecast period' %\n (low_bound, upper_bound, forecast.region.dh, forecast.region.dh)}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Plotting the dataset**\nTo plot a global forecast, we must assign the option ``set_global=True``, which is required by :ref:cartopy to handle\ninternally the extent of the plot\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "ax = plots.plot_spatial_dataset(numpy.log10(rate_sum), forecast.region,\n show=True, set_global=True,\n plot_args=plot_args)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example 3: Plot a catalog\n\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Load a Catalog from ComCat**\n\n" + ] + }, + { + "cell_type": "code", + 
"execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "start_time = csep.utils.time_utils.strptime_to_utc_datetime('1995-01-01 00:00:00.0')\nend_time = csep.utils.time_utils.strptime_to_utc_datetime('2015-01-01 00:00:00.0')\nmin_mag = 3.95\ncatalog = csep.query_comcat(start_time, end_time, min_magnitude=min_mag, verbose=False)\n\n# **Define plotting arguments**\nplot_args = {'basemap': 'ESRI_terrain',\n 'markersize': 2,\n 'markercolor': 'red',\n 'alpha': 0.3,\n 'mag_scale': 7,\n 'legend': True,\n 'legend_loc': 3,\n 'mag_ticks': [4.0, 5.0, 6.0, 7.0]}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "These arguments are, in order:\n\n* Assign as basemap the ESRI_terrain webservice\n* Set minimum markersize of 2 with red color\n* Set a 0.3 transparency\n* mag_scale is used to exponentially scale the size with respect to magnitude. Recommended 1-8\n* Set legend True and location in 3 (lower-left corner)\n* Set a list of Magnitude ticks to display in the legend\n\nThe complete description of plot arguments can be found in :func:`csep.utils.plots.plot_catalog`\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# **Plot the catalog**\nax = catalog.plot(show=False, plot_args=plot_args)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example 4: Plot multiple evaluation results\n\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Load L-test results from example .json files (See\n:doc:`gridded_forecast_evaluation` for information on calculating and storing evaluation results)\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "L_results = [csep.load_evaluation_result(i) for i in datasets.l_test_examples]\nargs = {'figsize': (6,5),\n 'title': r'$\\mathcal{L}-\\mathrm{test}$',\n 'title_fontsize': 18,\n 'xlabel': 'Log-likelihood',\n 'xticks_fontsize': 9,\n 'ylabel_fontsize': 9,\n 'linewidth': 0.8,\n 'capsize': 3,\n 'hbars':True,\n 'tight_layout': True}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Description of plot arguments can be found in :func:`plot_poisson_consistency_test`.\nWe set ``one_sided_lower=True`` as usual for an L-test, where the model is rejected if the observed\nis located within the lower tail of the simulated distribution.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "ax = plots.plot_poisson_consistency_test(L_results, one_sided_lower=True, plot_args=args)\n\n# Needed to show plots if running as script\nplt.show()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.20" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file diff --git a/_downloads/42b96bae90d22bb04943dcaeb0a8d8f6/catalog_filtering.py b/_downloads/42b96bae90d22bb04943dcaeb0a8d8f6/catalog_filtering.py new file mode 100644 index 00000000..a21937ff --- /dev/null +++ b/_downloads/42b96bae90d22bb04943dcaeb0a8d8f6/catalog_filtering.py @@ -0,0 +1,100 
@@ +""" +.. tutorial-catalog-filtering + +Catalogs operations +=================== + +This example demonstrates how to perform standard operations on a catalog. This example requires an internet +connection to access ComCat. + +Overview: + 1. Load catalog from ComCat + 2. Create filtering parameters in space, magnitude, and time + 3. Filter catalog using desired filters + 4. Write catalog to standard CSEP format +""" + +#################################################################################################################################### +# Load required libraries +# ----------------------- +# +# Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the +# :mod:`csep.utils` subpackage. + +import csep +from csep.core import regions +from csep.utils import time_utils, comcat +# sphinx_gallery_thumbnail_path = '_static/CSEP2_Logo_CMYK.png' + +#################################################################################################################################### +# Load catalog +# ------------ +# +# PyCSEP provides access to the ComCat web API using :func:`csep.query_comcat` and to the Bollettino Sismico Italiano +# API using :func:`csep.query_bsi`. These functions require a :class:`datetime.datetime` to specify the start and end +# dates. + +start_time = csep.utils.time_utils.strptime_to_utc_datetime('2019-01-01 00:00:00.0') +end_time = csep.utils.time_utils.utc_now_datetime() +catalog = csep.query_comcat(start_time, end_time) +print(catalog) + +#################################################################################################################################### +# Filter to magnitude range +# ------------------------- +# +# Use the :meth:`csep.core.catalogs.AbstractBaseCatalog.filter` to filter the catalog. The filter function uses the field +# names stored in the numpy structured array. Standard fieldnames include 'magnitude', 'origin_time', 'latitude', 'longitude', +# and 'depth'. + +catalog = catalog.filter('magnitude >= 3.5') +print(catalog) + +#################################################################################################################################### +# Filter to desired time interval +# ------------------------------- +# +# We need to define desired start and end times for the catalog using a time-string format. PyCSEP uses integer times for doing +# time manipulations. Time strings can be converted into integer times using +# :func:`csep.utils.time_utils.strptime_to_utc_epoch`. The :meth:`csep.core.catalog.AbstractBaseCatalog.filter` also +# accepts a list of strings to apply multiple filters. Note: The number of events may differ if this script is ran +# at a later date than shown in this example. + +# create epoch times from time-string formats +start_epoch = csep.utils.time_utils.strptime_to_utc_epoch('2019-07-06 03:19:54.040000') +end_epoch = csep.utils.time_utils.strptime_to_utc_epoch('2019-09-21 03:19:54.040000') + +# filter catalog to magnitude ranges and times +filters = [f'origin_time >= {start_epoch}', f'origin_time < {end_epoch}'] +catalog = catalog.filter(filters) +print(catalog) + +#################################################################################################################################### +# Filter to desired spatial region +# -------------------------------- +# +# We use a circular spatial region with a radius of 3 average fault lengths as defined by the Wells and Coppersmith scaling +# relationship. 
PyCSEP provides :func:`csep.utils.spatial.generate_aftershock_region` to create an aftershock region +# based on the magnitude and epicenter of an event. +# +# We use :func:`csep.utils.comcat.get_event_by_id`, part of the ComCat API provided by the USGS, to obtain the event information +# for the M7.1 Ridgecrest mainshock. + +m71_event_id = 'ci38457511' +event = comcat.get_event_by_id(m71_event_id) +m71_epoch = time_utils.datetime_to_utc_epoch(event.time) + +# build aftershock region +aftershock_region = regions.generate_aftershock_region(event.magnitude, event.longitude, event.latitude) + +# apply new aftershock region and magnitude of completeness +catalog = catalog.filter_spatial(aftershock_region).apply_mct(event.magnitude, m71_epoch) +print(catalog) + + +#################################################################################################################################### +# Write catalog +# ------------- +# +# Use :meth:`csep.core.catalogs.AbstractBaseCatalog.write_ascii` to write the catalog into the comma separated value format. +catalog.write_ascii('2019-11-11-comcat.csv') diff --git a/_downloads/50269c12f83eb0f9399191ec427e1672/catalog_filtering.ipynb b/_downloads/50269c12f83eb0f9399191ec427e1672/catalog_filtering.ipynb new file mode 100644 index 00000000..c6a957b4 --- /dev/null +++ b/_downloads/50269c12f83eb0f9399191ec427e1672/catalog_filtering.ipynb @@ -0,0 +1,140 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n.. tutorial-catalog-filtering\n\n# Catalog operations\n\nThis example demonstrates how to perform standard operations on a catalog. This example requires an internet\nconnection to access ComCat.\n\nOverview:\n 1. Load catalog from ComCat\n 2. Create filtering parameters in space, magnitude, and time\n 3. Filter catalog using desired filters\n 4. Write catalog to standard CSEP format\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load required libraries\n\nMost of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the\n:mod:`csep.utils` subpackage.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import csep\nfrom csep.core import regions\nfrom csep.utils import time_utils, comcat\n# sphinx_gallery_thumbnail_path = '_static/CSEP2_Logo_CMYK.png'" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load catalog\n\nPyCSEP provides access to the ComCat web API using :func:`csep.query_comcat` and to the Bollettino Sismico Italiano\nAPI using :func:`csep.query_bsi`. These functions require a :class:`datetime.datetime` to specify the start and end\ndates.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "start_time = csep.utils.time_utils.strptime_to_utc_datetime('2019-01-01 00:00:00.0')\nend_time = csep.utils.time_utils.utc_now_datetime()\ncatalog = csep.query_comcat(start_time, end_time)\nprint(catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Filter to magnitude range\n\nUse the :meth:`csep.core.catalogs.AbstractBaseCatalog.filter` to filter the catalog. The filter function uses the field\nnames stored in the numpy structured array.
Standard fieldnames include 'magnitude', 'origin_time', 'latitude', 'longitude',\nand 'depth'.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "catalog = catalog.filter('magnitude >= 3.5')\nprint(catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Filter to desired time interval\n\nWe need to define desired start and end times for the catalog using a time-string format. PyCSEP uses integer times for doing\ntime manipulations. Time strings can be converted into integer times using\n:func:`csep.utils.time_utils.strptime_to_utc_epoch`. The :meth:`csep.core.catalogs.AbstractBaseCatalog.filter` also\naccepts a list of strings to apply multiple filters. Note: The number of events may differ if this script is run\nat a later date than shown in this example.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# create epoch times from time-string formats\nstart_epoch = csep.utils.time_utils.strptime_to_utc_epoch('2019-07-06 03:19:54.040000')\nend_epoch = csep.utils.time_utils.strptime_to_utc_epoch('2019-09-21 03:19:54.040000')\n\n# filter catalog to magnitude ranges and times\nfilters = [f'origin_time >= {start_epoch}', f'origin_time < {end_epoch}']\ncatalog = catalog.filter(filters)\nprint(catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Filter to desired spatial region\n\nWe use a circular spatial region with a radius of 3 average fault lengths as defined by the Wells and Coppersmith scaling\nrelationship. PyCSEP provides :func:`csep.utils.spatial.generate_aftershock_region` to create an aftershock region\nbased on the magnitude and epicenter of an event.\n\nWe use :func:`csep.utils.comcat.get_event_by_id`, part of the ComCat API provided by the USGS, to obtain the event information\nfor the M7.1 Ridgecrest mainshock.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "m71_event_id = 'ci38457511'\nevent = comcat.get_event_by_id(m71_event_id)\nm71_epoch = time_utils.datetime_to_utc_epoch(event.time)\n\n# build aftershock region\naftershock_region = regions.generate_aftershock_region(event.magnitude, event.longitude, event.latitude)\n\n# apply new aftershock region and magnitude of completeness\ncatalog = catalog.filter_spatial(aftershock_region).apply_mct(event.magnitude, m71_epoch)\nprint(catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Write catalog\n\nUse :meth:`csep.core.catalogs.AbstractBaseCatalog.write_ascii` to write the catalog into the comma separated value format.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "catalog.write_ascii('2019-11-11-comcat.csv')" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.20" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file diff --git a/_downloads/51fa1488df4465555cbdd7b8b69e1267/catalog_forecast_evaluation.ipynb
b/_downloads/51fa1488df4465555cbdd7b8b69e1267/catalog_forecast_evaluation.ipynb new file mode 100644 index 00000000..c7fc3c93 --- /dev/null +++ b/_downloads/51fa1488df4465555cbdd7b8b69e1267/catalog_forecast_evaluation.ipynb @@ -0,0 +1,158 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n\n# Catalog-based Forecast Evaluation\n\nThis example shows how to evaluate a catalog-based forecast using the Number test. This test is the simplest of the\nevaluations.\n\nOverview:\n 1. Define forecast properties (time horizon, spatial region, etc).\n 2. Access catalog from ComCat\n 3. Filter catalog to be consistent with the forecast properties\n 4. Apply catalog-based number test to catalog\n 5. Visualize results for catalog-based forecast\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load required libraries\n\nMost of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the\n:mod:`csep.utils` subpackage.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import csep\nfrom csep.core import regions, catalog_evaluations\nfrom csep.utils import datasets, time_utils" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Define start and end times of forecast\n\nForecasts should define a time horizon in which they are valid. The choice is flexible for catalog-based forecasts, because\nthe catalogs can be filtered to accommodate multiple end-times. Conceptually, these should be separate forecasts.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "start_time = time_utils.strptime_to_utc_datetime(\"1992-06-28 11:57:35.0\")\nend_time = time_utils.strptime_to_utc_datetime(\"1992-07-28 11:57:35.0\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Define spatial and magnitude regions\n\nBefore we can compute the bin-wise rates we need to define a spatial region and a set of magnitude bin edges. The magnitude\nbin edges are the lower bound (inclusive) except for the last bin, which is treated as extending to infinity. We can\nbind these to the forecast object. This can also be done by passing them as keyword arguments\ninto :func:`csep.load_catalog_forecast`.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# Magnitude bins properties\nmin_mw = 4.95\nmax_mw = 8.95\ndmw = 0.1\n\n# Create space and magnitude regions. The forecast is already filtered in space and magnitude\nmagnitudes = regions.magnitude_bins(min_mw, max_mw, dmw)\nregion = regions.california_relm_region()\n\n# Bind region information to the forecast (this will be used for binning of the catalogs)\nspace_magnitude_region = regions.create_space_magnitude_region(region, magnitudes)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load catalog forecast\n\nTo reduce the file size of this example, we've already filtered the catalogs to the appropriate magnitudes and\nspatial locations. The original forecast was computed for 1 year following the start date, so we still need to filter the\ncatalog in time.
We can do this by passing a list of filtering arguments to the forecast or updating the class.\n\nBy default, the forecast loads catalogs on-demand, so the filters are applied as the catalog loads. On-demand means that\nuntil we loop over the forecast in some capacity, none of the catalogs are actually loaded.\n\nMore fine-grained control and optimizations can be achieved by creating a :class:`csep.core.forecasts.CatalogForecast` directly.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname,\n start_time = start_time, end_time = end_time,\n region = space_magnitude_region,\n apply_filters = True)\n\n# Assign filters to forecast\nforecast.filters = [f'origin_time >= {forecast.start_epoch}', f'origin_time < {forecast.end_epoch}']" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Obtain evaluation catalog from ComCat\n\nThe :class:`csep.core.forecasts.CatalogForecast` provides a method to compute the expected number of events in spatial cells. This \nrequires a region with magnitude information.\n\nWe need to filter the ComCat catalog to be consistent with the forecast. This can be done either through the ComCat API\nor using catalog filtering strings. Here we'll use the ComCat API to make the data access quicker for this example. We\nstill need to filter the observed catalog in space though.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# Obtain Comcat catalog and filter to region.\ncomcat_catalog = csep.query_comcat(start_time, end_time, min_magnitude=forecast.min_magnitude)\n\n# Filter observed catalog using the same region as the forecast\ncomcat_catalog = comcat_catalog.filter_spatial(forecast.region)\nprint(comcat_catalog)\n\n# Plot the catalog\ncomcat_catalog.plot()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Perform number test\n\nWe can perform the Number test on the catalog-based forecast using the observed catalog we obtained from ComCat.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "number_test_result = catalog_evaluations.number_test(forecast, comcat_catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Plot number test result\n\nWe can create a simple visualization of the number test from the evaluation result class.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "ax = number_test_result.plot(show=True)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.20" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file diff --git a/_downloads/75e1f8926d0310e25148d4553776be18/plot_gridded_forecast.ipynb b/_downloads/75e1f8926d0310e25148d4553776be18/plot_gridded_forecast.ipynb new file mode 100644 index 00000000..805fc064 --- /dev/null +++
b/_downloads/75e1f8926d0310e25148d4553776be18/plot_gridded_forecast.ipynb @@ -0,0 +1,104 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n# Plotting gridded forecast\n\nThis example shows you how to load a gridded forecast stored in the default ASCII format.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load required libraries\n\nMost of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the\n:mod:`csep.utils` subpackage.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import csep\nfrom csep.utils import datasets, time_utils" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Define forecast properties\n\nWe choose a `time-independent-forecast` to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note,\nthe start and end date should be chosen based on the creation of the forecast. This is important for time-independent forecasts\nbecause they can be rescaled to any arbitrary time period.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "start_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0')\nend_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load forecast\n\nFor this example, we provide the example forecast data set along with the main repository. The filepath is relative\nto the root directory of the package. You can specify any file location for your forecasts.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "forecast = csep.load_gridded_forecast(datasets.helmstetter_mainshock_fname,\n start_date=start_date,\n end_date=end_date,\n name='helmstetter_mainshock')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Plot forecast\n\nThe forecast object provides :meth:`csep.core.forecasts.GriddedForecast.plot` to plot a gridded forecast.
This function\nreturns a matplotlib axes, so more specific attributes can be set on the figure.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "ax = forecast.plot(show=True)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.20" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file diff --git a/_downloads/89d8257ecc015d8ebf1890ed8afc2929/quadtree_gridded_forecast_evaluation.zip b/_downloads/89d8257ecc015d8ebf1890ed8afc2929/quadtree_gridded_forecast_evaluation.zip new file mode 100644 index 00000000..3aeff9dc Binary files /dev/null and b/_downloads/89d8257ecc015d8ebf1890ed8afc2929/quadtree_gridded_forecast_evaluation.zip differ diff --git a/_downloads/994277a2ceb325fb5622fe31d83884bd/gridded_forecast_evaluation.zip b/_downloads/994277a2ceb325fb5622fe31d83884bd/gridded_forecast_evaluation.zip new file mode 100644 index 00000000..ba18955f Binary files /dev/null and b/_downloads/994277a2ceb325fb5622fe31d83884bd/gridded_forecast_evaluation.zip differ diff --git a/_downloads/a42d97e40eabca8f50460c166c77590c/quadtree_gridded_forecast_evaluation.ipynb b/_downloads/a42d97e40eabca8f50460c166c77590c/quadtree_gridded_forecast_evaluation.ipynb new file mode 100644 index 00000000..6dc7839d --- /dev/null +++ b/_downloads/a42d97e40eabca8f50460c166c77590c/quadtree_gridded_forecast_evaluation.ipynb @@ -0,0 +1,194 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n\n# Quadtree Grid-based Forecast Evaluation\n\nThis example demonstrates how to create quadtree-based single-resolution and multi-resolution grids.\nA multi-resolution grid is created from an earthquake catalog, in which the seismic density determines the size of each grid cell. \nIn creating a multi-resolution grid we select a threshold ($N_{max}$) as the maximum number of earthquakes allowed in each cell.\nFor a single-resolution grid, we simply select a zoom level (L) that determines the grid resolution.\nA single-resolution grid contains $4^L$ cells; the zoom level L=11 leads to about 4.2 million cells, the closest to a 0.1 x 0.1 degree grid.\n\nWe use these grids to create and evaluate a time-independent forecast. Grid-based\nforecasts assume the variability of the forecasts is Poissonian. Therefore, Poisson-based evaluations\nshould be used to evaluate grid-based forecasts defined using quadtree regions.\n\nOverview:\n 1. Define spatial grids\n - Multi-resolution grid\n - Single-resolution grid\n 2. Load forecasts\n - Multi-resolution forecast\n - Single-resolution forecast\n 3. Load evaluation catalog\n 4. Apply Poissonian evaluations for both grid-based forecasts\n 5. Visualize evaluation results\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load required libraries\n\nMost of the core functionality can be imported from the top-level :mod:`csep` package.
Utilities are available from the\n:mod:`csep.utils` subpackage.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import numpy\nimport pandas\nfrom csep.core import poisson_evaluations as poisson\nfrom csep.utils import time_utils, plots\nfrom csep.core.regions import QuadtreeGrid2D\nfrom csep.core.forecasts import GriddedForecast\nfrom csep.utils.time_utils import decimal_year_to_utc_epoch\nfrom csep.core.catalogs import CSEPCatalog" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load Training Catalog for Multi-resolution grid\n\nWe define a multi-resolution quadtree using an earthquake catalog. We load a training catalog in CSEP and use that catalog to create a multi-resolution grid.\nSometimes the catalog is not in the exact format required by PyCSEP, so we can read it using pandas and convert it\ninto a format acceptable to PyCSEP. Then we instantiate an object of class CSEPCatalog by calling :meth:`csep.core.catalogs.CSEPCatalog.from_dataframe`\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "dfcat = pandas.read_csv('cat_train_2013.csv')\ncolumn_name_mapper = {\n 'lon': 'longitude',\n 'lat': 'latitude',\n 'mag': 'magnitude',\n 'index': 'id'\n }\n\n# maps the column names to the dtype expected by the catalog class\ndfcat = dfcat.reset_index().rename(columns=column_name_mapper)\n\n# create the origin_times from decimal years\ndfcat['origin_time'] = dfcat.apply(lambda row: decimal_year_to_utc_epoch(row.year), axis=1)\n\n# create catalog from dataframe\ncatalog_train = CSEPCatalog.from_dataframe(dfcat)\nprint(catalog_train)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Define Multi-resolution Gridded Region\nNow we define a threshold for the maximum number of earthquakes allowed per cell, i.e. Nmax,\nand call :meth:`csep.core.regions.QuadtreeGrid2D.from_catalog` to create a multi-resolution grid.\nFor simplicity we assume only a single magnitude bin, i.e. all earthquakes with magnitude greater than or equal to 5.95\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "mbins = numpy.array([5.95])\nNmax = 25\nr_multi = QuadtreeGrid2D.from_catalog(catalog_train, Nmax, magnitudes=mbins)\nprint('Number of cells in Multi-resolution grid :', r_multi.num_nodes)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Define Single-resolution Gridded Region\n\nHere, as an example, we define a single-resolution grid at zoom level L=6. For this purpose \nwe call :meth:`csep.core.regions.QuadtreeGrid2D.from_single_resolution` to create a single-resolution grid.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# For simplicity of example, we assume only a single magnitude bin, \n# i.e.
all earthquakes with magnitude greater than or equal to 5.95\n\nmbins = numpy.array([5.95])\nr_single = QuadtreeGrid2D.from_single_resolution(6, magnitudes=mbins)\nprint('Number of cells in Single-Resolution grid :', r_single.num_nodes)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load forecast of multi-resolution grid\nAn example time-independent forecast has been created for this grid and is provided in the example forecast data set along with the main repository.\nWe load the time-independent global forecast, which has a time horizon of 1 year. \nThe filepath is relative to the root directory of the package. You can specify any file location for your forecasts.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "forecast_data = numpy.loadtxt('example_rate_zoom=EQ10L11.csv')\n#Reshape forecast as Nx1 array\nforecast_data = forecast_data.reshape(-1,1)\n\nforecast_multi_grid = GriddedForecast(data = forecast_data, region = r_multi, magnitudes = mbins, name = 'Example Multi-res Forecast')\n\n#The loaded forecast is for 1 year. The test catalog we will use to evaluate is for 6 years. So we can rescale the forecast.\nprint(f\"expected event count before scaling: {forecast_multi_grid.event_count}\")\nforecast_multi_grid.scale(6)\nprint(f\"expected event count after scaling: {forecast_multi_grid.event_count}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load forecast of single-resolution grid\nWe have already created a time-independent global forecast with a time horizon of 1 year and provided it with the repository. \nThe filepath is relative to the root directory of the package. You can specify any file location for your forecasts.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "forecast_data = numpy.loadtxt('example_rate_zoom=6.csv')\n#Reshape forecast as Nx1 array\nforecast_data = forecast_data.reshape(-1,1)\n\nforecast_single_grid = GriddedForecast(data = forecast_data, region = r_single, \n magnitudes = mbins, name = 'Example Single-res Forecast')\n\n# The loaded forecast is for 1 year. The test catalog we will use is for 6 years. So we can rescale the forecast.\nprint(f\"expected event count before scaling: {forecast_single_grid.event_count}\")\nforecast_single_grid.scale(6)\nprint(f\"expected event count after scaling: {forecast_single_grid.event_count}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load evaluation catalog\n\nWe have a test catalog stored here.
We can read the test catalog as a pandas data frame and convert it into a format that is acceptable to PyCSEP.\nThen we instantiate a catalog object.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "dfcat = pandas.read_csv('cat_test.csv')\n\ncolumn_name_mapper = {\n 'lon': 'longitude',\n 'lat': 'latitude',\n 'mag': 'magnitude'\n }\n\n# maps the column names to the dtype expected by the catalog class\ndfcat = dfcat.reset_index().rename(columns=column_name_mapper)\n# create the origin_times from decimal years\ndfcat['origin_time'] = dfcat.apply(lambda row: decimal_year_to_utc_epoch(row.year), axis=1)\n\n# create catalog from dataframe\ncatalog = CSEPCatalog.from_dataframe(dfcat)\nprint(catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute Poisson spatial test and Number test\n\nSimply call the :func:`csep.core.poisson_evaluations.spatial_test` and :func:`csep.core.poisson_evaluations.number_test` functions to evaluate the forecast using the specified\nevaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose\noption prints the status of the simulations to the standard output.\n\nNote: before we use the evaluation catalog, we need to link the gridded region with the observed catalog.\nSince we have two different grids here, we do this separately for each grid.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "#For Multi-resolution grid, linking region to catalog.\ncatalog.region = forecast_multi_grid.region\nspatial_test_multi_res_result = poisson.spatial_test(forecast_multi_grid, catalog)\nnumber_test_multi_res_result = poisson.number_test(forecast_multi_grid, catalog)\n\n\n#For Single-resolution grid, linking region to catalog.\ncatalog.region = forecast_single_grid.region\nspatial_test_single_res_result = poisson.spatial_test(forecast_single_grid, catalog)\nnumber_test_single_res_result = poisson.number_test(forecast_single_grid, catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Plot spatial test results\n\nWe provide the function :func:`csep.utils.plots.plot_poisson_consistency_test` to visualize the evaluation results from\nconsistency tests.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "stest_result = [spatial_test_single_res_result, spatial_test_multi_res_result]\nax_spatial = plots.plot_poisson_consistency_test(stest_result,\n plot_args={'xlabel': 'Spatial likelihood'})\n\nntest_result = [number_test_single_res_result, number_test_multi_res_result]\nax_number = plots.plot_poisson_consistency_test(ntest_result,\n plot_args={'xlabel': 'Number of Earthquakes'})" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.20" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file diff --git a/_downloads/be56bf149ff484d5a0c60e35d52249c7/plot_gridded_forecast.py b/_downloads/be56bf149ff484d5a0c60e35d52249c7/plot_gridded_forecast.py new file mode
100644 index 00000000..84bdd46e --- /dev/null +++ b/_downloads/be56bf149ff484d5a0c60e35d52249c7/plot_gridded_forecast.py @@ -0,0 +1,49 @@ +""" +Plotting gridded forecast +========================= + +This example shows you how to load a gridded forecast stored in the default ASCII format. +""" + +#################################################################################################################################### +# Load required libraries +# ----------------------- +# +# Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the +# :mod:`csep.utils` subpackage. + +import csep +from csep.utils import datasets, time_utils + +#################################################################################################################################### +# Define forecast properties +# -------------------------- +# +# We choose a :ref:`time-independent-forecast` to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note, +# the start and end date should be chosen based on the creation of the forecast. This is important for time-independent forecasts +# because they can be rescaled to any arbitrary time period. + +start_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0') +end_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0') + +#################################################################################################################################### +# Load forecast +# ------------- +# +# For this example, we provide the example forecast data set along with the main repository. The filepath is relative +# to the root directory of the package. You can specify any file location for your forecasts. + +forecast = csep.load_gridded_forecast(datasets.helmstetter_mainshock_fname, + start_date=start_date, + end_date=end_date, + name='helmstetter_mainshock') + +#################################################################################################################################### +# Plot forecast +# ------------- +# +# The forecast object provides :meth:`csep.core.forecasts.GriddedForecast.plot` to plot a gridded forecast. This function +# returns a matplotlib axes, so more specific attributes can be set on the figure. + +ax = forecast.plot(show=True) + diff --git a/_downloads/c6ff302844ac07717a84220051b88c65/working_with_catalog_forecasts.ipynb b/_downloads/c6ff302844ac07717a84220051b88c65/working_with_catalog_forecasts.ipynb new file mode 100644 index 00000000..5a4182ea --- /dev/null +++ b/_downloads/c6ff302844ac07717a84220051b88c65/working_with_catalog_forecasts.ipynb @@ -0,0 +1,147 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n# Working with catalog-based forecasts\n\nThis example shows some basic interactions with catalog-based forecasts. We will load in a forecast stored in the CSEP\ndata format, and compute the expected rates on a 0.1\u00b0 x 0.1\u00b0 grid covering the state of California. We will plot the\nexpected rates in the spatial cells.\n\nOverview:\n 1. Define forecast properties (time horizon, spatial region, etc).\n 2. Compute the expected rates in space and magnitude bins\n 3. Plot expected rates in the spatial cells\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load required libraries\n\nMost of the core functionality can be imported from the top-level :mod:`csep` package.
Utilities are available from the\n:mod:`csep.utils` subpackage.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import numpy\n\nimport csep\nfrom csep.core import regions\nfrom csep.utils import datasets" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load data forecast\n\nPyCSEP contains some basic forecasts that can be used to test the functionality of the package. This forecast has already \nbeen filtered to the California RELM region.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Define spatial and magnitude regions\n\nBefore we can compute the bin-wise rates we need to define a spatial region and a set of magnitude bin edges. The magnitude\nbin edges are the lower bound (inclusive) except for the last bin, which is treated as extending to infinity. We can\nbind these to the forecast object. This can also be done by passing them as keyword arguments\ninto :func:`csep.load_catalog_forecast`.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# Magnitude bins properties\nmin_mw = 4.95\nmax_mw = 8.95\ndmw = 0.1\n\n# Create space and magnitude regions\nmagnitudes = regions.magnitude_bins(min_mw, max_mw, dmw)\nregion = regions.california_relm_region()\n\n# Bind region information to the forecast (this will be used for binning of the catalogs)\nforecast.region = regions.create_space_magnitude_region(region, magnitudes)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute spatial event counts\n\nThe :class:`csep.core.forecasts.CatalogForecast` provides a method to compute the expected number of events in spatial cells. This \nrequires a region with magnitude information. \n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "_ = forecast.get_expected_rates(verbose=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Plot expected event counts\n\nWe can plot the expected event counts the same way that we plot a :class:`csep.core.forecasts.GriddedForecast`\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "ax = forecast.expected_rates.plot(plot_args={'clim': [-3.5, 0]}, show=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The holes in the image are due to under-sampling from the forecast.\n\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Quick sanity check\n\nThe forecasts were filtered to the spatial region so all events should be binned. We loop through each catalog in the forecast and\ncount the number of events and compare that with the expected rates.
The expected rate is an average in each space-magnitude bin, so\nwe have to multiply this value by the number of catalogs in the forecast.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "total_events = 0\nfor catalog in forecast:\n total_events += catalog.event_count\nnumpy.testing.assert_allclose(total_events, forecast.expected_rates.sum() * forecast.n_cat)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.20" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file diff --git a/_downloads/cdfcae4b5aede20f8604490d6c58c61f/catalog_forecast_evaluation.zip b/_downloads/cdfcae4b5aede20f8604490d6c58c61f/catalog_forecast_evaluation.zip new file mode 100644 index 00000000..f94f28be Binary files /dev/null and b/_downloads/cdfcae4b5aede20f8604490d6c58c61f/catalog_forecast_evaluation.zip differ diff --git a/_downloads/d743cbe1f6b1be2a6da2846a73e69cc2/catalog_filtering.zip b/_downloads/d743cbe1f6b1be2a6da2846a73e69cc2/catalog_filtering.zip new file mode 100644 index 00000000..dc123ea9 Binary files /dev/null and b/_downloads/d743cbe1f6b1be2a6da2846a73e69cc2/catalog_filtering.zip differ diff --git a/_downloads/e8378d016bc608e38760324968f382b0/plot_customizations.zip b/_downloads/e8378d016bc608e38760324968f382b0/plot_customizations.zip new file mode 100644 index 00000000..e15edeba Binary files /dev/null and b/_downloads/e8378d016bc608e38760324968f382b0/plot_customizations.zip differ diff --git a/_images/output_13_0.png b/_images/output_13_0.png new file mode 100644 index 00000000..d24cee8a Binary files /dev/null and b/_images/output_13_0.png differ diff --git a/_images/output_16_0.png b/_images/output_16_0.png new file mode 100644 index 00000000..ed5e5182 Binary files /dev/null and b/_images/output_16_0.png differ diff --git a/_images/output_19_0.png b/_images/output_19_0.png new file mode 100644 index 00000000..701b300c Binary files /dev/null and b/_images/output_19_0.png differ diff --git a/_images/output_22_0.png b/_images/output_22_0.png new file mode 100644 index 00000000..0b2ac5fb Binary files /dev/null and b/_images/output_22_0.png differ diff --git a/_images/output_27_0.png b/_images/output_27_0.png new file mode 100644 index 00000000..02ef235b Binary files /dev/null and b/_images/output_27_0.png differ diff --git a/_images/output_30_0.png b/_images/output_30_0.png new file mode 100644 index 00000000..6e1a5f8d Binary files /dev/null and b/_images/output_30_0.png differ diff --git a/_images/output_33_0.png b/_images/output_33_0.png new file mode 100644 index 00000000..39228816 Binary files /dev/null and b/_images/output_33_0.png differ diff --git a/_images/output_36_0.png b/_images/output_36_0.png new file mode 100644 index 00000000..678bb654 Binary files /dev/null and b/_images/output_36_0.png differ diff --git a/_images/output_6_0.png b/_images/output_6_0.png new file mode 100644 index 00000000..4e6ec6db Binary files /dev/null and b/_images/output_6_0.png differ diff --git a/_images/output_9_0.png b/_images/output_9_0.png new file mode 100644 index 00000000..6eee5a50 Binary files /dev/null and b/_images/output_9_0.png differ diff --git 
a/_images/sphx_glr_catalog_filtering_thumb.png b/_images/sphx_glr_catalog_filtering_thumb.png new file mode 100644 index 00000000..fb0d6b4c Binary files /dev/null and b/_images/sphx_glr_catalog_filtering_thumb.png differ diff --git a/_images/sphx_glr_catalog_forecast_evaluation_001.png b/_images/sphx_glr_catalog_forecast_evaluation_001.png new file mode 100644 index 00000000..0cbc9135 Binary files /dev/null and b/_images/sphx_glr_catalog_forecast_evaluation_001.png differ diff --git a/_images/sphx_glr_catalog_forecast_evaluation_002.png b/_images/sphx_glr_catalog_forecast_evaluation_002.png new file mode 100644 index 00000000..ac028e89 Binary files /dev/null and b/_images/sphx_glr_catalog_forecast_evaluation_002.png differ diff --git a/_images/sphx_glr_catalog_forecast_evaluation_thumb.png b/_images/sphx_glr_catalog_forecast_evaluation_thumb.png new file mode 100644 index 00000000..da4b7832 Binary files /dev/null and b/_images/sphx_glr_catalog_forecast_evaluation_thumb.png differ diff --git a/_images/sphx_glr_gridded_forecast_evaluation_001.png b/_images/sphx_glr_gridded_forecast_evaluation_001.png new file mode 100644 index 00000000..e7253faf Binary files /dev/null and b/_images/sphx_glr_gridded_forecast_evaluation_001.png differ diff --git a/_images/sphx_glr_gridded_forecast_evaluation_002.png b/_images/sphx_glr_gridded_forecast_evaluation_002.png new file mode 100644 index 00000000..f08eadcf Binary files /dev/null and b/_images/sphx_glr_gridded_forecast_evaluation_002.png differ diff --git a/_images/sphx_glr_gridded_forecast_evaluation_003.png b/_images/sphx_glr_gridded_forecast_evaluation_003.png new file mode 100644 index 00000000..ca648dc0 Binary files /dev/null and b/_images/sphx_glr_gridded_forecast_evaluation_003.png differ diff --git a/_images/sphx_glr_gridded_forecast_evaluation_004.png b/_images/sphx_glr_gridded_forecast_evaluation_004.png new file mode 100644 index 00000000..bce07ac0 Binary files /dev/null and b/_images/sphx_glr_gridded_forecast_evaluation_004.png differ diff --git a/_images/sphx_glr_gridded_forecast_evaluation_thumb.png b/_images/sphx_glr_gridded_forecast_evaluation_thumb.png new file mode 100644 index 00000000..ed9d7986 Binary files /dev/null and b/_images/sphx_glr_gridded_forecast_evaluation_thumb.png differ diff --git a/_images/sphx_glr_plot_customizations_001.png b/_images/sphx_glr_plot_customizations_001.png new file mode 100644 index 00000000..20dbfa4f Binary files /dev/null and b/_images/sphx_glr_plot_customizations_001.png differ diff --git a/_images/sphx_glr_plot_customizations_002.png b/_images/sphx_glr_plot_customizations_002.png new file mode 100644 index 00000000..e4b3fcbd Binary files /dev/null and b/_images/sphx_glr_plot_customizations_002.png differ diff --git a/_images/sphx_glr_plot_customizations_003.png b/_images/sphx_glr_plot_customizations_003.png new file mode 100644 index 00000000..e3bebd68 Binary files /dev/null and b/_images/sphx_glr_plot_customizations_003.png differ diff --git a/_images/sphx_glr_plot_customizations_004.png b/_images/sphx_glr_plot_customizations_004.png new file mode 100644 index 00000000..9b0ee764 Binary files /dev/null and b/_images/sphx_glr_plot_customizations_004.png differ diff --git a/_images/sphx_glr_plot_customizations_thumb.png b/_images/sphx_glr_plot_customizations_thumb.png new file mode 100644 index 00000000..b69348ec Binary files /dev/null and b/_images/sphx_glr_plot_customizations_thumb.png differ diff --git a/_images/sphx_glr_plot_gridded_forecast_001.png 
b/_images/sphx_glr_plot_gridded_forecast_001.png new file mode 100644 index 00000000..62bd38cd Binary files /dev/null and b/_images/sphx_glr_plot_gridded_forecast_001.png differ diff --git a/_images/sphx_glr_plot_gridded_forecast_thumb.png b/_images/sphx_glr_plot_gridded_forecast_thumb.png new file mode 100644 index 00000000..10678f0e Binary files /dev/null and b/_images/sphx_glr_plot_gridded_forecast_thumb.png differ diff --git a/_images/sphx_glr_quadtree_gridded_forecast_evaluation_001.png b/_images/sphx_glr_quadtree_gridded_forecast_evaluation_001.png new file mode 100644 index 00000000..768f0078 Binary files /dev/null and b/_images/sphx_glr_quadtree_gridded_forecast_evaluation_001.png differ diff --git a/_images/sphx_glr_quadtree_gridded_forecast_evaluation_002.png b/_images/sphx_glr_quadtree_gridded_forecast_evaluation_002.png new file mode 100644 index 00000000..fa94d866 Binary files /dev/null and b/_images/sphx_glr_quadtree_gridded_forecast_evaluation_002.png differ diff --git a/_images/sphx_glr_quadtree_gridded_forecast_evaluation_thumb.png b/_images/sphx_glr_quadtree_gridded_forecast_evaluation_thumb.png new file mode 100644 index 00000000..58ea01ad Binary files /dev/null and b/_images/sphx_glr_quadtree_gridded_forecast_evaluation_thumb.png differ diff --git a/_images/sphx_glr_working_with_catalog_forecasts_001.png b/_images/sphx_glr_working_with_catalog_forecasts_001.png new file mode 100644 index 00000000..1400a1e6 Binary files /dev/null and b/_images/sphx_glr_working_with_catalog_forecasts_001.png differ diff --git a/_images/sphx_glr_working_with_catalog_forecasts_thumb.png b/_images/sphx_glr_working_with_catalog_forecasts_thumb.png new file mode 100644 index 00000000..32cebfdc Binary files /dev/null and b/_images/sphx_glr_working_with_catalog_forecasts_thumb.png differ diff --git a/_modules/csep.html b/_modules/csep.html new file mode 100644 index 00000000..1046ba2f --- /dev/null +++ b/_modules/csep.html @@ -0,0 +1,712 @@
+ csep — pyCSEP v0.6.3 documentation
Source code for csep
+import json
+import os
+import time
+
+from csep._version import __version__
+
+from csep.core import forecasts
+from csep.core import catalogs
+from csep.core import poisson_evaluations
+from csep.core import catalog_evaluations
+from csep.core import regions
+from csep.core.repositories import (
+    load_json,
+    write_json
+)
+
+from csep.core.exceptions import CSEPCatalogException
+
+from csep.utils import datasets
+from csep.utils import readers
+
+from csep.core.forecasts import GriddedForecast, CatalogForecast
+from csep.models import (
+    EvaluationResult,
+    CatalogNumberTestResult,
+    CatalogSpatialTestResult,
+    CatalogMagnitudeTestResult,
+    CatalogPseudolikelihoodTestResult,
+    CalibrationTestResult
+)
+
+from csep.utils.time_utils import (
+    utc_now_datetime,
+    strptime_to_utc_datetime,
+    datetime_to_utc_epoch,
+    epoch_time_to_utc_datetime,
+    utc_now_epoch
+)
+
+# this defines what is imported on a `from csep import *`
+__all__ = [
+    'load_json',
+    'write_json',
+    'catalogs',
+    'datasets',
+    'regions',
+    'poisson_evaluations',
+    'catalog_evaluations',
+    'forecasts',
+    'load_stochastic_event_sets',
+    'load_catalog',
+    'query_comcat',
+    'load_evaluation_result',
+    'load_gridded_forecast',
+    'load_catalog_forecast',
+    'utc_now_datetime',
+    'strptime_to_utc_datetime',
+    'datetime_to_utc_epoch',
+    'epoch_time_to_utc_datetime',
+    'utc_now_epoch',
+    '__version__'
+]
+
+
+
+def load_stochastic_event_sets(filename, type='csv', format='native',
+                               **kwargs):
+    """ General function to load stochastic event sets
+
+    This function returns a generator to iterate through a collection of catalogs.
+    To load a forecast and include metadata use :func:`csep.load_catalog_forecast`.
+
+    Args:
+        filename (str): name of file or directory where stochastic event sets live.
+        type (str): either 'ucerf3' or 'csv' depending on the type of observed_catalog to load
+        format (str): ('csep' or 'native'); if 'native', catalogs are not converted to csep format.
+        kwargs (dict): see the documentation of the class corresponding to the type you selected
+                       for the kwargs options
+
+    Returns:
+        (generator): :class:`~csep.core.catalogs.AbstractBaseCatalog`
+
+    """
+    if type not in ('ucerf3', 'csv'):
+        raise ValueError("type must be one of the following: ('ucerf3', 'csv')")
+
+    # use mapping to dispatch to correct function
+    # in general, stochastic event sets are loaded with classmethods and single catalogs use the
+    # constructor
+    mapping = {'ucerf3': catalogs.UCERF3Catalog.load_catalogs,
+               'csv': catalogs.CSEPCatalog.load_ascii_catalogs}
+
+    # dispatch to proper loading function
+    result = mapping[type](filename, **kwargs)
+
+    # yield catalogs from the underlying loader, converting format if requested
+    while True:
+        try:
+            catalog = next(result)
+        except StopIteration:
+            return
+        except Exception:
+            raise
+        if format == 'native':
+            yield catalog
+        elif format == 'csep':
+            yield catalog.get_csep_format()
+        else:
+            raise ValueError('format must be either "native" or "csep"')
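+
+# Usage sketch (editorial addition, not part of the csep source). Assuming a
+# hypothetical CSEP ascii file 'event_sets.csv' containing multiple catalogs,
+# the generator can be consumed lazily:
+#
+#     for catalog in load_stochastic_event_sets('event_sets.csv', type='csv'):
+#         print(catalog.event_count)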
+ + + +
+def load_catalog(filename, type='csep-csv', format='native', loader=None,
+                 apply_filters=False, **kwargs):
+    """ General function to load single catalog
+
+    See corresponding class documentation for additional parameters.
+
+    Args:
+        type (str): ('ucerf3', 'csep-csv', 'zmap', 'jma-csv', 'ndk', 'ingv_horus', 'ingv_emrcmt') default is 'csep-csv'
+        format (str): ('native', 'csep') determines whether the catalog should be converted into the csep
+                      formatted catalog or kept as native.
+        apply_filters (bool): if true, will apply filters and spatial filter to catalog. time-varying magnitude completeness
+                              will still need to be applied. filters kwarg should be included. see catalog
+                              documentation for more details.
+
+    Returns (:class:`~csep.core.catalogs.AbstractBaseCatalog`)
+    """
+
+    if type not in (
+            'ucerf3', 'csep-csv', 'zmap', 'jma-csv', 'ingv_horus',
+            'ingv_emrcmt',
+            'ndk') and loader is None:
+        raise ValueError(
+            "type must be one of the following: ('ucerf3', 'csep-csv', 'zmap', 'jma-csv', 'ndk', 'ingv_horus', 'ingv_emrcmt').")
+
+    # map to correct catalog class, at some point these could be abstracted into configuration file
+    # this maps a human readable string to the correct catalog class and the correct loader function
+    class_loader_mapping = {
+        'ucerf3': {
+            'class': catalogs.UCERF3Catalog,
+            'loader': None
+        },
+        'csep-csv': {
+            'class': catalogs.CSEPCatalog,
+            'loader': readers.csep_ascii
+        },
+        'zmap': {
+            'class': catalogs.CSEPCatalog,
+            'loader': readers.zmap_ascii
+        },
+        'jma-csv': {
+            'class': catalogs.CSEPCatalog,
+            'loader': readers.jma_csv,
+        },
+        'ndk': {
+            'class': catalogs.CSEPCatalog,
+            'loader': readers.ndk
+        },
+        'ingv_horus': {
+            'class': catalogs.CSEPCatalog,
+            'loader': readers.ingv_horus
+        },
+        'ingv_emrcmt': {
+            'class': catalogs.CSEPCatalog,
+            'loader': readers.ingv_emrcmt
+        }
+    }
+
+    # treat json files using the from_dict() member instead of constructor
+    catalog_class = class_loader_mapping[type]['class']
+    if os.path.splitext(filename)[-1][1:] == 'json':
+        catalog = catalog_class.load_json(filename, **kwargs)
+    else:
+        if loader is None:
+            loader = class_loader_mapping[type]['loader']
+
+        catalog = catalog_class.load_catalog(filename=filename, loader=loader,
+                                             **kwargs)
+
+    # convert to csep format if needed
+    if format == 'native':
+        return_val = catalog
+    elif format == 'csep':
+        return_val = catalog.get_csep_format()
+    else:
+        raise ValueError('format must be either "native" or "csep"')
+
+    if apply_filters:
+        try:
+            return_val = return_val.filter().filter_spatial()
+        except CSEPCatalogException:
+            return_val = return_val.filter()
+
+    return return_val
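+
+# Usage sketch (editorial addition, not part of the csep source). Assuming a
+# hypothetical ZMAP-format file 'catalog.zmap', the catalog can be loaded and
+# converted to the CSEP format in one call:
+#
+#     catalog = load_catalog('catalog.zmap', type='zmap', format='csep')
+#     print(catalog)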
+ + + +
+def query_comcat(start_time, end_time, min_magnitude=2.50,
+                 min_latitude=31.50, max_latitude=43.00,
+                 min_longitude=-125.40, max_longitude=-113.10,
+                 max_depth=1000,
+                 verbose=True,
+                 apply_filters=False, **kwargs):
+    """
+    Access Comcat catalog through web service
+
+    Args:
+        start_time: datetime object of start of catalog
+        end_time: datetime object for end of catalog
+        min_magnitude: minimum magnitude to query
+        min_latitude: min latitude of bounding box
+        max_latitude: max latitude of bounding box
+        min_longitude: min longitude of bounding box
+        max_longitude: max longitude of bounding box
+        max_depth: maximum depth of the bounding box
+        verbose (bool): print catalog summary statistics
+
+    Returns:
+        :class:`csep.core.catalogs.CSEPCatalog`
+    """
+
+    # Timezone should be in UTC
+    t0 = time.time()
+    eventlist = readers._query_comcat(start_time=start_time, end_time=end_time,
+                                      min_magnitude=min_magnitude,
+                                      min_latitude=min_latitude,
+                                      max_latitude=max_latitude,
+                                      min_longitude=min_longitude,
+                                      max_longitude=max_longitude,
+                                      max_depth=max_depth)
+    t1 = time.time()
+    comcat = catalogs.CSEPCatalog(data=eventlist,
+                                  date_accessed=utc_now_datetime(), **kwargs)
+    print("Fetched ComCat catalog in {} seconds.\n".format(t1 - t0))
+
+    if apply_filters:
+        try:
+            comcat = comcat.filter().filter_spatial()
+        except CSEPCatalogException:
+            comcat = comcat.filter()
+
+    if verbose:
+        print("Downloaded catalog from ComCat with following parameters")
+        print("Start Date: {}\nEnd Date: {}".format(str(comcat.start_time),
+                                                    str(comcat.end_time)))
+        print(
+            "Min Latitude: {} and Max Latitude: {}".format(comcat.min_latitude,
+                                                           comcat.max_latitude))
+        print("Min Longitude: {} and Max Longitude: {}".format(
+            comcat.min_longitude, comcat.max_longitude))
+        print("Min Magnitude: {}".format(comcat.min_magnitude))
+        print(f"Found {comcat.event_count} events in the ComCat catalog.")
+
+    return comcat
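+
+# Usage sketch (editorial addition, not part of the csep source). Query one
+# year of M>=4.0 events over the default California bounding box; requires an
+# internet connection:
+#
+#     t0 = strptime_to_utc_datetime('2019-01-01 00:00:00.0')
+#     t1 = strptime_to_utc_datetime('2020-01-01 00:00:00.0')
+#     catalog = query_comcat(t0, t1, min_magnitude=4.0)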
+ + + +
+def query_bsi(start_time, end_time, min_magnitude=2.50,
+              min_latitude=32.0, max_latitude=50.0,
+              min_longitude=2.0, max_longitude=21.0,
+              max_depth=1000,
+              verbose=True,
+              apply_filters=False, **kwargs):
+    """
+    Access BSI catalog through web service
+
+    Args:
+        start_time: datetime object of start of catalog
+        end_time: datetime object for end of catalog
+        min_magnitude: minimum magnitude to query
+        min_latitude: min latitude of bounding box
+        max_latitude: max latitude of bounding box
+        min_longitude: min longitude of bounding box
+        max_longitude: max longitude of bounding box
+        max_depth: maximum depth of the bounding box
+        verbose (bool): print catalog summary statistics
+
+    Returns:
+        :class:`csep.core.catalogs.CSEPCatalog`
+    """
+
+    # Timezone should be in UTC
+    t0 = time.time()
+    eventlist = readers._query_bsi(start_time=start_time, end_time=end_time,
+                                   min_magnitude=min_magnitude,
+                                   min_latitude=min_latitude,
+                                   max_latitude=max_latitude,
+                                   min_longitude=min_longitude,
+                                   max_longitude=max_longitude,
+                                   max_depth=max_depth)
+    t1 = time.time()
+    bsi = catalogs.CSEPCatalog(data=eventlist,
+                               date_accessed=utc_now_datetime(), **kwargs)
+    print("Fetched BSI catalog in {} seconds.\n".format(t1 - t0))
+
+    if apply_filters:
+        try:
+            bsi = bsi.filter().filter_spatial()
+        except CSEPCatalogException:
+            bsi = bsi.filter()
+
+    if verbose:
+        print(
+            "Downloaded catalog from Bollettino Sismico Italiano (BSI) with following parameters")
+        print("Start Date: {}\nEnd Date: {}".format(str(bsi.start_time),
+                                                    str(bsi.end_time)))
+        print("Min Latitude: {} and Max Latitude: {}".format(bsi.min_latitude,
+                                                             bsi.max_latitude))
+        print(
+            "Min Longitude: {} and Max Longitude: {}".format(bsi.min_longitude,
+                                                             bsi.max_longitude))
+        print("Min Magnitude: {}".format(bsi.min_magnitude))
+        print(f"Found {bsi.event_count} events in the BSI catalog.")
+
+    return bsi
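+
+# Usage sketch (editorial addition, not part of the csep source). The call
+# mirrors query_comcat, with default bounds covering Italy; requires an
+# internet connection:
+#
+#     t0 = strptime_to_utc_datetime('2020-01-01 00:00:00.0')
+#     t1 = strptime_to_utc_datetime('2021-01-01 00:00:00.0')
+#     catalog = query_bsi(t0, t1, min_magnitude=3.0)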
+
+
+
+def query_gns(start_time, end_time, min_magnitude=2.950,
+              min_latitude=-47, max_latitude=-34,
+              min_longitude=164, max_longitude=180,
+              max_depth=45.5,
+              verbose=True,
+              apply_filters=False, **kwargs):
+    """
+    Access GNS Science catalog through web service
+
+    Args:
+        start_time: datetime object of start of catalog
+        end_time: datetime object for end of catalog
+        min_magnitude: minimum magnitude to query
+        min_latitude: min latitude of bounding box
+        max_latitude: max latitude of bounding box
+        min_longitude: min longitude of bounding box
+        max_longitude: max longitude of bounding box
+        max_depth: maximum depth of the bounding box
+        verbose (bool): print catalog summary statistics
+
+    Returns:
+        :class:`csep.core.catalogs.CSEPCatalog`
+    """
+
+    # Timezone should be in UTC
+    t0 = time.time()
+    eventlist = readers._query_gns(start_time=start_time, end_time=end_time,
+                                   min_magnitude=min_magnitude,
+                                   min_latitude=min_latitude, max_latitude=max_latitude,
+                                   min_longitude=min_longitude, max_longitude=max_longitude,
+                                   max_depth=max_depth)
+    t1 = time.time()
+    gns = catalogs.CSEPCatalog(data=eventlist, date_accessed=utc_now_datetime())
+    if apply_filters:
+        try:
+            gns = gns.filter().filter_spatial()
+        except CSEPCatalogException:
+            gns = gns.filter()
+
+    if verbose:
+        print("Downloaded catalog from GNS Science NZ (GNS) with the following parameters")
+        print("Start Date: {}\nEnd Date: {}".format(str(gns.start_time), str(gns.end_time)))
+        print("Min Latitude: {} and Max Latitude: {}".format(gns.min_latitude, gns.max_latitude))
+        print("Min Longitude: {} and Max Longitude: {}".format(gns.min_longitude, gns.max_longitude))
+        print("Min Magnitude: {}".format(gns.min_magnitude))
+        print(f"Found {gns.event_count} events in the GNS catalog.")
+
+    return gns
+
+
+def query_gcmt(start_time, end_time, min_magnitude=5.0,
+               max_depth=None,
+               catalog_id=None,
+               min_latitude=None, max_latitude=None,
+               min_longitude=None, max_longitude=None):
+    """
+    Access gCMT catalog through web service
+
+    Args:
+        start_time: datetime object of start of catalog
+        end_time: datetime object for end of catalog
+        min_magnitude: minimum magnitude to query
+        max_depth: maximum depth of the bounding box
+        catalog_id: catalog id number (used for stochastic event set forecasts)
+        min_latitude: min latitude of bounding box
+        max_latitude: max latitude of bounding box
+        min_longitude: min longitude of bounding box
+        max_longitude: max longitude of bounding box
+
+    Returns:
+        :class:`csep.core.catalogs.CSEPCatalog`
+    """
+    eventlist = readers._query_gcmt(start_time=start_time,
+                                    end_time=end_time,
+                                    min_magnitude=min_magnitude,
+                                    min_latitude=min_latitude,
+                                    max_latitude=max_latitude,
+                                    min_longitude=min_longitude,
+                                    max_longitude=max_longitude,
+                                    max_depth=max_depth)
+
+    catalog = catalogs.CSEPCatalog(data=eventlist,
+                                   name='gCMT',
+                                   catalog_id=catalog_id,
+                                   date_accessed=utc_now_datetime())
+    return catalog
+
+
+def load_evaluation_result(fname):
+    """ Load evaluation result stored as json file
+
+    Returns:
+        :class:`csep.core.evaluations.EvaluationResult`
+
+    """
+    # tries to return the correct class for the evaluation result; if the type cannot be found,
+    # simply returns the basic result.
+    evaluation_result_factory = {
+        'default': EvaluationResult,
+        'EvaluationResult': EvaluationResult,
+        'CatalogNumberTestResult': CatalogNumberTestResult,
+        'CatalogSpatialTestResult': CatalogSpatialTestResult,
+        'CatalogMagnitudeTestResult': CatalogMagnitudeTestResult,
+        'CatalogPseudoLikelihoodTestResult': CatalogPseudolikelihoodTestResult,
+        'CalibrationTestResult': CalibrationTestResult
+    }
+    with open(fname, 'r') as json_file:
+        json_dict = json.load(json_file)
+        try:
+            evaluation_type = json_dict['type']
+        except KeyError:
+            evaluation_type = 'default'
+    eval_result = evaluation_result_factory[evaluation_type].from_dict(
+        json_dict)
+    return eval_result
+
+
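+# Example (editor's illustration): round-tripping a result through JSON using
+# ``csep.write_json`` together with ``load_evaluation_result``. ``result`` is
+# any evaluation result object and the filename is a placeholder.
+#
+#   import csep
+#
+#   csep.write_json(result, 'n_test_result.json')
+#   same_result = csep.load_evaluation_result('n_test_result.json')
+#   assert same_result.observed_statistic == result.observed_statistic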
+[docs]
+def load_gridded_forecast(fname, loader=None, **kwargs):
+    """ Loads a grid-based forecast from disk.
+
+    The function loads the forecast from the filepath defined by fname. It attempts to infer
+    the file format from the extension of the filepath. Optionally, if a loader function is provided, that function
+    will be used to load the forecast. The loader function should return a :class:`csep.core.forecasts.GriddedForecast`
+    instance with the region and magnitude members correctly assigned.
+
+    File extensions:
+        .dat -> CSEP ascii format
+        .xml -> CSEP xml format (not yet implemented)
+        .h5 -> CSEP hdf5 format (not yet implemented)
+        .bin -> CSEP binary format (not yet implemented)
+
+    Args:
+        fname (str): path of grid based forecast
+        loader (func): function to load a forecast in a bespoke format. It must return a
+            :class:`csep.core.forecasts.GriddedForecast`; its first (required) argument is the
+            filename of the forecast to load, and it is called as loader(fname, **kwargs).
+        **kwargs: passed into loader function
+
+    Raises:
+        FileNotFoundError: when the file cannot be located on disk.
+        AttributeError: if the file extension is not known and a loader is not provided, or if
+            a loader is provided and is not callable.
+
+    Returns:
+        :class:`csep.core.forecasts.GriddedForecast`
+    """
+    # mapping from file extension to loader function; new formats need to be added here
+    forecast_loader_mapping = {
+        'dat': GriddedForecast.load_ascii,
+        'xml': None,
+        'h5': None,
+        'bin': None
+    }
+
+    # sanity checks
+    if not os.path.exists(fname):
+        raise FileNotFoundError(
+            f"Could not locate file {fname}. Unable to load forecast.")
+    if loader is not None and not callable(loader):
+        raise AttributeError(
+            "Loader must be callable. Unable to load forecast.")
+    extension = os.path.splitext(fname)[-1][1:]
+    if extension not in forecast_loader_mapping.keys() and loader is None:
+        raise AttributeError(
+            "File extension should be in ('dat','xml','h5','bin') if loader not provided.")
+
+    if extension in ('xml', 'h5', 'bin'):
+        raise NotImplementedError
+
+    # assign default loader
+    if loader is None:
+        loader = forecast_loader_mapping[extension]
+    forecast = loader(fname, **kwargs)
+    # final sanity check
+    if not isinstance(forecast, GriddedForecast):
+        raise ValueError("Forecast not instance of GriddedForecast")
+    return forecast
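+# Example (editor's illustration): supplying a bespoke ``loader``. ``my_loader``
+# is hypothetical; per the contract above it takes the filename first and must
+# return a GriddedForecast with region and magnitudes assigned.
+#
+#   def my_loader(fname, **kwargs):
+#       rates, region, mws = ...  # parse the custom file format
+#       return GriddedForecast(data=rates, region=region, magnitudes=mws, **kwargs)
+#
+#   forecast = load_gridded_forecast('forecast.custom', loader=my_loader)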
+ + + +
+[docs]
+def load_catalog_forecast(fname, catalog_loader=None, format='native',
+                          type='ascii', **kwargs):
+    """ General function to handle loading catalog forecasts.
+
+    Currently, just a simple wrapper, but can contain more complex logic in the future.
+
+    Args:
+        fname (str): pathname to the forecast file or directory containing the forecast files
+        catalog_loader (func): callable that can load catalogs, see load_stochastic_event_sets above.
+        format (str): either 'native' or 'csep'. If 'csep', catalogs are converted into the csep
+            catalog format as they are loaded.
+        type (str): either 'ucerf3' or 'ascii', determines the catalog format of the forecast.
+            Ignored if catalog_loader is provided.
+        **kwargs: other keyword arguments passed to the :class:`csep.core.forecasts.CatalogForecast`.
+
+    Returns:
+        :class:`csep.core.forecasts.CatalogForecast`
+    """
+    # sanity checks
+    if not os.path.exists(fname):
+        raise FileNotFoundError(
+            f"Could not locate file {fname}. Unable to load forecast.")
+    if catalog_loader is not None and not callable(catalog_loader):
+        raise AttributeError(
+            "Loader must be callable. Unable to load forecast.")
+    # factory methods for loading different types of catalogs
+    catalog_loader_mapping = {
+        'ascii': catalogs.CSEPCatalog.load_ascii_catalogs,
+        'ucerf3': catalogs.UCERF3Catalog.load_catalogs
+    }
+    if catalog_loader is None:
+        catalog_loader = catalog_loader_mapping[type]
+    # try and parse information from filename and send to forecast constructor
+    if format == 'native' and type == 'ascii':
+        try:
+            basename = str(os.path.basename(fname.rstrip('/')).split('.')[0])
+            split_fname = basename.split('_')
+            name = split_fname[0]
+            start_time = strptime_to_utc_datetime(split_fname[1],
+                                                  format="%Y-%m-%dT%H-%M-%S-%f")
+            # update kwargs
+            _ = kwargs.setdefault('name', name)
+            _ = kwargs.setdefault('start_time', start_time)
+        except:
+            # filename does not encode name/start time; fall back to user-supplied kwargs
+            pass
+    # create catalog forecast
+    return CatalogForecast(filename=fname, loader=catalog_loader,
+                           catalog_format=format, catalog_type=type, **kwargs)
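+# Example (editor's illustration): loading a catalog-based forecast in csep-ascii
+# format. The filename follows the ``<name>_<%Y-%m-%dT%H-%M-%S-%f>`` pattern
+# parsed above; the bundled Landers data set path is assumed to exist in
+# ``csep.utils.datasets``.
+#
+#   import csep
+#   from csep.utils import datasets
+#
+#   forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname)
+#   for catalog in forecast:
+#       print(catalog.catalog_id, catalog.event_count)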
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/csep/core/catalog_evaluations.html b/_modules/csep/core/catalog_evaluations.html new file mode 100644 index 00000000..92e8702e --- /dev/null +++ b/_modules/csep/core/catalog_evaluations.html @@ -0,0 +1,531 @@ + + + + + + + + csep.core.catalog_evaluations — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for csep.core.catalog_evaluations

+# Third-Party Imports
+import time
+
+import numpy
+import scipy.stats
+
+# PyCSEP imports
+from csep.core.exceptions import CSEPEvaluationException
+from csep.models import (
+    CatalogNumberTestResult,
+    CatalogSpatialTestResult,
+    CatalogMagnitudeTestResult,
+    CatalogPseudolikelihoodTestResult,
+    CalibrationTestResult
+)
+from csep.utils.calc import _compute_likelihood
+from csep.utils.stats import get_quantiles, cumulative_square_diff
+
+
+
+[docs]
+def number_test(forecast, observed_catalog, verbose=True):
+    """ Performs the number test on a catalog-based forecast.
+
+    The number test builds an empirical distribution of event counts from the catalogs in the forecast. By default, this
+    function does not perform any filtering on the catalogs in the forecast or observation. These should be handled
+    outside of the function.
+
+    Args:
+        forecast (:class:`csep.core.forecasts.CatalogForecast`): forecast to evaluate
+        observed_catalog (:class:`csep.core.catalogs.AbstractBaseCatalog`): evaluation data
+
+    Returns:
+        evaluation result (:class:`csep.models.EvaluationResult`): evaluation result
+    """
+    event_counts = []
+    t0 = time.time()
+    for i, catalog in enumerate(forecast):
+        # output status
+        if verbose:
+            tens_exp = numpy.floor(numpy.log10(i + 1))
+            if (i + 1) % 10 ** tens_exp == 0:
+                t1 = time.time()
+                print(f'Processed {i + 1} catalogs in {t1 - t0} seconds', flush=True)
+        event_counts.append(catalog.event_count)
+    obs_count = observed_catalog.event_count
+    delta_1, delta_2 = get_quantiles(event_counts, obs_count)
+    # prepare result
+    result = CatalogNumberTestResult(test_distribution=event_counts,
+                                     name='Catalog N-Test',
+                                     observed_statistic=obs_count,
+                                     quantile=(delta_1, delta_2),
+                                     status='normal',
+                                     obs_catalog_repr=str(observed_catalog),
+                                     sim_name=forecast.name,
+                                     min_mw=forecast.min_magnitude,
+                                     obs_name=observed_catalog.name)
+    return result
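+# Example (editor's illustration): running the catalog-based number test. The
+# forecast and observed catalog are assumed to be filtered consistently before
+# the call, as the docstring above requires.
+#
+#   from csep.core import catalog_evaluations
+#
+#   number_test_result = catalog_evaluations.number_test(forecast, observed_catalog)
+#   ax = number_test_result.plot(show=True)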
+ + +
+[docs]
+def spatial_test(forecast, observed_catalog, verbose=True):
+    """ Performs spatial test for catalog-based forecasts.
+
+    Args:
+        forecast: CatalogForecast
+        observed_catalog: CSEPCatalog filtered to be consistent with the forecast
+
+    Returns:
+        CatalogSpatialTestResult
+    """
+
+    if forecast.region is None:
+        raise CSEPEvaluationException("Forecast must have region member to perform spatial test.")
+
+    # get observed likelihood
+    if observed_catalog.event_count == 0:
+        print('Spatial test not-valid because no events in observed catalog.')
+
+    test_distribution = []
+
+    # compute expected rates for forecast if needed
+    if forecast.expected_rates is None:
+        forecast.get_expected_rates(verbose=verbose)
+
+    expected_cond_count = forecast.expected_rates.sum()
+    forecast_mean_spatial_rates = forecast.expected_rates.spatial_counts()
+
+    # summing over spatial counts ensures that the correct number of events are used, even though the catalogs should
+    # be filtered before calling this function
+    gridded_obs = observed_catalog.spatial_counts()
+    n_obs = numpy.sum(gridded_obs)
+
+    # iterate through catalogs in forecast and compute likelihood
+    t0 = time.time()
+    for i, catalog in enumerate(forecast):
+        gridded_cat = catalog.spatial_counts()
+        _, lh_norm = _compute_likelihood(gridded_cat, forecast_mean_spatial_rates, expected_cond_count, n_obs)
+        test_distribution.append(lh_norm)
+        # output status
+        if verbose:
+            tens_exp = numpy.floor(numpy.log10(i + 1))
+            if (i + 1) % 10 ** tens_exp == 0:
+                t1 = time.time()
+                print(f'Processed {i + 1} catalogs in {t1 - t0} seconds', flush=True)
+
+    _, obs_lh_norm = _compute_likelihood(gridded_obs, forecast_mean_spatial_rates, expected_cond_count, n_obs)
+    # if obs_lh_norm is -numpy.inf, recompute but only for indexes where obs and simulated are non-zero
+    message = "normal"
+    if obs_lh_norm == -numpy.inf:
+        idx_good_sim = forecast_mean_spatial_rates != 0
+        new_gridded_obs = gridded_obs[idx_good_sim]
+        new_n_obs = numpy.sum(new_gridded_obs)
+        print(f"Found -inf as the observed likelihood score. "
+              f"Assuming event(s) occurred in undersampled region of forecast.\n"
+              f"Recomputing with {new_n_obs} events after removing {n_obs - new_n_obs} events.")
+
+        new_ard = forecast_mean_spatial_rates[idx_good_sim]
+        _, obs_lh_norm = _compute_likelihood(new_gridded_obs, new_ard, expected_cond_count, n_obs)
+        message = "undersampled"
+
+    # check for nans here and remove from spatial distribution
+    test_distribution_spatial_1d = numpy.array(test_distribution)
+    if numpy.isnan(numpy.sum(test_distribution_spatial_1d)):
+        test_distribution_spatial_1d = test_distribution_spatial_1d[~numpy.isnan(test_distribution_spatial_1d)]
+
+    if n_obs == 0 or numpy.isnan(obs_lh_norm):
+        message = "not-valid"
+        delta_1, delta_2 = -1, -1
+    else:
+        delta_1, delta_2 = get_quantiles(test_distribution_spatial_1d, obs_lh_norm)
+
+    result = CatalogSpatialTestResult(test_distribution=test_distribution_spatial_1d,
+                                      name='S-Test',
+                                      observed_statistic=obs_lh_norm,
+                                      quantile=(delta_1, delta_2),
+                                      status=message,
+                                      min_mw=forecast.min_magnitude,
+                                      obs_catalog_repr=str(observed_catalog),
+                                      sim_name=forecast.name,
+                                      obs_name=observed_catalog.name)
+
+    return result
+ + +
+[docs] +def magnitude_test(forecast, observed_catalog, verbose=True): + """ Performs magnitude test for catalog-based forecasts """ + test_distribution = [] + + if forecast.region.magnitudes is None: + raise CSEPEvaluationException("Forecast must have region.magnitudes member to perform magnitude test.") + + # short-circuit if zero events + if observed_catalog.event_count == 0: + print("Cannot perform magnitude test when observed event count is zero.") + # prepare result + result = CatalogMagnitudeTestResult(test_distribution=test_distribution, + name='M-Test', + observed_statistic=None, + quantile=(None, None), + status='not-valid', + min_mw=forecast.min_magnitude, + obs_catalog_repr=str(observed_catalog), + obs_name=observed_catalog.name, + sim_name=forecast.name) + + return result + + # compute expected rates for forecast if needed + if forecast.expected_rates is None: + forecast.get_expected_rates(verbose=verbose) + + # returns the average events in the magnitude bins + union_histogram = forecast.expected_rates.magnitude_counts() + n_union_events = numpy.sum(union_histogram) + obs_histogram = observed_catalog.magnitude_counts() + n_obs = numpy.sum(obs_histogram) + union_scale = n_obs / n_union_events + scaled_union_histogram = union_histogram * union_scale + + # compute the test statistic for each catalog + t0 = time.time() + for i, catalog in enumerate(forecast): + mag_counts = catalog.magnitude_counts() + n_events = numpy.sum(mag_counts) + if n_events == 0: + # print("Skipping to next because catalog contained zero events.") + continue + scale = n_obs / n_events + catalog_histogram = mag_counts * scale + # compute magnitude test statistic for the catalog + test_distribution.append( + cumulative_square_diff(numpy.log10(catalog_histogram + 1), numpy.log10(scaled_union_histogram + 1)) + ) + # output status + if verbose: + tens_exp = numpy.floor(numpy.log10(i + 1)) + if (i + 1) % 10 ** tens_exp == 0: + t1 = time.time() + print(f'Processed {i + 1} catalogs in {t1 - t0} seconds', flush=True) + + # compute observed statistic + obs_d_statistic = cumulative_square_diff(numpy.log10(obs_histogram + 1), numpy.log10(scaled_union_histogram + 1)) + + # score evaluation + delta_1, delta_2 = get_quantiles(test_distribution, obs_d_statistic) + + # prepare result + result = CatalogMagnitudeTestResult(test_distribution=test_distribution, + name='M-Test', + observed_statistic=obs_d_statistic, + quantile=(delta_1, delta_2), + status='normal', + min_mw=forecast.min_magnitude, + obs_catalog_repr=str(observed_catalog), + obs_name=observed_catalog.name, + sim_name=forecast.name) + + return result
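+# Note (editor's illustration): each test above summarizes its comparison with a
+# quantile pair from ``get_quantiles``. Following the "cdf value less-equal"
+# comment in calibration_test below, a hedged reading of the pair:
+#
+#   delta_1, delta_2 = get_quantiles(test_distribution, observed_statistic)
+#   # delta_1: fraction of the simulated test distribution >= the observation
+#   # delta_2: fraction of the simulated test distribution <= the observation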
+ + +
+[docs]
+def pseudolikelihood_test(forecast, observed_catalog, verbose=True):
+    """ Performs the spatial pseudolikelihood test for catalog forecasts.
+
+    Performs the spatial pseudolikelihood test as described by Savran et al., 2020. The test uses a pseudolikelihood
+    statistic computed from the expected rates in spatial cells. A pseudolikelihood test based on space-magnitude bins
+    is under development and does not currently exist.
+
+    Args:
+        forecast: :class:`csep.core.forecasts.CatalogForecast`
+        observed_catalog: :class:`csep.core.catalogs.AbstractBaseCatalog`
+    """
+
+    if forecast.region is None:
+        raise CSEPEvaluationException("Forecast must have region member to perform spatial test.")
+
+    # get observed likelihood
+    if observed_catalog.event_count == 0:
+        print('Skipping pseudolikelihood test because no events in observed catalog.')
+        return None
+
+    test_distribution = []
+
+    # compute expected rates for forecast if needed
+    if forecast.expected_rates is None:
+        _ = forecast.get_expected_rates(verbose=verbose)
+
+    expected_cond_count = forecast.expected_rates.sum()
+    forecast_mean_spatial_rates = forecast.expected_rates.spatial_counts()
+
+    # summing over spatial counts ensures that the correct number of events are used, even though the catalogs should
+    # be filtered before calling this function
+    gridded_obs = observed_catalog.spatial_counts()
+    n_obs = numpy.sum(gridded_obs)
+
+    t0 = time.time()
+    for i, catalog in enumerate(forecast):
+        gridded_cat = catalog.spatial_counts()
+        plh, _ = _compute_likelihood(gridded_cat, forecast_mean_spatial_rates, expected_cond_count, n_obs)
+        test_distribution.append(plh)
+        # output status
+        if verbose:
+            tens_exp = numpy.floor(numpy.log10(i + 1))
+            if (i + 1) % 10 ** tens_exp == 0:
+                t1 = time.time()
+                print(f'Processed {i + 1} catalogs in {t1 - t0} seconds', flush=True)
+
+    obs_plh, _ = _compute_likelihood(gridded_obs, forecast_mean_spatial_rates, expected_cond_count, n_obs)
+    # if obs_plh is -numpy.inf, recompute but only for indexes where obs and simulated are non-zero
+    message = "normal"
+    if obs_plh == -numpy.inf:
+        idx_good_sim = forecast_mean_spatial_rates != 0
+        new_gridded_obs = gridded_obs[idx_good_sim]
+        new_n_obs = numpy.sum(new_gridded_obs)
+        print(f"Found -inf as the observed likelihood score. "
+              f"Assuming event(s) occurred in undersampled region of forecast.\n"
+              f"Recomputing with {new_n_obs} events after removing {n_obs - new_n_obs} events.")
+        if new_n_obs == 0:
+            print(
+                'Skipping pseudolikelihood-based test because no events remain in the observed catalog '
+                'after correcting for under-sampling in the forecast.'
+            )
+            return None
+
+        new_ard = forecast_mean_spatial_rates[idx_good_sim]
+        # we need to use the old n_obs here, because if we normalize the ard to a different value the observed
+        # statistic will not be computed correctly.
+ obs_plh, _ = _compute_likelihood(new_gridded_obs, new_ard, expected_cond_count, n_obs) + message = "undersampled" + + # check for nans here + test_distribution_1d = numpy.array(test_distribution) + if numpy.isnan(numpy.sum(test_distribution_1d)): + test_distribution_1d = test_distribution_1d[~numpy.isnan(test_distribution_1d)] + + if n_obs == 0 or numpy.isnan(obs_plh): + message = "not-valid" + delta_1, delta_2 = -1, -1 + else: + delta_1, delta_2 = get_quantiles(test_distribution_1d, obs_plh) + + # prepare evaluation result + result = CatalogPseudolikelihoodTestResult( + test_distribution=test_distribution_1d, + name='PL-Test', + observed_statistic=obs_plh, + quantile=(delta_1, delta_2), + status=message, + min_mw=forecast.min_magnitude, + obs_catalog_repr=str(observed_catalog), + sim_name=forecast.name, + obs_name=observed_catalog.name + ) + + return result
+ + +
+[docs]
+def calibration_test(evaluation_results, delta_1=False):
+    """ Performs the calibration test by computing a Kolmogorov-Smirnov test of the observed quantiles against a uniform
+    distribution.
+
+    Args:
+        evaluation_results: iterable of evaluation result objects
+        delta_1 (bool): use delta_1 for quantiles. default false -> use delta_2 quantile score for calibration test
+    """
+
+    # this is using "delta_2" which is the cdf value less-equal
+    idx = 0 if delta_1 else 1
+    quantiles = []
+    for result in evaluation_results:
+        if result.status == 'not-valid':
+            print(f'evaluation not valid for {result.name}. skipping in calibration test.')
+        else:
+            quantiles.append(result.quantile[idx])
+
+    ks, p_value = scipy.stats.kstest(quantiles, 'uniform')
+
+    result = CalibrationTestResult(
+        test_distribution=quantiles,
+        name=f'{evaluation_results[0].name} Calibration Test',
+        observed_statistic=ks,
+        quantile=p_value,
+        status='normal',
+        min_mw=evaluation_results[0].min_mw,
+        obs_catalog_repr=evaluation_results[0].obs_catalog_repr,
+        sim_name=evaluation_results[0].sim_name,
+        obs_name=evaluation_results[0].obs_name
+    )
+
+    return result
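+# Example (editor's illustration): pooling results from repeated evaluations and
+# checking calibration. ``results_per_window`` is a hypothetical list of
+# CatalogNumberTestResult objects collected over several testing windows.
+#
+#   calib = calibration_test(results_per_window)
+#   print(calib.observed_statistic)  # KS statistic
+#   print(calib.quantile)            # p-value against the uniform distribution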
+ + + +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/csep/core/catalogs.html b/_modules/csep/core/catalogs.html new file mode 100644 index 00000000..95ae9385 --- /dev/null +++ b/_modules/csep/core/catalogs.html @@ -0,0 +1,1433 @@ + + + + + + + + csep.core.catalogs — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for csep.core.catalogs

+import csv
+import gzip
+import json
+import operator
+import os
+import datetime
+
+# 3rd party required for core package
+import numpy
+import pandas
+
+# CSEP Imports
+from csep.utils.time_utils import epoch_time_to_utc_datetime, datetime_to_utc_epoch, strptime_to_utc_datetime, \
+    millis_to_days, parse_string_format, days_to_millis, strptime_to_utc_epoch, utc_now_datetime, create_utc_datetime
+from csep.utils.stats import min_or_none, max_or_none
+from csep.utils.calc import discretize
+from csep.core.regions import CartesianGrid2D
+import csep.utils.comcat as comcat
+import csep.utils.geonet as geonet
+from csep.core.exceptions import CSEPSchedulerException, CSEPCatalogException, CSEPIOException
+from csep.utils.calc import bin1d_vec
+from csep.utils.constants import CSEP_MW_BINS
+from csep.utils.log import LoggingMixin
+from csep.utils.readers import csep_ascii
+from csep.utils.file import get_file_extension
+from csep.utils.plots import plot_catalog
+
+
+
+[docs] +class AbstractBaseCatalog(LoggingMixin): + """ + Abstract catalog base class for PyCSEP catalogs. This class should not and cannot be used on its own. This just + provides the interface for implementing custom catalog classes. + + """ + + dtype = None + +
+[docs]
+    def __init__(self, filename=None, data=None, catalog_id=None, format=None, name=None, region=None,
+                 compute_stats=True, filters=None, metadata=None, date_accessed=None):
+
+        """ Standard catalog format for CSEP catalogs. Primary event data are stored in a structured numpy array. Additional
+        metadata are available by the event_id in the catalog metadata information.
+
+        Args:
+            filename: location of catalog
+            data (numpy.ndarray or eventlist): catalog data
+            catalog_id: catalog id number (used for stochastic event set forecasts)
+            format: identification used for serialization
+            name: human readable name of catalog
+            region: spatial and magnitude region
+            compute_stats: whether statistics should be computed for the catalog
+            filters (str or list): filtering operations to apply to the catalog
+            metadata (dict): additional information for events
+            date_accessed (str): time string when catalog was accessed
+        """
+        super().__init__()
+        self.filename = filename
+        self.catalog_id = catalog_id
+        self.format = format
+        self.name = name
+        self.region = region
+        self.compute_stats = compute_stats
+        self.filters = filters or []
+        self.date_accessed = date_accessed or utc_now_datetime()  # type datetime.datetime
+
+        # used to store additional event information based on the event_id key, if no event_id will default to an
+        # integer index
+        self.metadata = metadata or {}
+
+        # cleans the catalog to set as ndarray, see setter.
+        self.catalog = data  # type: numpy.ndarray
+
+        # use user defined stats if entered into catalog
+        if data is not None and self.compute_stats:
+            self.update_catalog_stats()
+
+
+    def __eq__(self, other):
+        """ Compares whether two catalogs are equal by comparing their dicts. """
+        return self.to_dict() == other.to_dict()
+
+    def __str__(self):
+        self.update_catalog_stats()
+
+        s = f'''
+        Name: {self.name}
+
+        Start Date: {self.start_time}
+        End Date: {self.end_time}
+
+        Latitude: ({self.min_latitude}, {self.max_latitude})
+        Longitude: ({self.min_longitude}, {self.max_longitude})
+
+        Min Mw: {self.min_magnitude}
+        Max Mw: {self.max_magnitude}
+
+        Event Count: {self.event_count}
+        '''
+        return s
+
+    def to_dict(self):
+        """
+        Serializes class to json dictionary.
+
+        Returns:
+            catalog as dict
+
+        """
+        excluded = ['_catalog']
+        out = {}
+        for k, v in self.__dict__.items():
+            # note: if 'v' is callable that implies that we have a function bound to a class-member. this happens
+            # for the catalog forecast and requires excluding this value.
+            if not callable(v) and k not in excluded:
+                if hasattr(v, 'to_dict'):
+                    new_v = v.to_dict()
+                else:
+                    new_v = v
+                if k.startswith('_'):
+                    out[k[1:]] = new_v
+                else:
+                    out[k] = new_v
+        out['catalog'] = []
+        for line in list(self.catalog.tolist()):
+            new_line = []
+            for item in line:
+                # try to decode, if it fails just use original, we use this to handle string-based event_ids
+                try:
+                    item = item.decode('utf-8')
+                except:
+                    pass
+                finally:
+                    new_line.append(item)
+            out['catalog'].append(new_line)
+        return out
+
+    @property
+    def event_count(self):
+        """ Number of events in catalog """
+        return self.get_number_of_events()
+
+    @classmethod
+    def load_catalog(cls, filename, loader=csep_ascii, **kwargs):
+        raise NotImplementedError("subclass should implement load_catalog function.")
+
+    @classmethod
+    def from_dict(cls, adict, **kwargs):
+        """ Creates a class from the dictionary representation of the class state. The catalog is serialized into a list of
+        tuples that contain the event information in the order defined by the dtype.
+
+        This needs to handle reading in region information at some point.
+        """
+
+        region_loader = {
+            'CartesianGrid2D': CartesianGrid2D
+        }
+
+        # could these be class values? can be changed later.
+        exclude = ['_catalog', 'region']
+        time_members = ['date_accessed', 'start_time', 'end_time']
+        catalog = adict.get('catalog', None)
+        out = cls(data=catalog, **kwargs)
+
+        # here we are looping over the items in the class and finding the associated value in the dict
+        for k, v in out.__dict__.items():
+            if k not in exclude:
+                if k not in time_members:
+                    try:
+                        setattr(out, k, adict[k])
+                    except KeyError:
+                        pass
+                else:
+                    setattr(out, k, _none_or_datetime(adict[k]))
+            else:
+                if k == 'region':
+                    # tries to read class id from catalog, should fail silently if catalog doesn't exist
+                    try:
+                        class_id = adict[k].get('class_id', None)
+                        if class_id is None:
+                            class_id = 'CartesianGrid2D'
+                        setattr(out, k, region_loader[class_id].from_dict(adict[k]))
+                    except AttributeError:
+                        pass
+        return out
+
+    @classmethod
+    def from_dataframe(cls, df, **kwargs):
+        """
+        Creates catalog from dataframe. Dataframe must have columns that are equivalent to whatever fields
+        the catalog expects in the catalog dtype.
+
+        For example:
+
+                cat = CSEPCatalog()
+                df = cat.get_dataframe()
+                new_cat = CSEPCatalog.from_dataframe(df)
+
+        Args:
+            df (pandas.DataFrame): pandas dataframe
+            **kwargs:
+
+        Returns:
+            Catalog
+
+        """
+        catalog_id = None
+        try:
+            catalog_id = df['catalog_id'].iloc[0]
+        except KeyError:
+            pass
+        col_list = list(cls.dtype.names)
+        # we want this to be a structured array not a record array and only returns core attributes listed in dtype
+        # loses information about the region and event meta data
+        catalog = numpy.ascontiguousarray(df[col_list].to_records(index=False), dtype=cls.dtype)
+        out_cls = cls(data=catalog, catalog_id=catalog_id, **kwargs)
+        return out_cls
+
+    @classmethod
+    def load_json(cls, filename, **kwargs):
+        """ Loads catalog from JSON file """
+        with open(filename, 'r') as f:
+            adict = json.load(f)
+            return cls.from_dict(adict, **kwargs)
+
+    def write_json(self, filename):
+        """ Writes catalog to json file
+
+        Args:
+            filename (str): path to save file
+        """
+        with open(filename, 'w') as f:
+            json.dump(self.to_dict(), f, indent=4, separators=(',', ': '), sort_keys=True, default=str)
+
+    @property
+    def catalog(self):
+        return self._catalog
+
+    @property
+    def data(self):
+        return self._catalog
+
+    @catalog.setter
+    def catalog(self, val):
+        """
+        Ensures that catalogs with formats that are not numpy arrays are treated as numpy.array
+
+        Note:
+            This requires that catalog classes implement the self._get_catalog_as_ndarray() function.
+            This function should return a structured numpy.ndarray.
+            Catalog will remain None, if assigned that way in constructor.
+        """
+        self._catalog = val
+        if self._catalog is not None:
+            self._catalog = self._get_catalog_as_ndarray()
+            # ensure that people are behaving, somewhat non-pythonic but needed
+            if not isinstance(self._catalog, numpy.ndarray):
+                raise ValueError("Error: Catalog must be numpy.ndarray! Ensure that self._get_catalog_as_ndarray()" +
+                                 " returns an ndarray")
+        if self.compute_stats and self._catalog is not None:
+            self.update_catalog_stats()
+
+    def _get_catalog_as_ndarray(self):
+        """
+        Converts the event list into a structured ndarray. This function is called anytime that a catalog is
+        assigned to self.catalog.
+
+        The purpose of this function is to ensure that the catalog is being properly parsed into the correct format, and
+        to prevent users of the catalog classes from assigning improper data types.
+
+        This also acts as a convenience to allow easy assignment of different types to the catalog. The default
+        implementation of this function expects that the data are arranged as a collection of tuples corresponding to
+        the catalog data type.
+
+        Note:
+            A failure state exists if self.catalog is not bound to the instance explicitly.
+        """
+        # short-circuit
+        if isinstance(self.catalog, numpy.ndarray):
+            return self.catalog
+        # if catalog is not a numpy array, class must have dtype information
+        catalog_length = len(self.catalog)
+        catalog = numpy.empty(catalog_length, dtype=self.dtype)
+        if catalog_length == 0:
+            return catalog
+        if isinstance(self.catalog[0], (list, tuple)):
+            for i, event in enumerate(self.catalog):
+                catalog[i] = tuple(event)
+        elif isinstance(self.catalog[0], (comcat.SummaryEvent, geonet.SummaryEvent)):
+            for i, event in enumerate(self.catalog):
+                catalog[i] = (event.id, datetime_to_utc_epoch(event.time),
+                              event.latitude, event.longitude, event.depth, event.magnitude)
+        else:
+            raise TypeError("Catalog data must be list of events tuples with order:\n"
+                            f"{', '.join(self.dtype.names)} or \n"
+                            "list of SummaryEvent type.")
+        return catalog
+
+    def write_ascii(self, filename, write_header=True, write_empty=True, append=False, id_col='id'):
+        """
+        Write catalog in csep2 ascii format.
+
+        This format only uses the required variables from the catalog and should work by default. It can be overwritten
+        if an event_id (or other columns) should be used. By default, the routine will look for a column in the catalog array
+        called 'id' and will populate the event_id column with these values. If the 'id' column is not found, then it will
+        leave this column blank.
+
+        Short format description (comma separated values):
+            longitude, latitude, M, time_string format="%Y-%m-%dT%H:%M:%S.%f", depth, catalog_id, [event_id]
+
+        Args:
+            filename (str): the file location to write the ascii catalog file
+            write_header (bool): write header string (default true)
+            write_empty (bool): write file even if there are no events in the catalog
+            append (bool): if true, append to the filename
+            id_col (str): name of event_id column (if included)
+
+        Returns:
+            NoneType
+        """
+        # longitude, latitude, M, epoch_time (time in millisecond since Unix epoch in GMT), depth, catalog_id, event_id
+        header = ['lon', 'lat', 'mag', 'time_string', 'depth', 'catalog_id', 'event_id']
+        if append:
+            write_string = 'a'
+        else:
+            write_string = 'w'
+        with open(filename, write_string, newline='') as outfile:
+            writer = csv.DictWriter(outfile, fieldnames=header, delimiter=',')
+            if write_header:
+                writer.writeheader()
+            if write_empty and self.event_count == 0:
+                return
+            # create iterator from catalog columns
+            try:
+                event_ids = self.catalog[id_col]
+            except ValueError:
+                event_ids = [''] * self.event_count
+            row_iter = zip(self.get_longitudes(),
+                           self.get_latitudes(),
+                           self.get_magnitudes(),
+                           self.get_epoch_times(),
+                           self.get_depths(),
+                           # populate list with `self.event_count` elements with val self.catalog_id
+                           [self.catalog_id] * self.event_count,
+                           event_ids)
+            # write csv file using DictWriter interface
+            for row in row_iter:
+                try:
+                    event_id = row[6].decode('utf-8')
+                except AttributeError:
+                    event_id = row[6]
+                # create dictionary for each row
+                adict = {'lon': row[0],
+                         'lat': row[1],
+                         'mag': row[2],
+                         'time_string': str(epoch_time_to_utc_datetime(row[3]).replace(tzinfo=None)).replace(' ', 'T'),
+                         'depth': row[4],
+                         'catalog_id': row[5],
+                         'event_id': event_id}
+                writer.writerow(adict)
+
+    def to_dataframe(self, with_datetime=False):
+        """
+        Returns pandas Dataframe describing the catalog. Explicitly casts to pandas DataFrame.
+
+        Note:
+            The dataframe will be in the format of the original catalog. If you require that the
+            dataframe be in the CSEP ZMAP format, you must explicitly convert the catalog.
+
+        Returns:
+            (pandas.DataFrame): This function must return a pandas DataFrame
+
+        Raises:
+            ValueError: If self._catalog cannot be passed to the pandas.DataFrame constructor, this function
+                must be overridden in the child class.
+        """
+        df = pandas.DataFrame(self.catalog)
+        df['counts'] = 1
+        df['catalog_id'] = self.catalog_id
+        if with_datetime:
+            df['datetime'] = df['origin_time'].map(epoch_time_to_utc_datetime)
+            # set index as datetime
+            df.index = df['datetime']
+        # queries the region for the index of each event
+        if self.region is not None:
+            df['region_id'] = self.region.get_index_of(self.get_longitudes(), self.get_latitudes())
+            try:
+                # bin magnitudes
+                df['mag_id'] = self.get_mag_idx()
+            except AttributeError:
+                pass
+        return df
+
+    def get_mag_idx(self):
+        """ Return magnitude index from region magnitudes """
+        try:
+            return bin1d_vec(self.get_magnitudes(), self.region.magnitudes, tol=0.00001, right_continuous=True)
+        except AttributeError:
+            raise CSEPCatalogException("Cannot return magnitude index without self.region.magnitudes")
+
+    def get_spatial_idx(self):
+        """ Return spatial index of region for the longitudes and latitudes in the catalog. """
+        try:
+            region_idx = self.region.get_index_of(self.get_longitudes(), self.get_latitudes())
+        except AttributeError:
+            raise CSEPCatalogException("Must have region information to compute region index.")
+        return region_idx
+
+    def get_event_ids(self):
+        return self.catalog['id']
+
+    def get_number_of_events(self):
+        """
+        Computes the number of events from a catalog by checking its length.
+
+        :returns: number of events in catalog, zero if catalog is None
+        """
+        if self.catalog is not None:
+            return self.catalog.shape[0]
+        else:
+            return 0
+
+    def get_epoch_times(self):
+        """
+        Returns the datetime of the event as the UTC epoch time (aka unix timestamp)
+        """
+        return self.catalog['origin_time']
+
+    def get_cumulative_number_of_events(self):
+        """
+        Returns the cumulative number of events in the catalog.
+
+        Primarily used for plotting purposes.
+
+        Returns:
+            numpy.array: numpy array of the cumulative number of events, empty array if catalog is empty.
+        """
+        return numpy.cumsum(numpy.ones(self.event_count))
+
+    def get_magnitudes(self):
+        """
+        Returns magnitudes of all events in catalog
+
+        """
+        return self.catalog['magnitude']
+
+    def get_datetimes(self):
+        """
+        Returns datetime objects from the timestamp representation in the catalog.
+
+        :returns: list of datetimes from events in catalog.
+        """
+        return list(map(epoch_time_to_utc_datetime, self.get_epoch_times()))
+
+    def get_latitudes(self):
+        """
+        Returns latitudes of all events in catalog
+        """
+        return self.catalog['latitude']
+
+    def get_longitudes(self):
+        """
+        Returns longitudes of all events in catalog
+
+        Returns:
+            (numpy.array): longitudes
+        """
+        return self.catalog['longitude']
+
+    def get_bbox(self):
+        """
+        Returns bounding box of all events in the catalog
+
+        Returns:
+            (numpy.array): [lon_min, lon_max, lat_min, lat_max]
+        """
+        bbox = numpy.array([numpy.min(self.catalog['longitude']),
+                            numpy.max(self.catalog['longitude']),
+                            numpy.min(self.catalog['latitude']),
+                            numpy.max(self.catalog['latitude'])])
+        return bbox
+
+    def get_depths(self):
+        """ Returns depths of all events in catalog """
+        return self.catalog['depth']
+
+    def filter(self, statements=None, in_place=True):
+        """
+        Filters the catalog based on statements.
This function takes about 60% of the run-time for processing UCERF3-ETAS
+        simulations, and likely all other simulations as well. Implementations should try to limit how often this function
+        will be called.
+
+        Args:
+            statements (str, iter): logical statements to evaluate, e.g., ['magnitude > 4.0', 'year >= 1995']
+            in_place (bool): if true, filter in place and return self; if false, return a new filtered instance
+
+        Returns:
+            self: instance of AbstractBaseCatalog, so that this function can be chained.
+
+        """
+        if not self.filters and statements is None:
+            raise CSEPCatalogException("Must provide filter statements to function or class to filter")
+
+        # programmatically assign operators
+        operators = {'>': operator.gt,
+                     '<': operator.lt,
+                     '>=': operator.ge,
+                     '<=': operator.le,
+                     '==': operator.eq}
+
+        # filter catalogs, implied logical and
+        if statements is None:
+            statements = self.filters
+
+        if isinstance(statements, str):
+            name = statements.split(' ')[0]
+            if name == 'datetime':
+                _, oper, date, time = statements.split(' ')
+                name = 'origin_time'
+                # can be a datetime.datetime object or datetime string; if we want to support filtering on metadata it
+                # can happen here, but we need to determine what to do if entries are not present because metadata does
+                # not need to be square
+                value = strptime_to_utc_epoch(' '.join([date, time]))
+                filtered = self.catalog[operators[oper](self.catalog[name], float(value))]
+            else:
+                name, oper, value = statements.split(' ')
+                filtered = self.catalog[operators[oper](self.catalog[name], float(value))]
+        elif isinstance(statements, (list, tuple)):
+            # slower but at the convenience of not having to call multiple times
+            filters = list(statements)
+            filtered = numpy.copy(self.catalog)
+            for filt in filters:
+                name = filt.split(' ')[0]
+                # create indexing array, start with all events
+                if name == 'datetime':
+                    _, oper, date, time = filt.split(' ')
+                    # we map the requested datetime to an epoch time so we act like the user requested origin_time
+                    name = 'origin_time'
+                    value = strptime_to_utc_epoch(' '.join([date, time]))
+                    filtered = filtered[operators[oper](filtered[name], float(value))]
+                else:
+                    name, oper, value = filt.split(' ')
+                    filtered = filtered[operators[oper](filtered[name], float(value))]
+        else:
+            raise ValueError('statements should be either a string or list or tuple of strings')
+        # can return new instance of class or original instance
+        self.filters = statements
+        if in_place:
+            self.catalog = filtered
+            return self
+        else:
+            # make and return new object
+            cls = self.__class__
+            inst = cls(data=filtered, catalog_id=self.catalog_id, format=self.format, name=self.name,
+                       region=self.region, filters=statements)
+            return inst
+
+    def filter_spatial(self, region=None, update_stats=False, in_place=True):
+        """
+        Removes events outside of the region. This takes some time and should be used sparingly. Typically for isolating a region
+        near the mainshock or inside a testing region. This should not be used to create gridded style data sets.
+
+        Args:
+            region: csep.utils.spatial.Region
+            update_stats (bool): if true will update catalog statistics
+            in_place (bool): if false, will create a new instance of the catalog preserving state
+
+        Returns:
+            self
+
+        """
+        if region is None and self.region is None:
+            raise CSEPCatalogException("A region must be provided to the function or bound to the catalog instance.")
+
+        # update the region to the new region
+        if region is not None:
+            self.region = region
+
+        mask = self.region.get_masked(self.get_longitudes(), self.get_latitudes())
+        # logical index uses opposite boolean values than masked arrays.
+
+        filtered = self.catalog[~mask]
+        if in_place:
+            self.catalog = filtered
+            if update_stats:
+                self.update_catalog_stats()
+            return self
+        else:
+            cls = self.__class__
+            inst = cls(data=filtered, catalog_id=self.catalog_id, format=self.format, name=self.name,
+                       region=self.region, compute_stats=update_stats)
+            return inst
+
+    def apply_mct(self, m_main, event_epoch, mc=2.5):
+        """
+        Applies time-dependent magnitude of completeness following a mainshock. Taken
+        from Eq. (15) from Helmstetter et al., 2006.
+
+        Args:
+            m_main (float): mainshock magnitude
+            event_epoch: epoch time in millis of event
+            mc (float): mag_completeness
+
+        Returns:
+            self
+        """
+
+        def compute_mct(t, m):
+            return m - 4.5 - 0.75 * numpy.log10(t)
+
+        # compute critical time for efficiency
+        t_crit_days = 10 ** -((mc - m_main + 4.5) / 0.75)
+        t_crit_millis = days_to_millis(t_crit_days)
+
+        times = self.get_epoch_times()
+        mws = self.get_magnitudes()
+
+        # catalog times are stored in milliseconds
+        t_crit_epoch = t_crit_millis + event_epoch
+
+        # another short-circuit, again assumes that catalogs are sorted in time
+        if times[0] > t_crit_epoch:
+            return self
+
+        # this is used to index the array, starting with accepting all events
+        filter = numpy.ones(self.event_count, dtype=bool)
+        for i, (mw, time) in enumerate(zip(mws, times)):
+            # we can break bc events are sorted in time
+            if time > t_crit_epoch:
+                break
+            if time < event_epoch:
+                continue
+            time_from_mshock_in_days = millis_to_days(time - event_epoch)
+            mct = compute_mct(time_from_mshock_in_days, m_main)
+            # ignore events with mw < mct
+            if mw < mct:
+                filter[i] = False
+
+        filtered = self.catalog[filter]
+        self.catalog = filtered
+        return self
+
+    def get_csep_format(self):
+        """
+        This method should be overwritten for catalog formats that do not adhere to the CSEP ZMAP catalog format. For
+        those that do, this method will return the catalog as is.
+
+        """
+        raise NotImplementedError('get_csep_format() not implemented.')
+
+    def update_catalog_stats(self):
+        """ Compute summary statistics of events in catalog """
+        # update min and max values
+        self.min_magnitude = min_or_none(self.get_magnitudes())
+        self.max_magnitude = max_or_none(self.get_magnitudes())
+        self.min_latitude = min_or_none(self.get_latitudes())
+        self.max_latitude = max_or_none(self.get_latitudes())
+        self.min_longitude = min_or_none(self.get_longitudes())
+        self.max_longitude = max_or_none(self.get_longitudes())
+        self.start_time = epoch_time_to_utc_datetime(min_or_none(self.get_epoch_times()))
+        self.end_time = epoch_time_to_utc_datetime(max_or_none(self.get_epoch_times()))
+
+    def spatial_counts(self):
+        """
+        Returns counts of events within discrete spatial region
+
+        We figure out the index of the polygons and create a map that relates the spatial coordinate in the
+        Cartesian grid with the polygon in region.
+
+        Returns:
+            ndarray containing the event count in each spatial bin
+        """
+        # make sure region is specified with catalog
+        if self.event_count == 0:
+            return numpy.zeros(self.region.num_nodes)
+
+        if self.region is None:
+            raise CSEPSchedulerException("Cannot create binned rates without region information.")
+
+        n_poly = self.region.num_nodes
+        event_counts = numpy.zeros(n_poly)
+        # this function could throw ValueError if points are outside of the region
+        idx = self.region.get_index_of(self.get_longitudes(), self.get_latitudes())
+        numpy.add.at(event_counts, idx, 1)
+        return event_counts
+
+    def spatial_event_probability(self):
+        # make sure region is specified with catalog
+        if self.event_count == 0:
+            return numpy.zeros(self.region.num_nodes)
+
+        if self.region is None:
+            raise CSEPSchedulerException("Cannot create binned probabilities without region information.")
+
+        n_poly = self.region.num_nodes
+        event_flag = numpy.zeros(n_poly)
+        idx = self.region.get_index_of(self.get_longitudes(), self.get_latitudes())
+        event_flag[idx] = 1
+        return event_flag
+
+    def magnitude_counts(self, mag_bins=None, tol=0.00001, retbins=False):
+        """ Computes the count of events within mag_bins
+
+        Args:
+            mag_bins: magnitude bins; uses csep.utils.constants.CSEP_MW_BINS as default
+            retbins (bool): if this is true, return the bins used
+
+        Returns:
+            numpy.ndarray: the counts of the events in each magnitude bin
+        """
+        # todo: keep track of events that are ignored
+        if mag_bins is None:
+            try:
+                # a forecast is a type of region, but region does not need a magnitude
+                mag_bins = self.region.magnitudes
+            except AttributeError:
+                # use default magnitude bins from csep
+                mag_bins = CSEP_MW_BINS
+                self.region.magnitudes = mag_bins
+                self.region.num_mag_bins = len(mag_bins)
+
+        out = numpy.zeros(len(mag_bins))
+        if self.event_count == 0:
+            if retbins:
+                return (mag_bins, out)
+            else:
+                return out
+        idx = bin1d_vec(self.get_magnitudes(), mag_bins, tol=tol, right_continuous=True)
+        numpy.add.at(out, idx, 1)
+        if retbins:
+            return (mag_bins, out)
+        else:
+            return out
+
+    def spatial_magnitude_counts(self, mag_bins=None, tol=0.00001):
+        """ Return counts of events in space-magnitude region.
+
+        We figure out the index of the polygons and create a map that relates the spatial coordinate in the
+        Cartesian grid with the polygon in region.
+
+        Args:
+            mag_bins (list, numpy.array): magnitude bins (optional), if empty tries to use magnitude bins associated with region
+            tol (float): tolerance for comparisons within magnitude bins
+
+        Returns:
+            output: unnormalized event count in each bin, 1d ndarray where index corresponds to midpoints
+
+        """
+        # make sure region is specified with catalog
+        if self.region is None:
+            raise CSEPCatalogException("Cannot create binned rates without region information.")
+
+        if self.region.magnitudes is None and mag_bins is None:
+            raise CSEPCatalogException("Region must have magnitudes or mag_bins must be defined to "
+                                       "compute space magnitude binning.")
+        # prefer user supplied mag_bins
+        if mag_bins is None:
+            mag_bins = self.region.magnitudes
+        # short-circuit if zero events in catalog...
return array of zeros + n_poly = self.region.num_nodes + event_counts = numpy.zeros((n_poly, len(mag_bins))) + if self.event_count != 0: + # this will throw ValueError if points outside range + spatial_idx = self.region.get_index_of(self.get_longitudes(), self.get_latitudes()) + # also throwing the same error + mag_idx = bin1d_vec(self.get_magnitudes(), mag_bins, tol=tol, right_continuous=True) + for idx in range(spatial_idx.shape[0]): + if mag_idx[idx] == -1: + raise ValueError("at least one magnitude value outside of the valid region.") + event_counts[(spatial_idx[idx], mag_idx[idx])] += 1 + return event_counts + + def length_in_seconds(self): + """ Returns catalog length in seconds assuming that the catalog is sorted by time. """ + dts = self.get_datetimes() + elapsed_time = (dts[-1] - dts[0]).total_seconds() + return elapsed_time + + def get_bvalue(self, mag_bins=None, return_error=True): + """ + Estimates the b-value of a catalog from Marzocchi and Sandri (2003). First, tries to use the magnitude bins + provided to the function. If those are not provided, tries the magnitude bins associated with the region. + If that fails, uses the default magnitude bins provided in constants. + + Args: + mag_bins (list or array_like): monotonically increasing set of magnitude bin edges + return_error (bool): returns errors + + Returns: + bval (float): b-value + err (float): std. err + """ + + if self.get_number_of_events() == 0: + return None + + # this might fail if magnitudes are not aligned + if mag_bins is None: + try: + mag_bins = self.region.magnitudes + except AttributeError: + mag_bins = CSEP_MW_BINS + mws = discretize(self.get_magnitudes(), mag_bins) + dmw = mag_bins[1] - mag_bins[0] + + # compute the p term from eq 3.10 in marzocchi and sandri [2003] + def p(): + top = dmw + bottom = numpy.mean(mws) - numpy.min(mws) + # this might happen if all mags are the same, or 1 event in catalog + if bottom == 0: + return None + return 1 + top / bottom + + bottom = numpy.log(10) * dmw + p = p() + if p is None: + return None + bval = 1.0 / bottom * numpy.log(p) + if return_error: + err = (1 - p) / (numpy.log(10) * dmw * numpy.sqrt(self.event_count * p)) + return (bval, err) + else: + return bval + + def b_positive(self): + """ Implements the b-positive indicator from Nicholas van der Elst """ + pass + + def plot(self, ax=None, show=False, extent=None, set_global=False, plot_args=None): + """ Plot catalog according to plate-carree projection + + Args: + ax (`matplotlib.pyplot.axes`): Previous axes onto which catalog can be drawn + show (bool): if true, show the figure. this call is blocking. 
+ extent (list): Force an extent [lon_min, lon_max, lat_min, lat_max] + plot_args (optional/dict): dictionary containing plotting arguments for making figures + + Returns: + axes: matplotlib.Axes.axes + """ + + # no mutable function arguments + plot_args_default = { + 'basemap': 'ESRI_terrain', + 'markersize': 2, + 'markercolor': 'red', + 'alpha': 0.3, + 'mag_scale': 7, + 'legend': True, + 'grid_labels': True, + 'legend_loc': 3, + 'figsize': (8, 8), + 'title': self.name, + 'mag_ticks': False + } + + # Plot the region border (if it exists) by default + try: + # This will throw error if catalog does not have region + _ = self.region.num_nodes + plot_args_default['region_border'] = True + except AttributeError: + pass + + plot_args = plot_args or {} + plot_args_default.update(plot_args) + + # this call requires internet connection and basemap + ax = plot_catalog(self, ax=ax,show=show, extent=extent, + set_global=set_global, plot_args=plot_args_default) + return ax
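+# Example (editor's illustration): chained filtering with ``filter`` and
+# ``filter_spatial`` as implemented above. The file, statements, and region are
+# placeholders.
+#
+#   import csep
+#   from csep.core import regions
+#
+#   cat = csep.load_catalog('catalog.csv')
+#   cat = cat.filter(['magnitude >= 4.0', 'datetime >= 2010-01-01 00:00:00.0'])
+#   cat = cat.filter_spatial(regions.california_relm_region())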
+ + + +
+[docs] +class CSEPCatalog(AbstractBaseCatalog): + """ + Standard catalog class for PyCSEP catalog operations. + + """ + + dtype = numpy.dtype([('id', 'S256'), + ('origin_time', '<i8'), + ('latitude', '<f8'), + ('longitude', '<f8'), + ('depth', '<f8'), + ('magnitude', '<f8')]) + +
+[docs]
+    def __init__(self, **kwargs):
+
+        """ Standard catalog format for CSEP catalogs. Primary event data are stored in a structured numpy array. Additional
+        metadata are available by the event_id in the catalog metadata information.
+
+        Args:
+            filename: location of catalog
+            data (numpy.ndarray or eventlist): catalog data
+            catalog_id: catalog id number (used for stochastic event set forecasts)
+            format: identification used for serialization
+            name: human readable name of catalog
+            region: spatial and magnitude region
+            compute_stats: whether statistics should be computed for the catalog
+            filters (str or list): filtering operations to apply to the catalog
+            metadata (dict): additional information for events
+            date_accessed (str): time string when catalog was accessed
+        """
+        super().__init__(**kwargs)
+ + +
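+# Example (editor's illustration): building a catalog directly from event tuples
+# ordered per the dtype above: (id, origin_time in epoch milliseconds, latitude,
+# longitude, depth, magnitude). Values are placeholders.
+#
+#   event = ('eq0001', 1293868800000, 34.05, -118.25, 7.5, 5.1)
+#   cat = CSEPCatalog(data=[event], name='toy catalog')
+#   print(cat.event_count)  # 1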
+[docs]
+    @classmethod
+    def load_ascii_catalogs(cls, filename, **kwargs):
+        """ Loads multiple catalogs in csep-ascii format.
+
+        This function can load multiple catalogs stored in a single file. It is typically called to
+        load a catalog-based forecast, but could also load a collection of catalogs stored in the same file.
+
+        Args:
+            filename (str): filepath or directory of catalog files
+            **kwargs (dict): passed to class constructor
+
+        Return:
+            yields CSEPCatalog class
+        """
+
+        def parse_filename(filename):
+            # this works for unix
+            basename = str(os.path.basename(filename.rstrip('/')).split('.')[0])
+            split_fname = basename.split('_')
+            name = split_fname[0]
+            start_time = strptime_to_utc_datetime(split_fname[1], format="%Y-%m-%dT%H-%M-%S-%f")
+            return (name, start_time)
+
+        def read_float(val):
+            """Returns val as float or None if unable"""
+            try:
+                val = float(val)
+            except:
+                val = None
+            return val
+
+        def is_header_line(line):
+            if line[0].lower() == 'lon':
+                return True
+            else:
+                return False
+
+        def read_catalog_line(line):
+            # convert to correct types
+            lon = read_float(line[0])
+            lat = read_float(line[1])
+            magnitude = read_float(line[2])
+            # maybe fractional seconds are not included
+            origin_time = line[3]
+            if origin_time:
+                try:
+                    origin_time = strptime_to_utc_epoch(line[3], format='%Y-%m-%dT%H:%M:%S.%f')
+                except ValueError:
+                    origin_time = strptime_to_utc_epoch(line[3], format='%Y-%m-%dT%H:%M:%S')
+            depth = read_float(line[4])
+            catalog_id = int(line[5])
+            event_id = line[6]
+            # temporary event
+            temp_event = (event_id, origin_time, lat, lon, depth, magnitude)
+            return temp_event, catalog_id
+
+        # overwrite filename, if user specifies
+        try:
+            name_from_file, start_time = parse_filename(filename)
+            kwargs.setdefault('name', name_from_file)
+        except:
+            pass
+
+        # handle all catalogs in single file
+        if os.path.isfile(filename):
+            with open(filename, 'r', newline='') as input_file:
+                catalog_reader = csv.reader(input_file, delimiter=',')
+                # csv treats everything as a string; convert to correct types
+                events = []
+                # all catalogs should start at zero
+                prev_id = None
+                for line in catalog_reader:
+                    # skip header line on first read if included in file
+                    if prev_id is None:
+                        if is_header_line(line):
+                            continue
+                    # read line and return catalog id
+                    temp_event, catalog_id = read_catalog_line(line)
+                    empty = False
+                    # OK if event_id is empty
+                    if all([val in (None, '') for val in temp_event[1:]]):
+                        empty = True
+                    # first event is when prev_id is none, catalog_id should always start at zero
+                    if prev_id is None:
+                        prev_id = 0
+                        # if the first catalog doesn't start at zero
+                        if catalog_id != prev_id:
+                            if not empty:
+                                events = [temp_event]
+                            else:
+                                events = []
+                            for id in range(catalog_id):
+                                yield cls(data=[], catalog_id=id, **kwargs)
+                            prev_id = catalog_id
+                            continue
+                    # accumulate event if catalog_id is the same as previous event
+                    if catalog_id == prev_id:
+                        if not all([val in (None, '') for val in temp_event]):
+                            events.append(temp_event)
+                        prev_id = catalog_id
+                    # create and yield class if the events are from different catalogs
+                    elif catalog_id == prev_id + 1:
+                        yield cls(data=events, catalog_id=prev_id, **kwargs)
+                        # add event to new event list
+                        if not empty:
+                            events = [temp_event]
+                        else:
+                            events = []
+                        prev_id = catalog_id
+                    # this implies there are empty catalogs, because they are not listed in the ascii file
+                    elif catalog_id > prev_id + 1:
+                        yield cls(data=events, catalog_id=prev_id, **kwargs)
+                        # if prev_id = 0 and catalog_id = 2, then we skipped one catalog.
thus, we skip catalog_id - prev_id - 1 catalogs + num_empty_catalogs = catalog_id - prev_id - 1 + # first yield empty catalog classes + for id in range(num_empty_catalogs): + yield cls(data=[], catalog_id=catalog_id - num_empty_catalogs + id, **kwargs) + prev_id = catalog_id + # add event to new event list + if not empty: + events = [temp_event] + else: + events = [] + else: + raise ValueError( + "catalog_id should be monotonically increasing and events should be ordered by catalog_id") + # yield final catalog, note: since this is just loading catalogs, it has no idea how many should be there + cat = cls(data=events, catalog_id=prev_id, **kwargs) + yield cat + + elif os.path.isdir(filename): + raise NotImplementedError("reading from directory or batched files not implemented yet!")
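+# Example (editor's illustration): iterating the generator returned by
+# ``load_ascii_catalogs`` for a stochastic event set stored in one csep-ascii
+# file; the filename is a placeholder.
+#
+#   for cat in CSEPCatalog.load_ascii_catalogs('eventset_2020-01-01T00-00-00-0.csv'):
+#       print(cat.catalog_id, cat.event_count)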
+ + +
+[docs] + @classmethod + def load_catalog(cls, filename, loader=csep_ascii, **kwargs): + """ Loads catalog stored in CSEP1 ascii format """ + catalog_id = None + try: + event_list, catalog_id = loader(filename, return_catalog_id=True) + except TypeError: + event_list = loader(filename) + new_class = cls(data=event_list, catalog_id=catalog_id, **kwargs) + return new_class
+ + +
+[docs] + def get_csep_format(self): + """ Returns CSEP format for a catalog + + This catalog is already in CSEP format so it will return self. + + Returns: + self + + """ + return self
+
+ + + +
+[docs] +class UCERF3Catalog(AbstractBaseCatalog): + """ + Catalog written from UCERF3-ETAS binary format + + :var header_dtype: numpy.dtype description of synthetic catalog header. + :var event_dtype: numpy.dtype description of ucerf3 catalog format + """ + # binary format of UCERF3 catalog + header_dtype = numpy.dtype([("file_version", ">i2"), ("catalog_size", ">i4")]) + +
+[docs] + def __init__(self, **kwargs): + # initialize parent constructor + super().__init__(**kwargs)
+ + + @classmethod + def load_catalogs(cls, filename, **kwargs): + """ + Loads catalogs based on the merged binary file format of UCERF3. File format is described at + https://scec.usc.edu/scecpedia/CSEP2_Storing_Stochastic_Event_Sets#Introduction. + + There is also the load_catalog method that will work on the individual binary output of the UCERF3-ETAS + model. + + Args: + filename (str): filename of binary stochastic event set + kwargs (dict): keyword arguments to pass to class constructor + Returns: + list of catalogs of type UCERF3Catalog + """ + + # handle uncompressed binary file + if get_file_extension(filename) == 'bin': + with open(filename, 'rb') as catalog_file: + # parse 4byte header from merged file + number_simulations_in_set = numpy.fromfile(catalog_file, dtype='>i4', count=1)[0] + # load all catalogs from merged file + for catalog_id in range(number_simulations_in_set): + version = numpy.fromfile(catalog_file, dtype=">i2", count=1)[0] + dtype = cls._get_header_dtype(version) + header = numpy.fromfile(catalog_file, dtype=dtype, count=1) + catalog_size = header['catalog_size'][0] + # read catalog + catalog = numpy.fromfile(catalog_file, dtype=cls._get_catalog_dtype(version), count=catalog_size) + # add column that stores catalog_id in case we want to store in database + u3_catalog = cls(filename=filename, data=catalog, catalog_id=catalog_id, **kwargs) + u3_catalog.dtype = dtype + yield u3_catalog + + # handle compressed file by decompressing inline + elif get_file_extension(filename) == 'gz': + with gzip.open(filename, 'rb') as catalog_file: + number_simulations_in_set = numpy.frombuffer(catalog_file.read(4), dtype='>i4')[0] + for catalog_id in range(number_simulations_in_set): + version = numpy.frombuffer(catalog_file.read(2), dtype='>i2')[0] + dtype = cls._get_header_dtype(version) + header_bytes = dtype.itemsize + header = numpy.frombuffer(catalog_file.read(header_bytes), dtype=dtype) + catalog_size = header['catalog_size'][0] + catalog_dtype = cls._get_catalog_dtype(version) + event_bytes = catalog_dtype.itemsize + catalog = numpy.frombuffer( + catalog_file.read(event_bytes*catalog_size), + dtype=catalog_dtype, + count=catalog_size + ) + u3_catalog = cls(filename=filename, data=catalog, catalog_id=catalog_id, **kwargs) + u3_catalog.dtype = dtype + yield u3_catalog + + @classmethod + def load_catalog(cls, filename, loader=None, **kwargs): + version = numpy.fromfile(filename, dtype=">i2", count=1)[0] + header = numpy.fromfile(filename, dtype=cls._get_header_dtype(version), count=1) + catalog_size = header['catalog_size'][0] + # assign dtype to make sure that its bound to the instance of the class + dtype = cls._get_catalog_dtype(version) + event_list = numpy.fromfile(filename, dtype=cls._get_catalog_dtype(version), count=catalog_size) + new_class = cls(filename=filename, data=event_list, **kwargs) + new_class.dtype = dtype + return new_class + + def get_csep_format(self): + n = len(self.catalog) + # allocate array for csep catalog + csep_catalog = numpy.zeros(n, dtype=CSEPCatalog.dtype) + for i, event in enumerate(self.catalog): + csep_catalog[i] = (i, + event['origin_time'], + event['latitude'], + event['longitude'], + event['depth'], + event['magnitude']) + return CSEPCatalog(data=csep_catalog, catalog_id=self.catalog_id, filename=self.filename, format='csep', + name=self.name, region=self.region, compute_stats=self.compute_stats, filters=self.filters, + metadata=self.metadata, date_accessed=self.date_accessed) + + @staticmethod + def _get_catalog_dtype(version): + """ + 
Get catalog dtype from version number + + Args: + version: + + Returns: + + """ + + if version == 1: + dtype = numpy.dtype([("rupture_id", ">i4"), + ("parent_id", ">i4"), + ("generation", ">i2"), + ("origin_time", ">i8"), + ("latitude", ">f8"), + ("longitude", ">f8"), + ("depth", ">f8"), + ("magnitude", ">f8"), + ("dist_to_parent", ">f8"), + ("erf_index", ">i4"), + ("fss_index", ">i4"), + ("grid_node_index", ">i4")]) + + elif version >= 2: + dtype = numpy.dtype([("rupture_id", ">i4"), + ("parent_id", ">i4"), + ("generation", ">i2"), + ("origin_time", ">i8"), + ("latitude", ">f8"), + ("longitude", ">f8"), + ("depth", ">f8"), + ("magnitude", ">f8"), + ("dist_to_parent", ">f8"), + ("erf_index", ">i4"), + ("fss_index", ">i4"), + ("grid_node_index", ">i4"), + ("etas_k", ">f8")]) + else: + raise ValueError("unknown catalog version, cannot read catalog.") + return dtype + + @staticmethod + def _get_header_dtype(version): + + if version == 1 or version == 2: + dtype = numpy.dtype([("catalog_size", ">i4")]) + + elif version >= 3: + dtype = numpy.dtype([("num_orignal_ruptures", ">i4"), + ("seed", ">i8"), + ("index", ">i4"), + ("hist_rupt_start_id", ">i4"), + ("hist_rupt_end_id", ">i4"), + ("trig_rupt_start_id", ">i4"), + ("trig_rupt_end_id", ">i4"), + ("sim_start_epoch", ">i8"), + ("sim_end_epoch", ">i8"), + ("num_spont", ">i4"), + ("num_supraseis", ">i4"), + ("min_mag", ">f8"), + ("max_mag", ">f8"), + ("catalog_size", ">i4")]) + else: + raise ValueError("unknown catalog version, cannot parse catalog header.") + return dtype
+ + +# helps to parse time-strings +def _none_or_datetime(value): + if isinstance(value, datetime.datetime): + return value + if value is not None: + format = parse_string_format(value) + value = strptime_to_utc_datetime(value, format=format) + return value +
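+
+# Usage sketch (not part of the original module; the filename is hypothetical):
+# stream catalogs from a merged UCERF3-ETAS binary file. load_catalogs is a
+# generator, so catalogs are parsed lazily instead of being held in memory at once.
+#
+#   from csep.core.catalogs import UCERF3Catalog
+#   for u3_catalog in UCERF3Catalog.load_catalogs('results_complete.bin'):
+#       csep_catalog = u3_catalog.get_csep_format()
+#       print(csep_catalog.catalog_id, csep_catalog.event_count)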
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/csep/core/forecasts.html b/_modules/csep/core/forecasts.html new file mode 100644 index 00000000..e9dc78e7 --- /dev/null +++ b/_modules/csep/core/forecasts.html @@ -0,0 +1,973 @@ + + + + + + + + csep.core.forecasts — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for csep.core.forecasts

+import itertools
+import time
+import os
+import datetime
+
+# third-party imports
+import numpy
+
+from csep.utils.log import LoggingMixin
+from csep.core.regions import CartesianGrid2D, create_space_magnitude_region
+from csep.models import Polygon
+from csep.utils.calc import bin1d_vec
+from csep.utils.time_utils import decimal_year, datetime_to_utc_epoch
+from csep.core.catalogs import AbstractBaseCatalog
+from csep.utils.constants import SECONDS_PER_ASTRONOMICAL_YEAR
+from csep.utils.plots import plot_spatial_dataset
+
+
+# idea: should this be a SpatialDataSet and the class below SpaceMagnitudeDataSet, bc of functions like
+#       get_latitudes(), and get_longitudes()
+#       or this class should be refactored as to use the underlying region
+
+# idea: this needs to handle non-cartesian regions, so maybe (lons, lats) should be a single variable like locations
+
+# note: these are specific to 2D data sets and some minor refactoring needs to happen here.
+
+# todo: add mask to dataset that has the shape of data. consider using numpy.ma module to hold these values
+class GriddedDataSet(LoggingMixin):
+    """Represents space-magnitude discretized seismicity implementation.
+
+    Map-based and discrete forecasts, such as those provided by time-independent models, can be stored using this format.
+    This object provides some convenience routines and functionality around the numpy.ndarray primitive that
+    actually stores the space-time magnitude forecast.
+
+    Earthquake forecasts based on discrete space-magnitude regions are read into this format by default. This format can
+    support multiple types of region including 2d and 3d cartesian meshes. The appropriate region must be provided. By default
+    the magnitude is always treated as the 'fast' dimension of the numpy.array.
+
+    Attributes:
+        data (numpy.ndarray): 2d numpy.ndarray containing the spatial and magnitude bins with magnitudes being the fast dimension.
+        region: csep.core.regions.CartesianGrid2D class containing the mapping of the data points into the region.
+        mags: list or numpy.ndarray class containing the lower (inclusive) magnitude values from the discretized
+              magnitudes. The magnitude bins should be regularly spaced.
+    """
+
+    def __init__(self, data=None, region=None, name=None):
+        """ Constructs GriddedSeismicity class.
+
+        Args:
+            data (numpy.ndarray): 2d array of rates with shape (num_spatial_bins, num_magnitude_bins)
+            region: spatial region instance (e.g., csep.core.regions.CartesianGrid2D)
+            name (str): human-readable name of the data set
+        """
+        super().__init__()
+
+        # note: do not access this member through _data, always use .data.
+        self._data = data
+        self.region = region
+        self.name = name
+
+        # this value lets us scale the forecast without much additional memory overhead and keeps the calls
+        # to scale() idempotent
+        self._scale = 1
+
+    @property
+    def data(self):
+        """ Contains the spatio-magnitude forecast as 2d numpy.ndarray.
+
+        The dimensions of this array are (num_spatial_bins, num_magnitude_bins). The spatial bins can be indexed through
+        a look-up table that is part of the region class. The magnitude bins are stored directly as an attribute of the
+        class.
+        """
+        return self._data * self._scale
+
+    @property
+    def event_count(self):
+        """ Returns a sum of the forecast data """
+        return self.sum()
+
+    def sum(self):
+        """ Sums over all of the forecast data"""
+        return numpy.sum(self.data)
+
+    def spatial_counts(self, cartesian=False):
+        """ Returns the counts (or rates) of earthquakes within each spatial bin.
+
+        Args:
+            cartesian (bool): if true, will return a 2d grid representing the bounding box of the forecast
+
+        Returns:
+            ndarray containing the count in each bin
+
+        """
+        if cartesian:
+            return self.region.get_cartesian(self.data)
+        else:
+            return self.data
+
+    def get_latitudes(self):
+        """ Returns the latitude of the lower left node of the spatial grid"""
+        return self.region.origins()[:,1]
+
+    def get_longitudes(self):
+        """ Returns the lognitude of the lower left node of the spatial grid """
+        return self.region.origins()[:,0]
+
+    def get_valid_midpoints(self):
+        """ Returns the midpoints of the valid testing region
+
+            Returns:
+                lons (numpy.array), lats (numpy.array): two numpy arrays containing the valid midpoints from the forecast
+        """
+        latitudes = []
+        longitudes = []
+        for idx in range(self.region.num_nodes):
+            if self.region.bbox_max[idx] == 0:
+                latitudes.append(self.region.midpoints()[idx,1])
+                longitudes.append(self.region.midpoints()[idx,0])
+        return numpy.array(longitudes), numpy.array(latitudes)
+
+    @property
+    def polygons(self):
+        return self.region.polygons
+
+    def get_index_of(self, lons, lats):
+        """ Returns the index of lons, lats in spatial region
+
+        See csep.utils.spatial.CartesianGrid2D for more details.
+
+        Args:
+            lons: ndarray-like
+            lats: ndarray-like
+
+        Returns:
+            idx: ndarray-like
+
+        Raises:
+            ValueError: if lons or lats are outside of the region.
+        """
+        return self.region.get_index_of(lons, lats)
+
+    def scale(self, val):
+        """Scales forecast by floating point value.
+
+        Args:
+            val: int, float, or ndarray. This value multiplicatively scales the values in the forecast. Use a value of
+                 1 to recover the original values of the forecast.
+
+        Raises:
+            ValueError: must be int, float, or ndarray
+        """
+        self._scale = val
+        return self
+
+    def to_dict(self):
+        return
+
+    @classmethod
+    def from_dict(cls, adict):
+        raise NotImplementedError()
+
+
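+
+# Usage sketch (not part of the original module; toy rates are made up): the scale
+# factor is applied lazily through the ``data`` property, so repeated calls to
+# scale() stay idempotent rather than compounding.
+#
+#   ds = GriddedDataSet(data=numpy.array([1.0, 3.0]), name='toy')
+#   ds.event_count              # 4.0
+#   ds.scale(0.5).event_count   # 2.0 (= 4.0 * 0.5, not cumulative)
+#   ds.scale(1).event_count     # 4.0, original rates recovered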
+class MarkedGriddedDataSet(GriddedDataSet):
+    """
+    Represents a gridded forecast in CSEP. The data must be stored as a 2D numpy array where magnitude is the fast dimension.
+    The shape of this array is (n_space_bins, n_mag_bins) and can be large.
+
+    """
+
+    def __init__(self, magnitudes=None, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        self.region = create_space_magnitude_region(self.region, magnitudes)
+
+    @property
+    def magnitudes(self):
+        return self.region.magnitudes
+
+    @property
+    def min_magnitude(self):
+        """ Returns the lowest magnitude bin edge """
+        return numpy.min(self.magnitudes)
+
+    @property
+    def num_mag_bins(self):
+        return len(self.magnitudes)
+
+    def get_magnitudes(self):
+        """ Returns the left edge of the magnitude bins. """
+        return self.magnitudes
+
+    @property
+    def num_nodes(self):
+        return self.region.num_nodes
+
+    def spatial_counts(self, cartesian=False):
+        """
+        Integrates over magnitudes to return the spatial version of the forecast.
+
+        Args:
+            cartesian (bool): if true, will return a 2d grid representing the bounding box of the forecast
+
+        Returns:
+            ndarray containing the count in each bin
+
+        """
+        if cartesian:
+            return self.region.get_cartesian(numpy.sum(self.data, axis=1))
+        else:
+            return numpy.sum(self.data, axis=1)
+
+    def magnitude_counts(self):
+        """ Returns counts of events in magnitude bins """
+        return numpy.sum(self.data, axis=0)
+
+    def get_magnitude_index(self, mags, tol=0.00001):
+        """ Returns the indices into the magnitude bins of selected magnitudes
+
+        Note: the right-most bin is treated as extending to infinity.
+
+        Args:
+            mags (array-like): list of magnitudes
+
+        Returns:
+            idm (array-like): indices corresponding to mags
+
+        Raises:
+            ValueError
+        """
+        idm = bin1d_vec(mags, self.magnitudes, tol=tol, right_continuous=True)
+        if numpy.any(idm == -1):
+            raise ValueError("mags outside the range of forecast magnitudes.")
+        return idm
+
+
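+
+# Sketch of the right-continuous binning used by get_magnitude_index above
+# (bin1d_vec is imported at the top of this module; the edges are hypothetical):
+#
+#   mag_edges = numpy.arange(4.95, 8.96, 0.1)   # left bin edges
+#   bin1d_vec([5.0, 9.6], mag_edges, right_continuous=True)
+#   # 5.0 falls in the first bin; 9.6 falls in the last bin because the
+#   # right-most bin extends to infinity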
+
+[docs] +class GriddedForecast(MarkedGriddedDataSet): + """ Class to represent grid-based forecasts """ + +
+[docs] + def __init__(self, start_time=None, end_time=None, *args, **kwargs): + """ + Constructor for GriddedForecast class + + Args: + start_time (datetime.datetime): + end_time (datetime.datetime): + """ + super().__init__(*args, **kwargs) + self.start_time = start_time + self.end_time = end_time
+ + +
+[docs]
+    def scale_to_test_date(self, test_datetime):
+        """ Scales forecast data by the fraction of the forecast period elapsed on the test date.
+
+        Uses the concept of decimal years to keep track of leap years. See csep.utils.time_utils.decimal_year for
+        details on the implementation. If the datetime is before the start_date or after the end_date, the forecast
+        is returned unscaled (i.e., scaled by unity).
+
+        Both datetime objects must either be timezone aware (UTC) or both naive. The datetime module raises a
+        TypeError on comparison if this condition is not met.
+
+        Args:
+            test_datetime (datetime.datetime): date to scale the forecast to
+        """
+
+        # Note: this will throw a TypeError if datetimes are not either both time aware or both naive.
+        if test_datetime >= self.end_time:
+            return self
+
+        if test_datetime <= self.start_time:
+            return self
+
+        fore_dur = decimal_year(self.end_time) - decimal_year(self.start_time)
+
+        # we are adding one day, because tests are considered to occur at the end of the day specified by test_datetime.
+        test_date_dec = decimal_year(test_datetime + datetime.timedelta(1))
+        fore_frac = (test_date_dec - decimal_year(self.start_time)) / fore_dur
+        res = self.scale(fore_frac)
+        return res
+ + +
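+
+# Sketch of the scaling fraction computed by scale_to_test_date (dates are
+# hypothetical; decimal_year is imported at the top of this module; one day is
+# added because tests are evaluated at the end of the test date):
+#
+#   start, end = datetime.datetime(2006, 11, 12), datetime.datetime(2011, 11, 12)
+#   test = datetime.datetime(2008, 11, 12)
+#   frac = (decimal_year(test + datetime.timedelta(1)) - decimal_year(start)) / \
+#          (decimal_year(end) - decimal_year(start))
+#   # forecast.scale_to_test_date(test) multiplies the forecast rates by frac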
+[docs]
+    def target_event_rates(self, target_catalog, scale=False):
+        """ Generates the target event rates given a target catalog.
+
+        The catalog should already be scaled to the same length as the forecast time horizon. Explicit checks for these
+        cases are not conducted in this function.
+
+        If scale=True then the target event rates will be scaled down to the rates for one day. This choice of time
+        can be made without a loss of generality. Please see Rhoades, D. A., D. Schorlemmer, M. C. Gerstenberger,
+        A. Christophersen, J. D. Zechar, and M. Imoto (2011). Efficient testing of earthquake forecasting models,
+        Acta Geophys 59 728-747.
+
+        Args:
+            target_catalog (csep.core.data.AbstractBaseCatalog): catalog containing the target events
+            scale (bool): if true, rates will be scaled to one day.
+
+        Returns:
+            out (tuple): (target_event_rates, n_fore). target_event_rates are the forecast rates evaluated at the
+                space-magnitude bins of the observed events, and n_fore is the total expected event count
+                (scaled, if requested).
+        """
+        if not isinstance(target_catalog, AbstractBaseCatalog):
+            raise TypeError("target_catalog must be csep.core.data.AbstractBaseCatalog class.")
+
+        if scale:
+            # first get a copy so we don't contaminate the rates of the forecast; this can be quite large for global files.
+            # if we run into memory problems, we can implement a sparse form of the forecast.
+            data = numpy.copy(self.data)
+            # straightforward implementation, relies on correct start and end time
+            elapsed_days = (self.end_time - self.start_time).days
+            # scale the data down to days
+            data = data / elapsed_days
+        else:
+            # just pull reference to stored data
+            data = self.data
+
+        # get longitudes and latitudes of target events
+        lons = target_catalog.get_longitudes()
+        lats = target_catalog.get_latitudes()
+        mags = target_catalog.get_magnitudes()
+
+        # this array does not keep track of any location anymore. however, it can be computed using the data again.
+        rates = self.get_rates(lons, lats, mags, data=data)
+        # we return the sum of data, because data might be scaled within this function
+        return rates, numpy.sum(data)
+ + +
+[docs]
+    def get_rates(self, lons, lats, mags, data=None, ret_inds=False):
+        """ Returns the rates associated with longitudes, latitudes, and magnitudes.
+
+        Args:
+            lons: longitudes of interest
+            lats: latitudes of interest
+            mags: magnitudes of interest
+            data: optional, if not None then use these rates instead of the rates stored with the forecast
+            ret_inds (bool): if true, also return the (spatial, magnitude) bin indices
+
+        Returns:
+            rates (float or ndarray)
+
+        Raises:
+            RuntimeError: lons, lats, and mags must be the same length
+        """
+        if len(lons) != len(lats) or len(lats) != len(mags):
+            raise RuntimeError("lons, lats, and mags must have the same length.")
+        # get index of lon and lat values; if lats, lons not in region raise value error
+        idx = self.get_index_of(lons, lats)
+        # get index of magnitude bins; if mags not in region raise value error
+        idm = self.get_magnitude_index(mags)
+        # retrieve rates from internal data structure
+        if data is None:
+            rates = self.data[idx,idm]
+        else:
+            rates = data[idx,idm]
+        if ret_inds:
+            return rates, (idx, idm)
+        else:
+            return rates
+ + +
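+
+# Usage sketch (coordinates and magnitude are hypothetical; they must fall inside
+# the forecast's space-magnitude region, otherwise a ValueError is raised):
+#
+#   rates, (idx, idm) = forecast.get_rates([-116.0], [34.5], [5.05], ret_inds=True)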
+[docs]
+    @classmethod
+    def from_custom(cls, func, func_args=(), **kwargs):
+        """ Creates MarkedGriddedDataSet class from custom parsing function.
+
+        Args:
+            func (callable): function will be called as func(*func_args).
+            func_args (tuple): arguments to pass to func
+            **kwargs: keyword arguments to pass to the GriddedForecast class constructor.
+
+        Returns:
+            :class:`csep.core.forecasts.GriddedForecast`: forecast object
+
+        Note:
+            The loader function `func` needs to return a tuple that contains (data, region, magnitudes). data is a
+            :class:`numpy.ndarray`, region is a :class:`CartesianGrid2D<csep.core.regions.CartesianGrid2D>`, and
+            magnitudes are a :class:`numpy.ndarray` consisting of the magnitude bin edges. See the function
+            :meth:`load_ascii<csep.core.forecasts.GriddedForecast.load_ascii>` for an example.
+
+        """
+        data, region, magnitudes = func(*func_args)
+        # try to ensure that data and region are compatible with one another, but we can only rely on heuristics
+        return cls(data=data, region=region, magnitudes=magnitudes, **kwargs)
+ + +
+[docs]
+    @classmethod
+    def load_ascii(cls, ascii_fname, start_date=None, end_date=None, name=None, swap_latlon=False):
+        """ Reads Forecast file from CSEP1 ascii format.
+
+        This is the ASCII format used by the CSEP1 testing centers. The file does not contain headers; the columns are:
+
+        Lon_0, Lon_1, Lat_0, Lat_1, z_0, z_1, Mag_0, Mag_1, Rate, Flag
+
+        For the purposes of defining region objects and magnitude bins use the Lat_0 and Lon_0 values along with Mag_0.
+        We can assume that the magnitude bins are regularly spaced to allow us to compute deltas.
+
+        The file is row-ordered so that magnitude bins are fastest then followed by space.
+
+        Args:
+            ascii_fname: file name of csep forecast in .dat format
+            start_date (datetime.datetime): start date of the forecast
+            end_date (datetime.datetime): end date of the forecast
+            name (str): name of the forecast; defaults to the file name without extension
+            swap_latlon (bool): if true, read forecast spatial cells as lat_0, lat_1, lon_0, lon_1
+        """
+        # Load data
+        data = numpy.loadtxt(ascii_fname)
+        # this is very ugly, but since unique returns a sorted list, we want to get the index, sort that and then return
+        # from the original array. same for magnitudes below.
+        all_polys = data[:, :4]
+        all_poly_mask = data[:, -1]
+        sorted_idx = numpy.sort(numpy.unique(all_polys, return_index=True, axis=0)[1], kind='stable')
+        unique_poly = all_polys[sorted_idx]
+        # gives the flag for a spatial cell in the order it was presented in the file
+        poly_mask = all_poly_mask[sorted_idx]
+        # create magnitude bins using Mag_0, ignoring Mag_1 because the bins are regular except for the last bin.
+        # we don't want a binary search for this
+        all_mws = data[:, -4]
+        sorted_idx = numpy.sort(numpy.unique(all_mws, return_index=True)[1], kind='stable')
+        mws = all_mws[sorted_idx]
+        # csep1 stores the lat lons as min values and not (x,y) tuples
+        if swap_latlon:
+            bboxes = [((i[2], i[0]), (i[3], i[0]), (i[3], i[1]), (i[2], i[1])) for i in unique_poly]
+        else:
+            bboxes = [((i[0], i[2]), (i[0], i[3]), (i[1], i[3]), (i[1], i[2])) for i in unique_poly]
+        # the spatial cells are arranged fast in latitude, so this only works for the specific csep1 file format
+        dh = float(unique_poly[0, 3] - unique_poly[0, 2])
+        # create CartesianGrid2D of points
+        region = CartesianGrid2D([Polygon(bbox) for bbox in bboxes], dh, mask=poly_mask)
+        # get dims of 2d np.array
+        n_mag_bins = len(mws)
+        n_poly = region.num_nodes
+        # reshape rates into correct 2d format
+        rates = data[:,-2].reshape(n_poly, n_mag_bins)
+        # create / return class
+        if name is None:
+            name = os.path.basename(ascii_fname[:-4])
+        gds = cls(start_date, end_date, magnitudes=mws, name=name, region=region, data=rates)
+        return gds
+ + +
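+
+# Sketch of the CSEP1 ASCII layout parsed by load_ascii (values and filename are
+# hypothetical). Each row is one space-magnitude cell; magnitude is the
+# fastest-varying column:
+#
+#   # lon_0   lon_1   lat_0  lat_1  z_0  z_1  mag_0  mag_1  rate  flag
+#   # -118.0  -117.9  33.0   33.1   0    30   4.95   5.05   0.10  1
+#   # -118.0  -117.9  33.0   33.1   0    30   5.05   5.15   0.05  1
+#
+#   fc = GriddedForecast.load_ascii('forecast.dat', name='example')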
+[docs] + def plot(self, ax=None, show=False, log=True, extent=None, set_global=False, plot_args=None): + """ Plot gridded forecast according to plate-carree projection + + Args: + show (bool): if true, show the figure. this call is blocking. + plot_args (optional/dict): dictionary containing plotting arguments for making figures + + Returns: + axes: matplotlib.Axes.axes + """ + # no mutable function arguments + if self.start_time is None or self.end_time is None: + time = 'forecast period' + else: + start = decimal_year(self.start_time) + end = decimal_year(self.end_time) + time = f'{round(end-start,3)} years' + + plot_args = plot_args or {} + plot_args.setdefault('figsize', (10, 10)) + plot_args.setdefault('title', self.name) + + # this call requires internet connection and basemap + if log: + plot_args.setdefault('clabel', f'log10 M{self.min_magnitude}+ rate per cell per {time}') + with numpy.errstate(divide='ignore'): + ax = plot_spatial_dataset(numpy.log10(self.spatial_counts(cartesian=True)), self.region, ax=ax, + show=show, extent=extent, set_global=set_global, plot_args=plot_args) + else: + plot_args.setdefault('clabel', f'M{self.min_magnitude}+ rate per cell per {time}') + ax = plot_spatial_dataset(self.spatial_counts(cartesian=True), self.region, ax=ax,show=show, extent=extent, + set_global=set_global, plot_args=plot_args) + return ax
+
+ + + +
+[docs] +class CatalogForecast(LoggingMixin): + """ Catalog based forecast defined as a family of stochastic event sets. """ + +
+[docs]
+    def __init__(self, filename=None, catalogs=None, name=None,
+                 filter_spatial=False, filters=None, apply_mct=False,
+                 region=None, expected_rates=None, start_time=None, end_time=None,
+                 n_cat=None, event=None, loader=None, catalog_type='ascii',
+                 catalog_format='native', store=True, apply_filters=False):
+
+
+        """
+        The region information can be provided alongside the data, if they are stored in one of the supported file formats.
+        It is assumed that the region for each catalog is identical. If the regions are not provided with the data files,
+        they must be provided explicitly. The California testing region can be loaded using :func:`csep.core.regions.california_relm_region`.
+
+        There are a few different ways this class can be constructed: catalogs can be supplied directly, loaded
+        lazily through a loader function, or read from a file on disk.
+
+        The region is not required to load a forecast or to perform basic operations on a forecast, such as counting events.
+        Any binning of events in space or magnitude will require a spatial region or magnitude bin definitions, respectively.
+
+        Args:
+            filename (str): Path to the file or directory containing the forecast.
+            catalogs: iterable of :class:`csep.core.catalogs.AbstractBaseCatalog`
+            filter_spatial (bool): if true, will filter to area defined in space region
+            apply_mct (bool): this should be provided if a time-dependent magnitude completeness model should be
+                applied to the forecast
+            filters (iterable): list of catalog filter strings, applied while iterating if apply_filters is true
+            region: :class:`csep.core.spatial.CartesianGrid2D` including magnitude bins
+            start_time (datetime.datetime): start time of the forecast
+            end_time (datetime.datetime): end time of the forecast
+            name (str): name of the forecast, will be used for defaults in plotting and other places
+            n_cat (int): number of catalogs in the forecast
+            event (:class:`csep.models.Event`): if the forecast is associated with a particular event
+            store (bool): if true, will store catalogs on object in memory;
this should only be made false if working + with very large forecast files that cannot be stored in memory + apply_filters (bool): if true, filters will be applied automatically to the catalogs as the forecast + is iterated through + """ + + super().__init__() + + # used for labeling plots, filenames, etc, should be human readable + self.name = name + + # path to forecast location + self.filename = filename + + # should be iterable + self.catalogs = catalogs or [] + self._catalogs = [] + + # should be a generator function + self.loader = loader + + # used if the forecast is associated with a particular event + self.event = event + + # if false, no filters will be applied when iterating though forecast + self.apply_filters = apply_filters + + # these can be used to filter catalogs to a desired experiment region + self.filters = filters or [] + + self.filter_spatial = filter_spatial + self.apply_mct = apply_mct + + # data format used for loading catalogs + self.catalog_type = catalog_type + self.catalog_format = catalog_format + + # should be a MarkedGriddedDataSet + self.expected_rates = expected_rates + + self._event_counts = [] + + # defines the space, time, and magnitude region of the forecasts + self.region = region + + # start and end time of the forecast + self.start_time = start_time + self.end_time = end_time + + # stores catalogs in memory + self.store = store + + # time horizon in years + if self.start_time is not None and self.end_time is not None: + self.time_horizon_years = (self.end_epoch - self.start_epoch) / SECONDS_PER_ASTRONOMICAL_YEAR / 1000 + + # number of simulated catalogs + self.n_cat = n_cat + + # used to handle the iteration over catalogs + self._idx = 0 + + # load catalogs if catalogs aren't provided, this might be a generator + if not self.catalogs: + self._load_catalogs()
+ + + def __iter__(self): + return self + + def __next__(self): + """ Allows the class to be used in a for-loop. Handles the case where the catalogs are stored as a list or + loaded in using a generator function. The latter solves the problem where memory is a concern or all of the + catalogs should not be held in memory at once. """ + is_generator = True + try: + n_items = len(self.catalogs) + is_generator = False + assert self.n_cat == n_items + # here, we have reached the end of the list, simply reset the index to the front + if self._idx >= self.n_cat: + self._idx = 0 + raise StopIteration() + catalog = self.catalogs[self._idx] + self._idx += 1 + except TypeError: + # handle generator case. a generator does not have the __len__ attribute, but an iterable does. + try: + catalog = next(self.catalogs) + self._idx += 1 + except StopIteration: + # gets a new generator function after the old one is exhausted + if not self.store: + self.catalogs = self.loader(format=self.catalog_format, filename=self.filename, + region=self.region, name=self.name) + else: + self.catalogs = self._catalogs + del self._catalogs + if self.apply_filters: + self.apply_filters = False + + self.n_cat = self._idx + self._idx = 0 + raise StopIteration() + + # apply filtering to catalogs, these can throw errors if not configured properly + if self.apply_filters: + if self.filters: + catalog = catalog.filter(self.filters) + if self.apply_mct: + catalog = catalog.apply_mct(self.event.magnitude, datetime_to_utc_epoch(self.event.time)) + if self.filter_spatial: + catalog = catalog.filter_spatial(self.region) + + self._event_counts.append(catalog.event_count) + + if is_generator and self.store: + self._catalogs.append(catalog) + + # return potentially filtered data + return catalog + + def _load_catalogs(self): + self.catalogs = self.loader(format=self.catalog_format, filename=self.filename, region=self.region, name=self.name) + + @property + def start_epoch(self): + return datetime_to_utc_epoch(self.start_time) + + @property + def end_epoch(self): + return datetime_to_utc_epoch(self.end_time) + + @property + def magnitudes(self): + """ Returns left bin-edges of magnitude bins """ + return self.region.magnitudes + + @property + def min_magnitude(self): + """ Returns smallest magnitude bin edge of forecast """ + return numpy.min(self.region.magnitudes) + +
+[docs] + def spatial_counts(self, cartesian=False): + """ Returns the expected spatial counts from forecast """ + if self.expected_rates is None: + self.get_expected_rates() + return self.expected_rates.spatial_counts(cartesian=cartesian)
+ + +
+[docs] + def magnitude_counts(self): + """ Returns expected magnitude counts from forecast """ + if self.expected_rates is None: + self.get_expected_rates() + return self.expected_rates.magnitude_counts()
+
+
+    def get_event_counts(self, verbose=True):
+        """ Returns a numpy array containing the number of events in each catalog.
+
+        Note: This call can be slow the first time through a forecast that is not stored in memory
+        (store=False), because every catalog must be loaded and counted. Subsequent calls reuse the
+        counts accumulated while iterating.
+
+        Returns:
+            (numpy.array): event counts with size equal to the number of catalogs in the forecast
+        """
+        if len(self._event_counts) == 0:
+            # event counts are filled in while iterating over the catalogs
+            t0 = time.time()
+            for i, _ in enumerate(self):
+                if verbose:
+                    tens_exp = numpy.floor(numpy.log10(i + 1))
+                    if (i + 1) % 10 ** tens_exp == 0:
+                        t1 = time.time()
+                        print(f'Processed {i + 1} catalogs in {t1 - t0:.2f} seconds', flush=True)
+        return numpy.array(self._event_counts)
+
+[docs]
+    def get_expected_rates(self, verbose=False):
+        """ Compute the expected rates in space-magnitude bins
+
+        Args:
+            verbose (bool): if true, print progress while iterating through the catalogs
+
+        Return:
+            :class:`csep.core.forecasts.GriddedForecast`
+        """
+        # self.n_cat might be None here, if catalogs haven't been loaded and it's not yet specified.
+        if self.region is None or self.region.magnitudes is None:
+            raise AttributeError("Forecast must have space-magnitude regions to compute expected rates.")
+        # need to compute expected rates, else return.
+        if self.expected_rates is None:
+            t0 = time.time()
+            data = numpy.empty([])
+            for i, cat in enumerate(self):
+                # compute spatial density from each catalog, force catalog region to use the forecast region
+                cat.region = self.region
+                gridded_counts = cat.spatial_magnitude_counts()
+                if i == 0:
+                    data = numpy.array(gridded_counts)
+                else:
+                    data += numpy.array(gridded_counts)
+                # output status
+                if verbose:
+                    tens_exp = numpy.floor(numpy.log10(i + 1))
+                    if (i + 1) % 10 ** tens_exp == 0:
+                        t1 = time.time()
+                        print(f'Processed {i + 1} catalogs in {t1 - t0:.3f} seconds', flush=True)
+            # after we iterate through the catalogs, we know self.n_cat
+            data = data / self.n_cat
+            self.expected_rates = GriddedForecast(self.start_time, self.end_time, data=data, region=self.region,
+                                                  magnitudes=self.magnitudes, name=self.name)
+        return self.expected_rates
+ + + def plot(self, plot_args = None, verbose=True, **kwargs): + plot_args = plot_args or {} + if self.expected_rates is None: + self.get_expected_rates(verbose=verbose) + args_dict = {'title': self.name, + 'grid_labels': True, + 'grid': True, + 'borders': True, + 'feature_lw': 0.5, + 'basemap': 'ESRI_terrain', + } + args_dict.update(plot_args) + ax = self.expected_rates.plot(**kwargs, plot_args=args_dict) + return ax + +
+[docs] + def get_dataframe(self): + """Return a single dataframe with all of the events from all of the catalogs.""" + raise NotImplementedError("get_dataframe is not implemented.")
+ + +
+[docs]
+    def write_ascii(self, fname, header=True, loader=None):
+        """ Writes data forecast to ASCII format
+
+        Args:
+            fname (str): Output filename of forecast
+            header (bool): If true, write header information; else, do not write header.
+
+        Returns:
+            NoneType
+        """
+        raise NotImplementedError('write_ascii is not implemented!')
+ + +
+[docs]
+    @classmethod
+    def load_ascii(cls, fname, **kwargs):
+        """ Loads ASCII format for data forecast.
+
+        Args:
+            fname (str): path to file or directory containing forecast files
+
+        Returns:
+            :class:`csep.core.forecasts.CatalogForecast`
+        """
+        raise NotImplementedError("load_ascii is not implemented!")
+
+ +
+ +
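+
+# Usage sketch (hypothetical file; csep.load_catalog_forecast is assumed here as
+# the top-level entry point that wires up a loader for this class, and the
+# type='ucerf3' option is an assumption based on the supported catalog types):
+#
+#   forecast = csep.load_catalog_forecast('ucerf3_catalogs.bin', type='ucerf3')
+#   for catalog in forecast:      # filters are applied here if apply_filters=True
+#       pass
+#   counts = forecast.get_event_counts()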
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/csep/core/poisson_evaluations.html b/_modules/csep/core/poisson_evaluations.html new file mode 100644 index 00000000..6b0c167d --- /dev/null +++ b/_modules/csep/core/poisson_evaluations.html @@ -0,0 +1,885 @@ + + + + + + + + csep.core.poisson_evaluations — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for csep.core.poisson_evaluations

+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+import numpy
+import scipy.stats
+import scipy.spatial
+import scipy.special
+import warnings
+
+from csep.models import EvaluationResult
+from csep.utils.stats import poisson_joint_log_likelihood_ndarray
+from csep.core.exceptions import CSEPCatalogException
+
+
+
+[docs]
+def paired_t_test(forecast, benchmark_forecast, observed_catalog,
+                  alpha=0.05, scale=False):
+    """ Computes the t-test for gridded earthquake forecasts.
+
+    This score is positively oriented, meaning that positive values of the information gain indicate that the
+    forecast is performing better than the benchmark forecast.
+
+    Args:
+        forecast (csep.core.forecasts.GriddedForecast): gridded forecast to evaluate; axis=-1 of the data should be the magnitude dimension
+        benchmark_forecast (csep.core.forecasts.GriddedForecast): gridded forecast to compare against
+        observed_catalog (csep.core.catalogs.AbstractBaseCatalog): catalog containing the observed earthquakes
+        alpha (float): tolerance level for the type-i error rate of the statistical test
+        scale (bool): if true, scale forecasted rates down to a single day
+
+    Returns:
+        evaluation_result: csep.core.evaluations.EvaluationResult
+    """
+
+    # needs some pre-processing to put the forecasts in the context that is required for the t-test. this is different
+    # for cumulative forecasts (eg, multiple time-horizons) and static file-based forecasts.
+    target_event_rate_forecast1, n_fore1 = forecast.target_event_rates(
+        observed_catalog, scale=scale)
+    target_event_rate_forecast2, n_fore2 = benchmark_forecast.target_event_rates(
+        observed_catalog, scale=scale)
+
+    # call the primitive version operating on ndarray
+    out = _t_test_ndarray(target_event_rate_forecast1,
+                          target_event_rate_forecast2,
+                          observed_catalog.event_count,
+                          n_fore1, n_fore2, alpha=alpha)
+
+    # storing this for later
+    result = EvaluationResult()
+    result.name = 'Paired T-Test'
+    result.test_distribution = (out['ig_lower'], out['ig_upper'])
+    result.observed_statistic = out['information_gain']
+    result.quantile = (out['t_statistic'], out['t_critical'])
+    result.sim_name = (forecast.name, benchmark_forecast.name)
+    result.obs_name = observed_catalog.name
+    result.status = 'normal'
+    result.min_mw = numpy.min(forecast.magnitudes)
+    return result
+ + + +
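+
+# Usage sketch (assumes two loaded GriddedForecast objects and an observed catalog):
+#
+#   result = paired_t_test(forecast_a, forecast_b, catalog)
+#   # result.observed_statistic is the information gain per earthquake;
+#   # result.test_distribution holds its (lower, upper) confidence bounds.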
+[docs]
+def w_test(gridded_forecast1, gridded_forecast2, observed_catalog,
+           scale=False):
+    """ Calculate the single-sample Wilcoxon signed-rank test between two gridded forecasts.
+
+    This test evaluates the null hypothesis that the median of the sample X1(i) - X2(i) is equal to
+    (N1 - N2) / N_obs, where N1 and N2 are the sums of the expected values of Forecast 1 and Forecast 2, respectively.
+
+    The Wilcoxon signed-rank test tests the null hypothesis that the differences Xi - Yi come from a distribution
+    that is symmetric around a given median.
+
+    Args:
+        gridded_forecast1: forecast of model 1 (gridded), given as the average number of events in each bin;
+            values can be anything greater than zero
+        gridded_forecast2: forecast of model 2 (gridded), given as the average number of events in each bin;
+            values can be anything greater than zero
+        observed_catalog: observed seismicity; binned counts must be zero or positive integers (no floating point)
+        scale (bool): if true, scale forecasted rates down to a single day
+
+    Returns:
+        out: csep.core.evaluations.EvaluationResult
+    """
+
+    # needs some pre-processing to put the forecasts in the context that is required for the w-test. this is different
+    # for cumulative forecasts (eg, multiple time-horizons) and static file-based forecasts.
+    target_event_rate_forecast1, _ = gridded_forecast1.target_event_rates(
+        observed_catalog, scale=scale)
+    target_event_rate_forecast2, _ = gridded_forecast2.target_event_rates(
+        observed_catalog, scale=scale)
+
+    N = observed_catalog.event_count  # Sum of all the observed earthquakes
+    N1 = gridded_forecast1.event_count  # Total number of forecasted earthquakes by Model 1
+    N2 = gridded_forecast2.event_count  # Total number of forecasted earthquakes by Model 2
+    X1 = numpy.log(
+        target_event_rate_forecast1)  # Log of every element of Forecast 1
+    X2 = numpy.log(
+        target_event_rate_forecast2)  # Log of every element of Forecast 2
+
+    # this ratio is the same as long as we scale all the forecasts and catalog rates by the same value
+    median_value = (N1 - N2) / N
+
+    diff = X1 - X2
+
+    # _w_test_ndarray is the one-sample Wilcoxon signed-rank test. It accepts the data only as a 1D array.
+    x = diff.ravel()  # Converting 2D difference to 1D
+
+    w_test_dic = _w_test_ndarray(x, median_value)
+
+    # configure test result
+    result = EvaluationResult()
+    result.name = 'W-Test'
+    result.test_distribution = 'normal'
+    result.observed_statistic = w_test_dic['z_statistic']
+    result.quantile = w_test_dic['probability']
+    result.sim_name = (gridded_forecast1.name, gridded_forecast2.name)
+    result.obs_name = observed_catalog.name
+    result.status = 'normal'
+    result.min_mw = numpy.min(gridded_forecast1.magnitudes)
+    return result
+ + + +
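+
+# Toy check of the signed-rank helper used above (the differences are made up;
+# with fewer than 10 non-zero differences a small-sample warning is emitted):
+#
+#   diffs = numpy.array([0.3, -0.1, 0.4, 0.2, -0.2, 0.5, 0.1, -0.3, 0.6, 0.25])
+#   _w_test_ndarray(diffs, m=0)['probability']   # > 0.05: cannot reject median = 0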
+[docs]
+def number_test(gridded_forecast, observed_catalog):
+    """Computes the "N-Test" on a gridded forecast.
+    author: @asim
+
+    Computes the Number (N) test for an observed catalog and a forecast. Both data sets are expected to be in terms
+    of event counts. We find the total number of events in the observed catalog and the forecast, which are then
+    used to compute the probabilities of observing (i) at least the observed number of events (delta 1) and
+    (ii) at most the observed number of events (delta 2), assuming a Poisson distribution.
+
+    Args:
+        gridded_forecast: forecast of a model (gridded), given as the average number of events in each bin;
+            values can be anything greater than zero
+        observed_catalog: observed seismicity; binned counts must be zero or positive integers (no floating point)
+
+    Returns:
+        out (csep.core.evaluations.EvaluationResult): the quantile attribute holds (delta_1, delta_2)
+    """
+    result = EvaluationResult()
+
+    # observed count
+    obs_cnt = observed_catalog.event_count
+
+    # forecasts provide the expected number of events during the time horizon of the forecast
+    fore_cnt = gridded_forecast.event_count
+
+    epsilon = 1e-6
+
+    # stores the actual result of the number test
+    delta1, delta2 = _number_test_ndarray(fore_cnt, obs_cnt, epsilon=epsilon)
+
+    # store results
+    result.test_distribution = ('poisson', fore_cnt)
+    result.name = 'Poisson N-Test'
+    result.observed_statistic = obs_cnt
+    result.quantile = (delta1, delta2)
+    result.sim_name = gridded_forecast.name
+    result.obs_name = observed_catalog.name
+    result.status = 'normal'
+    result.min_mw = numpy.min(gridded_forecast.magnitudes)
+
+    return result
+ + + +
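+
+# Sketch of the two-sided Poisson probabilities behind the N-test (the counts
+# below are hypothetical; scipy.stats is imported at the top of this module):
+#
+#   fore_cnt, obs_cnt, eps = 12.4, 9, 1e-6
+#   delta1 = 1.0 - scipy.stats.poisson.cdf(obs_cnt - eps, fore_cnt)  # P(X >= obs)
+#   delta2 = scipy.stats.poisson.cdf(obs_cnt + eps, fore_cnt)        # P(X <= obs)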
+[docs]
+def conditional_likelihood_test(gridded_forecast, observed_catalog,
+                                num_simulations=1000, seed=None,
+                                random_numbers=None, verbose=False):
+    """Performs the conditional likelihood test on a gridded forecast using an observed catalog.
+
+    This test normalizes the forecast so the forecasted rates are consistent with the observations. This modification
+    eliminates the strong impact that differences in the number distribution have on the forecasted rates.
+
+    Note: The forecast and the observations should be scaled to the same time period before calling this function. This increases
+    transparency as no assumptions are being made about the length of the forecasts. This is particularly important for
+    gridded forecasts that supply their forecasts as rates.
+
+    Args:
+        gridded_forecast: csep.core.forecasts.GriddedForecast
+        observed_catalog: csep.core.catalogs.Catalog
+        num_simulations (int): number of simulations used to compute the quantile score
+        seed (int): used for reproducibility and testing
+        random_numbers (numpy.ndarray): random numbers used to override the random number generation. injection point for testing.
+
+    Returns:
+        evaluation_result: csep.core.evaluations.EvaluationResult
+    """
+
+    # grid catalog onto spatial grid
+    try:
+        _ = observed_catalog.region.magnitudes
+    except CSEPCatalogException:
+        observed_catalog.region = gridded_forecast.region
+
+    gridded_catalog_data = observed_catalog.spatial_magnitude_counts()
+
+    # simply call likelihood test on catalog data and forecast
+    qs, obs_ll, simulated_ll = _poisson_likelihood_test(gridded_forecast.data,
+                                                        gridded_catalog_data,
+                                                        num_simulations=num_simulations,
+                                                        seed=seed,
+                                                        random_numbers=random_numbers,
+                                                        use_observed_counts=True,
+                                                        verbose=verbose,
+                                                        normalize_likelihood=False)
+
+    # populate result data structure
+    result = EvaluationResult()
+    result.test_distribution = simulated_ll
+    result.name = 'Poisson CL-Test'
+    result.observed_statistic = obs_ll
+    result.quantile = qs
+    result.sim_name = gridded_forecast.name
+    result.obs_name = observed_catalog.name
+    result.status = 'normal'
+    result.min_mw = numpy.min(gridded_forecast.magnitudes)
+
+    return result
+ + + +def poisson_spatial_likelihood(forecast, catalog): + """ + This function computes the observed log-likehood score obtained by a gridded forecast in each cell, given a + seismicity catalog. In this case, we assume a Poisson distribution of earthquakes, so that the likelihood of + observing an event w given the expected value x in each cell is: + poll = -x + wlnx - ln(w!) + + Args: + forecast: gridded forecast + catalog: observed catalog + + Returns: + poll: Poisson-based log-likelihood scores obtained by the forecast in each spatial cell. + + Notes: + log(w!) = 0 + factorial(n) = loggamma(n+1) + """ + + scale = catalog.event_count / forecast.event_count + + first_term = -forecast.spatial_counts() * scale + second_term = catalog.spatial_counts() * numpy.log( + forecast.spatial_counts() * scale) + third_term = -scipy.special.loggamma(catalog.spatial_counts() + 1) + + poll = first_term + second_term + third_term + + return poll + + +def binary_spatial_likelihood(forecast, catalog): + """ + This function computes log-likelihood scores (bills), using a binary likelihood distribution of earthquakes. + For this aim, we need an input variable 'forecast' and an variable 'catalog' + + This function computes the observed log-likehood score obtained by a gridded forecast in each cell, given a + seismicity catalog. In this case, we assume a binary distribution of earthquakes, so that the likelihood of + observing an event w given the expected value x in each cell is:' + bill = (1-X) * ln(exp(-λ)) + X * ln(1 - exp(-λ)), with X=1 if earthquake and X=0 if no earthquake. + + Args: + forecast: gridded forecast + catalog: observed catalog + + Returns: + bill: Binary-based log-likelihood scores obtained by the forecast in each spatial cell. + """ + + scale = catalog.event_count / forecast.event_count + target_idx = numpy.nonzero(catalog.spatial_counts()) + X = numpy.zeros(forecast.spatial_counts().shape) + X[target_idx[0]] = 1 + + # First, we estimate the log-likelihood in cells where no events are observed: + first_term = (1 - X) * (-forecast.spatial_counts() * scale) + + # Then, we compute the log-likelihood of observing one or more events given a Poisson distribution, i.e., 1 - Pr(0): + second_term = X * ( + numpy.log(1.0 - numpy.exp(-forecast.spatial_counts() * scale))) + + # Finally, we sum both terms to compute log-likelihood score in each spatial cell: + bill = first_term + second_term + + return bill + + +
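+
+# Sketch of the binary log-likelihood above for a single cell with expected rate
+# lam (the value is hypothetical): the contribution is -lam if no event is
+# observed (X = 0), and log(1 - exp(-lam)) if one or more events are observed (X = 1):
+#
+#   lam = 0.2
+#   numpy.log(1.0 - numpy.exp(-lam))   # X = 1 cell
+#   -lam                               # X = 0 cell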
+[docs]
+def magnitude_test(gridded_forecast, observed_catalog, num_simulations=1000,
+                   seed=None, random_numbers=None,
+                   verbose=False):
+    """
+    Performs the Magnitude Test on a Gridded Forecast using an observed catalog.
+
+    Note: The forecast and the observations should be scaled to the same time period before calling this function. This increases
+    transparency as no assumptions are being made about the length of the forecasts. This is particularly important for
+    gridded forecasts that supply their forecasts as rates.
+
+    Args:
+        gridded_forecast: csep.core.forecasts.GriddedForecast
+        observed_catalog: csep.core.catalogs.Catalog
+        num_simulations (int): number of simulations used to compute the quantile score
+        seed (int): used for reproducibility and testing
+        random_numbers (numpy.ndarray): random numbers used to override the random number generation. injection point for testing.
+
+    Returns:
+        evaluation_result: csep.core.evaluations.EvaluationResult
+    """
+
+    # bin catalog onto the magnitude bins of the forecast
+    gridded_catalog_data = observed_catalog.magnitude_counts(
+        mag_bins=gridded_forecast.magnitudes)
+
+    # simply call likelihood test on catalog data and forecast
+    qs, obs_ll, simulated_ll = _poisson_likelihood_test(
+        gridded_forecast.magnitude_counts(), gridded_catalog_data,
+        num_simulations=num_simulations,
+        seed=seed,
+        random_numbers=random_numbers,
+        use_observed_counts=True,
+        verbose=verbose,
+        normalize_likelihood=True)
+
+    # populate result data structure
+    result = EvaluationResult()
+    result.test_distribution = simulated_ll
+    result.name = 'Poisson M-Test'
+    result.observed_statistic = obs_ll
+    result.quantile = qs
+    result.sim_name = gridded_forecast.name
+    result.obs_name = observed_catalog.name
+    result.status = 'normal'
+    result.min_mw = numpy.min(gridded_forecast.magnitudes)
+
+    return result
+ + + +
+[docs]
+def spatial_test(gridded_forecast, observed_catalog, num_simulations=1000,
+                 seed=None, random_numbers=None,
+                 verbose=False):
+    """
+    Performs the Spatial Test on the Forecast using the Observed Catalogs.
+
+    Note: The forecast and the observations should be scaled to the same time period before calling this function. This increases
+    transparency as no assumptions are being made about the length of the forecasts. This is particularly important for
+    gridded forecasts that supply their forecasts as rates.
+
+    Args:
+        gridded_forecast: csep.core.forecasts.GriddedForecast
+        observed_catalog: csep.core.catalogs.Catalog
+        num_simulations (int): number of simulations used to compute the quantile score
+        seed (int): used for reproducibility and testing
+        random_numbers (numpy.ndarray): random numbers used to override the random number generation. injection point for testing.
+
+    Returns:
+        evaluation_result: csep.core.evaluations.EvaluationResult
+    """
+
+    gridded_catalog_data = observed_catalog.spatial_counts()
+
+    # simply call likelihood test on catalog data and forecast
+    qs, obs_ll, simulated_ll = _poisson_likelihood_test(
+        gridded_forecast.spatial_counts(), gridded_catalog_data,
+        num_simulations=num_simulations,
+        seed=seed,
+        random_numbers=random_numbers,
+        use_observed_counts=True,
+        verbose=verbose,
+        normalize_likelihood=True)
+
+    # populate result data structure
+    result = EvaluationResult()
+    result.test_distribution = simulated_ll
+    result.name = 'Poisson S-Test'
+    result.observed_statistic = obs_ll
+    result.quantile = qs
+    result.sim_name = gridded_forecast.name
+    result.obs_name = observed_catalog.name
+    result.status = 'normal'
+    try:
+        result.min_mw = numpy.min(gridded_forecast.magnitudes)
+    except AttributeError:
+        result.min_mw = -1
+    return result
+ + + +
+[docs]
+def likelihood_test(gridded_forecast, observed_catalog, num_simulations=1000,
+                    seed=None, random_numbers=None,
+                    verbose=False):
+    """
+    Performs the likelihood test on a gridded forecast using an observed catalog.
+
+    Note: The forecast and the observations should be scaled to the same time period before calling this function. This increases
+    transparency as no assumptions are being made about the length of the forecasts. This is particularly important for
+    gridded forecasts that supply their forecasts as rates.
+
+    Args:
+        gridded_forecast: csep.core.forecasts.GriddedForecast
+        observed_catalog: csep.core.catalogs.Catalog
+        num_simulations (int): number of simulations used to compute the quantile score
+        seed (int): used for reproducibility and testing
+        random_numbers (numpy.ndarray): random numbers used to override the random number generation.
+            injection point for testing.
+
+    Returns:
+        evaluation_result: csep.core.evaluations.EvaluationResult
+    """
+
+    # grid catalog onto spatial grid
+    try:
+        _ = observed_catalog.region.magnitudes
+    except CSEPCatalogException:
+        observed_catalog.region = gridded_forecast.region
+
+    gridded_catalog_data = observed_catalog.spatial_magnitude_counts()
+
+    # simply call likelihood test on catalog and forecast
+    qs, obs_ll, simulated_ll = _poisson_likelihood_test(gridded_forecast.data,
+                                                        gridded_catalog_data,
+                                                        num_simulations=num_simulations,
+                                                        seed=seed,
+                                                        random_numbers=random_numbers,
+                                                        use_observed_counts=False,
+                                                        verbose=verbose,
+                                                        normalize_likelihood=False)
+
+    # populate result data structure
+    result = EvaluationResult()
+    result.test_distribution = simulated_ll
+    result.name = 'Poisson L-Test'
+    result.observed_statistic = obs_ll
+    result.quantile = qs
+    result.sim_name = gridded_forecast.name
+    result.obs_name = observed_catalog.name
+    result.status = 'normal'
+    result.min_mw = numpy.min(gridded_forecast.magnitudes)
+
+    return result
+
+
+
+def _number_test_ndarray(fore_cnt, obs_cnt, epsilon=1e-6):
+    """ Computes delta1 and delta2 values from the csep1 number test.
+
+    Args:
+        fore_cnt (float): parameter of the Poisson distribution coming from the expected value of the forecast
+        obs_cnt (float): count of earthquakes observed during the testing period.
+        epsilon (float): tolerance level to satisfy the requirements of a two-sided p-value
+
+    Returns:
+        result (tuple): (delta1, delta2)
+    """
+    delta1 = 1.0 - scipy.stats.poisson.cdf(obs_cnt - epsilon, fore_cnt)
+    delta2 = scipy.stats.poisson.cdf(obs_cnt + epsilon, fore_cnt)
+    return delta1, delta2
+
+
+def _t_test_ndarray(target_event_rates1, target_event_rates2, n_obs, n_f1,
+                    n_f2, alpha=0.05):
+    """ Computes the T-test statistic by comparing two target event rate distributions.
+
+    We compare the forecast from Model 1 with the forecast from Model 2. The information gain is computed, which is
+    then used to compute the T statistic. The confidence interval of the information gain can be computed using
+    t_critical. For a complete explanation see Rhoades, D. A., et al., (2011). Efficient testing of earthquake
+    forecasting models. Acta Geophysica, 59(4), 728-747. doi:10.2478/s11600-011-0013-5
+
+    Args:
+        target_event_rates1 (numpy.ndarray): nd-array storing target event rates
+        target_event_rates2 (numpy.ndarray): nd-array storing target event rates
+        n_obs (float, int, numpy.ndarray): number of observed earthquakes, should be whole number and >= zero.
+        n_f1 (float): total number of events forecasted by Model 1
+        n_f2 (float): total number of events forecasted by Model 2
+        alpha (float): tolerance level for the type-i error rate of the statistical test
+
+    Returns:
+        out (dict): relevant statistics from the t-test
+
+    """
+    # Some pre-calculations, because they are used repeatedly.
+    N = n_obs  # Total number of observed earthquakes
+    N1 = n_f1  # Total number of forecasted earthquakes by Model 1
+    N2 = n_f2  # Total number of forecasted earthquakes by Model 2
+    X1 = numpy.log(target_event_rates1)  # Log of every element of Forecast 1
+    X2 = numpy.log(target_event_rates2)  # Log of every element of Forecast 2
+
+    # Information gain, using Equation (17) of Rhoades et al. 2011
+    information_gain = (numpy.sum(X1 - X2) - (N1 - N2)) / N
+
+    # Compute variance of (X1-X2) using Equation (18) of Rhoades et al. 2011
+    first_term = (numpy.sum(numpy.power((X1 - X2), 2))) / (N - 1)
+    second_term = numpy.power(numpy.sum(X1 - X2), 2) / (numpy.power(N, 2) - N)
+    forecast_variance = first_term - second_term
+
+    forecast_std = numpy.sqrt(forecast_variance)
+    t_statistic = information_gain / (forecast_std / numpy.sqrt(N))
+
+    # Obtain the critical value of T from the t-distribution; two-tailed test, so use alpha / 2.
+    df = N - 1
+    t_critical = scipy.stats.t.ppf(1 - (alpha / 2), df)
+
+    # Compute the confidence interval of the information gain.
+    ig_lower = information_gain - (t_critical * forecast_std / numpy.sqrt(N))
+    ig_upper = information_gain + (t_critical * forecast_std / numpy.sqrt(N))
+
+    # If the T statistic exceeds the critical value, both confidence-interval limits are greater than zero,
+    # which means that Model 1 forecasts significantly better than Model 2.
+    return {'t_statistic': t_statistic,
+            't_critical': t_critical,
+            'information_gain': information_gain,
+            'ig_lower': ig_lower,
+            'ig_upper': ig_upper}
+
+
+def _w_test_ndarray(x, m=0):
+    """ Calculate the single-sample Wilcoxon signed-rank test for an ndarray.
+
+    This method is based on collecting a number of samples from a population with unknown median, m.
+    The Wilcoxon one-sample signed-rank test is the non-parametric version of the t-test.
+    It is based on ranks and, because of that, the location parameter is not the mean but the median.
+    This test evaluates the null hypothesis that the sample median is equal to a given value provided by the user.
+    If we designate m to be the assumed median of the sample:
+    Null hypothesis (simplified): the population from which the data were sampled is symmetric about the given value (m).
+    Alternative hypothesis (simplified, two-sided): the population from which the data were sampled is not symmetric around m.
+
+    Args:
+        x: 1D vector of paired differences.
+        m: Designated median value.
+
+    Returns:
+        dict: {'z_statistic': value of the Z statistic for the two-sided test,
+               'probability': probability value}
+    """
+    # compute median differences
+    d = x - m
+
+    # remove zero values
+    d = numpy.compress(numpy.not_equal(d, 0), d, axis=-1)
+
+    count = len(d)
+    if count < 10:
+        warnings.warn("Sample size too small for normal approximation.")
+
+    # compute ranks
+    r = scipy.stats.rankdata(abs(d))
+    r_plus = numpy.sum((d > 0) * r, axis=0)
+    r_minus = numpy.sum((d < 0) * r, axis=0)
+
+    # for "two-sided", choose minimum of both
+    t = min(r_plus, r_minus)
+
+    # mean and (uncorrected) variance of the rank sum under the null hypothesis
+    mn = count * (count + 1.) * 0.25
+    se = count * (count + 1.) * (2. * count + 1.)
+
+    replist, repnum = scipy.stats.find_repeats(r)
+    if repnum.size != 0:
+        # correction for repeated elements.
+        se -= 0.5 * (repnum * (repnum * repnum - 1)).sum()
+
+    se = numpy.sqrt(se / 24)
+
+    # compute statistic and p-value using the normal approximation
+    # z = (t - mn - d) / se would apply a continuity correction; we do not apply one here.
+    z = (t - mn) / se
+
+    # multiplied by 2 for the two-sided distribution
+    prob = 2. * scipy.stats.distributions.norm.sf(abs(z))
+
+    # If the probability is greater than 0.05, we cannot reject the null hypothesis [Median(Xi-Yi) = m];
+    # if it is smaller than 0.05, we reject the null hypothesis.
+    w_test_eval = {'z_statistic': z,
+                   'probability': prob}
+
+    return w_test_eval
+
+
+def _simulate_catalog(num_events, sampling_weights, sim_fore,
+                      random_numbers=None):
+    # generate uniformly distributed random numbers in [0,1) if none are supplied
+    if random_numbers is None:
+        random_numbers = numpy.random.rand(num_events)
+    else:
+        # TODO: ensure that random numbers are all between 0 and 1.
+        pass
+
+    # reset simulation array to zero, but don't reallocate
+    sim_fore.fill(0)
+
+    # find insertion points using binary search inserting to satisfy a[i-1] <= v < a[i]
+    pnts = numpy.searchsorted(sampling_weights, random_numbers, side='right')
+
+    # create simulated catalog by adding to the original locations
+    numpy.add.at(sim_fore, pnts, 1)
+    assert sim_fore.sum() == num_events, "simulated the wrong number of events!"
+
+    return sim_fore
+
+
+def _poisson_likelihood_test(forecast_data, observed_data,
+                             num_simulations=1000, random_numbers=None,
+                             seed=None, use_observed_counts=True, verbose=True,
+                             normalize_likelihood=False):
+    """
+    Computes the likelihood test from CSEP using an efficient simulation-based approach.
+    Args:
+        forecast_data (numpy.ndarray): nd array where [:, -1] are the magnitude bins.
+        observed_data (numpy.ndarray): gridded observed counts with the same shape as forecast_data.
+ num_simulations: default number of simulations to use for likelihood based simulations + seed: used for reproducibility of the prng + random_numbers (numpy.ndarray): can supply an explicit list of random numbers, primarily used for software testing + use_observed_counts (bool): if true, will simulate catalogs using the observed events, if false will draw from poisson distribution + verbose (bool): if true, write progress of test to command line + normalize_likelihood (bool): if true, normalize likelihood. used by deafult for magnitude and spatial tests + """ + + # set seed for the likelihood test + if seed is not None: + numpy.random.seed(seed) + + # used to determine where simulated earthquake should be placed, by definition of cumsum these are sorted + sampling_weights = numpy.cumsum(forecast_data.ravel()) / numpy.sum( + forecast_data) + + # data structures to store results + sim_fore = numpy.zeros(sampling_weights.shape) + simulated_ll = [] + + # properties of observations and forecasts + n_obs = numpy.sum(observed_data) + n_fore = numpy.sum(forecast_data) + + expected_forecast_count = numpy.sum(forecast_data) + log_bin_expectations = numpy.log(forecast_data.ravel()) + # used for conditional-likelihood, magnitude, and spatial tests to normalize the rate-component of the forecasts + if use_observed_counts and normalize_likelihood: + scale = n_obs / n_fore + expected_forecast_count = int(n_obs) + log_bin_expectations = numpy.log(forecast_data.ravel() * scale) + + # gets the 1d indices to bins that contain target events, these indexes perform copies and not views into the array + target_idx = numpy.nonzero(observed_data.ravel()) + + # note for performance: these operations perform copies + observed_data_nonzero = observed_data.ravel()[target_idx] + target_event_forecast = log_bin_expectations[ + target_idx] * observed_data_nonzero + + # main simulation step in this loop + for idx in range(num_simulations): + if use_observed_counts: + num_events_to_simulate = int(n_obs) + else: + num_events_to_simulate = int( + numpy.random.poisson(expected_forecast_count)) + + if random_numbers is None: + sim_fore = _simulate_catalog(num_events_to_simulate, + sampling_weights, sim_fore) + else: + sim_fore = _simulate_catalog(num_events_to_simulate, + sampling_weights, sim_fore, + random_numbers=random_numbers[idx, :]) + + # compute joint log-likelihood from simulation by leveraging that only cells with target events contribute to likelihood + sim_target_idx = numpy.nonzero(sim_fore) + sim_obs_nonzero = sim_fore[sim_target_idx] + sim_target_event_forecast = log_bin_expectations[ + sim_target_idx] * sim_obs_nonzero + + # compute joint log-likelihood + current_ll = poisson_joint_log_likelihood_ndarray( + sim_target_event_forecast, sim_obs_nonzero, + expected_forecast_count) + + # append to list of simulated log-likelihoods + simulated_ll.append(current_ll) + + # just be verbose + if verbose: + if (idx + 1) % 100 == 0: + print(f'... {idx + 1} catalogs simulated.') + + # observed joint log-likelihood + obs_ll = poisson_joint_log_likelihood_ndarray(target_event_forecast, + observed_data_nonzero, + expected_forecast_count) + + # quantile score + qs = numpy.sum(simulated_ll <= obs_ll) / num_simulations + + # float, float, list + return qs, obs_ll, simulated_ll +
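+
+# Sketch of the inverse-CDF sampling performed by _simulate_catalog (toy rates):
+#
+#   rates = numpy.array([0.5, 0.25, 0.25])
+#   weights = numpy.cumsum(rates) / rates.sum()   # sorted values ending at 1.0
+#   sim = numpy.zeros_like(rates)
+#   u = numpy.random.rand(1000)
+#   numpy.add.at(sim, numpy.searchsorted(weights, u, side='right'), 1)
+#   # roughly half of the 1000 simulated events land in the first cell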
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/csep/core/regions.html b/_modules/csep/core/regions.html new file mode 100644 index 00000000..7fe0f7cc --- /dev/null +++ b/_modules/csep/core/regions.html @@ -0,0 +1,1610 @@ + + + + + + + + csep.core.regions — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for csep.core.regions

+# Python imports
+import itertools
+import os
+from itertools import compress
+from xml.etree import ElementTree as ET
+
+# Third-party imports
+import numpy
+import numpy as np
+import mercantile
+from shapely import geometry
+from shapely.ops import unary_union
+
+# PyCSEP imports
+from csep.utils.calc import bin1d_vec, cleaner_range, first_nonnan, last_nonnan
+from csep.utils.scaling_relationships import WellsAndCoppersmith
+
+from csep.models import Polygon
+
+def california_relm_collection_region(dh_scale=1, magnitudes=None, name="relm-california-collection", use_midpoint=True):
+    """ Return collection region for California RELM testing region
+
+    Args:
+        dh_scale (int): factor of two multiple to change the grid size
+        magnitudes (array-like): array representing the lower bin edges of the magnitude bins
+        name (str): human readable identifier
+        use_midpoint (bool): if true, treat values in file as midpoints. default = true.
+
+    Returns:
+        :class:`csep.core.spatial.CartesianGrid2D`
+
+    Raises:
+        ValueError: dh_scale must be a factor of two
+
+    """
+    if dh_scale % 2 != 0 and dh_scale != 1:
+        raise ValueError("dh_scale must be a factor of two or dh_scale must equal unity.")
+
+    # we can hard-code the dh because we hard-code the filename
+    dh = 0.1
+    root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+    filepath = os.path.join(root_dir, 'artifacts', 'Regions', 'RELMCollectionArea.dat')
+    points = numpy.loadtxt(filepath)
+    if use_midpoint:
+        origins = numpy.array(points) - dh / 2
+    else:
+        origins = numpy.array(points)
+
+    if dh_scale > 1:
+        origins = increase_grid_resolution(origins, dh, dh_scale)
+        dh = dh / dh_scale
+
+    # turn points into polygons and make region object
+    bboxes = compute_vertices(origins, dh)
+    relm_region = CartesianGrid2D([Polygon(bbox) for bbox in bboxes], dh, name=name)
+
+    if magnitudes is not None:
+        relm_region.magnitudes = magnitudes
+
+    return relm_region
+
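As a quick usage sketch (assumes pyCSEP is installed; the magnitude edges below are illustrative):

from csep.core.regions import california_relm_collection_region

region = california_relm_collection_region(magnitudes=[4.95, 5.95, 6.95])
print(region.num_nodes)                        # number of 0.1-degree cells in the collection area
print(region.get_index_of([-120.0], [36.0]))   # index of the cell containing this lon/lat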
+
+[docs] +def california_relm_region(dh_scale=1, magnitudes=None, name="relm-california", use_midpoint=True): + """ + Returns class representing California testing region. + + This region can + be used to create gridded datasets for earthquake forecasts. The XML file appears to use the + midpoint, and the .dat file uses the origin in the "lower left" corner. + + Args: + dh_scale: can resample this grid by factors of 2 + + Returns: + :class:`csep.core.spatial.CartesianGrid2D` + + Raises: + ValueError: dh_scale must be a factor of two + + """ + + if dh_scale % 2 != 0 and dh_scale != 1: + raise ValueError("dh_scale must be a factor of two or dh_scale must equal unity.") + + # use default file path from python package + root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + filepath = os.path.join(root_dir, 'artifacts', 'Regions', 'csep-forecast-template-M5.xml') + csep_template = os.path.expanduser(filepath) + points, dh = parse_csep_template(csep_template) + if use_midpoint: + origins = numpy.array(points) - dh / 2 + else: + origins = numpy.array(points) + + if dh_scale > 1: + origins = increase_grid_resolution(origins, dh, dh_scale) + dh = dh / dh_scale + + # turn points into polygons and make region object + bboxes = compute_vertices(origins, dh) + relm_region = CartesianGrid2D([Polygon(bbox) for bbox in bboxes], dh, name=name) + + if magnitudes is not None: + relm_region.magnitudes = magnitudes + + return relm_region
+ + +
+[docs] +def italy_csep_region(dh_scale=1, magnitudes=None, name="csep-italy", use_midpoint=True): + """ + Returns class representing Italian testing region. + + This region can be used to create gridded datasets for earthquake forecasts. The region is defined by the + file 'forecast.italy.M5.xml' and contains a spatially gridded region with 0.1° x 0.1° cells. + + Args: + dh_scale: can resample this grid by factors of 2 + magnitudes (array-like): bin edges for magnitudes. if provided, will be bound to the output region class. + this argument provides a short-cut for creating space-magnitude regions. + name (str): human readable identify given to the region + use_midpoint (bool): if true, treat values in file as midpoints. default = true. + + Returns: + :class:`csep.core.spatial.CartesianGrid2D` + + Raises: + ValueError: dh_scale must be a factor of two + + """ + if dh_scale % 2 != 0 and dh_scale != 1: + raise ValueError("dh_scale must be a factor of two or dh_scale must equal unity.") + + # use default file path from python package + root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + filepath = os.path.join(root_dir, 'artifacts', 'Regions', 'forecast.italy.M5.xml') + csep_template = os.path.expanduser(filepath) + points, dh = parse_csep_template(csep_template) + if use_midpoint: + origins = numpy.array(points) - dh / 2 + else: + origins = numpy.array(points) + + + if dh_scale > 1: + origins = increase_grid_resolution(origins, dh, dh_scale) + dh = dh / dh_scale + + # turn points into polygons and make region object + bboxes = compute_vertices(origins, dh) + italy_region = CartesianGrid2D([Polygon(bbox) for bbox in bboxes], dh, name=name) + + if magnitudes is not None: + italy_region.magnitudes = magnitudes + + return italy_region
+
+
+def italy_csep_collection_region(dh_scale=1, magnitudes=None, name="csep-italy-collection", use_midpoint=True):
+    """ Return the Italy CSEP collection region
+
+    Args:
+        dh_scale (int): factor of two multiple to change the grid size
+        magnitudes (array-like): array representing the lower bin edges of the magnitude bins
+        name (str): human readable identifier
+        use_midpoint (bool): if true, treat values in file as midpoints. default = true.
+
+    Returns:
+        :class:`csep.core.spatial.CartesianGrid2D`
+
+    Raises:
+        ValueError: dh_scale must be a factor of two
+    """
+    if dh_scale % 2 != 0 and dh_scale != 1:
+        raise ValueError("dh_scale must be a factor of two or dh_scale must equal unity.")
+
+    # we can hard-code the dh because we hard-code the filename
+    dh = 0.1
+    root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+    filepath = os.path.join(root_dir, 'artifacts', 'Regions', 'italy.collection.nodes.dat')
+    points = numpy.loadtxt(filepath)
+    if use_midpoint:
+        origins = numpy.array(points) - dh / 2
+    else:
+        origins = numpy.array(points)
+
+    if dh_scale > 1:
+        origins = increase_grid_resolution(origins, dh, dh_scale)
+        dh = dh / dh_scale
+
+    # turn points into polygons and make region object
+    bboxes = compute_vertices(origins, dh)
+    relm_region = CartesianGrid2D([Polygon(bbox) for bbox in bboxes], dh, name=name)
+
+    if magnitudes is not None:
+        relm_region.magnitudes = magnitudes
+
+    return relm_region
+
+def nz_csep_region(dh_scale=1, magnitudes=None, name="csep-nz", use_midpoint=True):
+    """ Return the New Zealand CSEP testing region
+
+    Args:
+        dh_scale (int): factor of two multiple to change the grid size
+        magnitudes (array-like): array representing the lower bin edges of the magnitude bins
+        name (str): human readable identifier
+        use_midpoint (bool): if true, treat values in file as midpoints. default = true.
+
+    Returns:
+        :class:`csep.core.spatial.CartesianGrid2D`
+
+    Raises:
+        ValueError: dh_scale must be a factor of two
+
+    """
+    if dh_scale % 2 != 0 and dh_scale != 1:
+        raise ValueError("dh_scale must be a factor of two or dh_scale must equal unity.")
+
+    # we can hard-code the dh because we hard-code the filename
+    dh = 0.1
+    root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+    filepath = os.path.join(root_dir, 'artifacts', 'Regions', 'nz.testing.nodes.dat')
+    points = numpy.loadtxt(filepath)
+    if use_midpoint:
+        origins = numpy.array(points) - dh / 2
+    else:
+        origins = numpy.array(points)
+
+    if dh_scale > 1:
+        origins = increase_grid_resolution(origins, dh, dh_scale)
+        dh = dh / dh_scale
+
+    # turn points into polygons and make region object
+    bboxes = compute_vertices(origins, dh)
+    nz_region = CartesianGrid2D([Polygon(bbox) for bbox in bboxes], dh, name=name)
+
+    if magnitudes is not None:
+        nz_region.magnitudes = magnitudes
+
+    return nz_region
+
+def nz_csep_collection_region(dh_scale=1, magnitudes=None, name="csep-nz-collection", use_midpoint=True):
+    """ Return the New Zealand CSEP collection region
+
+    Args:
+        dh_scale (int): factor of two multiple to change the grid size
+        magnitudes (array-like): array representing the lower bin edges of the magnitude bins
+        name (str): human readable identifier
+        use_midpoint (bool): if true, treat values in file as midpoints. default = true.
+ + Returns: + :class:`csep.core.spatial.CartesianGrid2D` + + Raises: + ValueError: dh_scale must be a factor of two + + """ + if dh_scale % 2 != 0 and dh_scale != 1: + raise ValueError("dh_scale must be a factor of two or dh_scale must equal unity.") + + # we can hard-code the dh because we hard-code the filename + dh = 0.1 + root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + filepath = os.path.join(root_dir, 'artifacts', 'Regions', 'nz.collection.nodes.dat') + points = numpy.loadtxt(filepath) + if use_midpoint: + origins = numpy.array(points) - dh / 2 + else: + origins = numpy.array(points) + + if dh_scale > 1: + origins = increase_grid_resolution(origins, dh, dh_scale) + dh = dh / dh_scale + + # turn points into polygons and make region object + bboxes = compute_vertices(origins, dh) + nz_collection_region = CartesianGrid2D([Polygon(bbox) for bbox in bboxes], dh, name=name) + + if magnitudes is not None: + nz_collection_region.magnitudes = magnitudes + + return nz_collection_region + +
+[docs]
+def global_region(dh=0.1, name="global", magnitudes=None):
+    """ Creates a global region used for evaluating gridded forecasts on the global scale.
+
+    The gridded region covers the entire globe with square cells of dh x dh degrees.
+
+    Args:
+        dh: grid spacing in degrees (default 0.1)
+
+    Returns:
+        csep.utils.CartesianGrid2D:
+    """
+    # generate longitudes and latitudes
+    lons = cleaner_range(-180.0, 179.9, dh)
+    lats = cleaner_range(-90, 89.9, dh)
+    coords = itertools.product(lons, lats)
+    region = CartesianGrid2D([Polygon(bbox) for bbox in compute_vertices(coords, dh)], dh, name=name)
+    if magnitudes is not None:
+        region.magnitudes = magnitudes
+    return region
+ + +
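A global grid is then a one-liner (default 0.1-degree spacing; the magnitude edges below are illustrative):

from csep.core.regions import global_region

grid = global_region(magnitudes=[5.95, 6.95, 7.95])
print(grid.num_nodes)   # 3600 x 1800 = 6,480,000 cells of 0.1 x 0.1 degrees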
+[docs] +def magnitude_bins(start_magnitude, end_magnitude, dmw): + """ Returns array holding magnitude bin edges. + + The output from this function is monotonically increasing and equally spaced bin edges that can represent magnitude + bins. + + Args: + start_magnitude (float) + end_magnitude (float) + dmw (float): magnitude spacing + + Returns: + bin_edges (numpy.ndarray) + """ + # convert to integers to prevent accumulating floating point errors + const = 10000 + start = numpy.floor(const * start_magnitude) + end = numpy.floor(const * end_magnitude) + d = const * dmw + return numpy.arange(start, end + d / 2, d) / const
+ + +
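For example, the integer-grid trick above yields clean edges where naive floating point addition would drift:

from csep.core.regions import magnitude_bins

print(magnitude_bins(5.95, 6.35, 0.1))   # -> [5.95 6.05 6.15 6.25 6.35]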
+[docs]
+def create_space_magnitude_region(region, magnitudes):
+    """ Simple wrapper to create a space-magnitude region """
+    if not isinstance(region, (CartesianGrid2D, QuadtreeGrid2D)):
+        raise TypeError("region must be CartesianGrid2D or QuadtreeGrid2D")
+    # bind to region class
+    if magnitudes is None:
+        raise ValueError("magnitudes should not be None if creating space-magnitude region.")
+    region.magnitudes = magnitudes
+    region.num_mag_bins = len(region.magnitudes)
+    return region
+ + +
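Usage sketch, combining the helpers above (assumes pyCSEP is installed):

from csep.core.regions import california_relm_region, create_space_magnitude_region, magnitude_bins

sm_region = create_space_magnitude_region(california_relm_region(),
                                          magnitude_bins(4.95, 8.95, 0.1))
print(sm_region.num_mag_bins)   # 41 magnitude bins bound to the spatial region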
+[docs] +def parse_csep_template(xml_filename): + """ + Reads CSEP XML template file and returns the lat/lon values + for the forecast. + + Returns: + list of tuples where tuple is (lon, lat) + """ + tree = ET.parse(xml_filename) + root = tree.getroot() + points = [] + for cell in root.iter('{http://www.scec.org/xml-ns/csep/forecast/0.1}cell'): + points.append((float(cell.attrib['lon']), float(cell.attrib['lat']))) + + # get cell spacing + data = root.find('{http://www.scec.org/xml-ns/csep/forecast/0.1}forecastData') + dh_elem = data.find('{http://www.scec.org/xml-ns/csep/forecast/0.1}defaultCellDimension') + dh_lat = float(dh_elem.attrib['latRange']) + dh_lon = float(dh_elem.attrib['lonRange']) + + if not numpy.isclose(dh_lat, dh_lon): + raise ValueError("dh_lat must equal dh_lon. grid needs to be regular.") + + return points, dh_lat
+ + +
+[docs] +def increase_grid_resolution(points, dh, factor): + """ + Takes a set of origin points and returns a new set with higher grid resolution. assumes the origin point is in the + lower left corner. the new dh is dh / factor. This implementation requires that the decimation factor be a multiple of 2. + + Args: + points: list of (lon,lat) tuples + dh: old grid spacing + factor: amount to reduce + + Returns: + points: list of (lon,lat) tuples with spacing dh / scale + + """ + # short-circuit recursion + if factor == 1: + return points + + # handle edge cases + assert factor % 2 == 0 + assert factor >= 1 + + # first start out + new_points = set() + new_dh = dh / 2 + for point in points: + bbox = compute_vertex(point, new_dh) + for pnt in bbox: + new_points.add(pnt) + # call function again with new_points, new_dh, new_factor + new_factor = factor / 2 + return increase_grid_resolution(list(new_points), new_dh, new_factor)
+ + +
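For example, one recursion level splits each cell into four; a single origin at (0.0, 0.0) with dh=0.2 yields the four dh=0.1 child origins (up to the machine-epsilon tolerance subtracted by compute_vertex):

from csep.core.regions import increase_grid_resolution

children = increase_grid_resolution([(0.0, 0.0)], 0.2, 2)
# approximately [(0.0, 0.0), (0.0, 0.1), (0.1, 0.0), (0.1, 0.1)], order not guaranteed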
+[docs] +def masked_region(region, polygon): + """ + Build a new region based off the coordinates in the polygon. + + Args: + region: CartesianGrid2D object + polygon: Polygon object + + Returns: + new_region: CartesianGrid2D object + """ + # contains is true if spatial cell in region is inside the polygon + contains = polygon.contains(region.midpoints()) + # compress only returns elements that are true, effectively removing elements outside of the polygons + new_polygons = list(compress(region.polygons, contains)) + # create new region with the spatial cells inside the polygon + return CartesianGrid2D(new_polygons, region.dh)
+ + +
+[docs]
+def generate_aftershock_region(mainshock_mw, mainshock_lon, mainshock_lat, num_radii=3, region=california_relm_region, **kwargs):
+    """ Creates a spatial region around a given epicenter
+
+    The method uses the Wells and Coppersmith scaling relationship to determine the average fault length and creates a
+    circular region centered at (mainshock_lon, mainshock_lat) whose radius equals num_radii fault lengths.
+
+    Args:
+        mainshock_mw (float): magnitude of mainshock
+        mainshock_lon (float): epicentral longitude
+        mainshock_lat (float): epicentral latitude
+        num_radii (float/int): number of rupture lengths used as the radius of the circular region
+        region (callable): returns :class:`csep.utils.spatial.CartesianGrid2D`
+        **kwargs (dict): passed to region callable
+
+    Returns:
+        :class:`csep.utils.spatial.CartesianGrid2D`
+
+    """
+    rupture_length = WellsAndCoppersmith.mag_length_strike_slip(mainshock_mw) * 1000
+    aftershock_polygon = Polygon.from_great_circle_radius((mainshock_lon, mainshock_lat),
+                                                          num_radii * rupture_length, num_points=100)
+    aftershock_region = masked_region(region(**kwargs), aftershock_polygon)
+    return aftershock_region
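Usage sketch (epicenter values are hypothetical, roughly the 2019 Ridgecrest mainshock):

from csep.core.regions import generate_aftershock_region

aftershock_region = generate_aftershock_region(mainshock_mw=7.1,
                                               mainshock_lon=-117.6,
                                               mainshock_lat=35.8,
                                               num_radii=3)
print(aftershock_region.num_nodes)   # cells of the RELM grid inside the circle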
+
+
+def grid_spacing(vertices):
+    """
+    Figures out the grid spacing from the vertices of a single grid cell.
+
+    Args:
+        vertices: Vertices describe a single node in grid.
+
+    Returns:
+        dh: grid spacing
+
+    Raises:
+        ValueError
+
+    """
+    # get first two vertices
+    a = vertices[0]
+    b = vertices[1]
+    # compute both differences, because unless the point is repeated, one is bound to be the dh
+    d1 = numpy.abs(b[0] - a[0])
+    d2 = numpy.abs(b[1] - a[1])
+    if not numpy.allclose(d1, d2):
+        raise ValueError("grid spacing must be regular for cartesian grid.")
+    dh = numpy.max([d1, d2])
+    # this would happen if the same point is repeated twice
+    if dh == 0:
+        raise ValueError("Problem computing grid spacing; it cannot be zero.")
+    return dh
+
+def compute_vertex(origin_point, dh, tol=numpy.finfo(float).eps):
+    """
+    Computes the bounding box of a rectangular polygon given its origin point and spacing dh.
+
+    Args:
+        origin_point: tuple of (x, y)
+        dh: spacing
+        tol: used to eliminate overlapping polygons in the case of a rectangular mesh, defaults to
+             the machine tolerance.
+
+    Returns:
+        list of polygon edges
+
+    """
+    bbox = ((origin_point[0], origin_point[1]),
+            (origin_point[0], origin_point[1] + dh - tol),
+            (origin_point[0] + dh - tol, origin_point[1] + dh - tol),
+            (origin_point[0] + dh - tol, origin_point[1]))
+    return bbox
+
+def compute_vertices(origin_points, dh, tol=numpy.finfo(float).eps):
+    """
+    Wrapper function to compute vertices for multiple points. Default tolerance is set to machine precision
+    of floating point number.
+
+    Args:
+        origin_points: 2d ndarray
+
+    Notes:
+        (x,y) should be accessible like:
+        #>>> x_coords = origin_points[:,0]
+        #>>> y_coords = origin_points[:,1]
+
+    """
+    return list(map(lambda x: compute_vertex(x, dh, tol=tol), origin_points))
+
+def _bin_catalog_spatio_magnitude_counts(lons, lats, mags, n_poly, mask, idx_map, binx, biny, mag_bins, tol=0.00001):
+    """
+    Returns event counts as ndarray with shape (n_poly, n_mag_bins) where each value
+    represents the event counts within the polygon.
+
+    Using the [:, :, 1] index of the mask, we store the mapping between the index of n_poly and
+    that polygon in the mask. Additionally, the polygons are ordered such that the index of n_poly
+    in the result corresponds to the index of the polygons.
+
+    Eventually, we can make a structure that could contain both of these, but the trade-offs will need
+    to be compared against performance.
+    """
+
+    # index in cartesian grid for events in data. note, this has a different index than the
+    # vector of polygons. this mapping is stored in the [:,:,1] index of mask
+    # index in 2d grid
+    idx = bin1d_vec(lons, binx)
+    idy = bin1d_vec(lats, biny)
+    mag_idxs = bin1d_vec(mags, mag_bins, tol=tol, right_continuous=True)
+    # start with zero event counts in each bin
+    event_counts = numpy.zeros((n_poly, len(mag_bins)))
+    # does not seem that we can vectorize this part
+    skipped = []
+    for i in range(idx.shape[0]):
+        if not mask[idy[i], idx[i]] and idy[i] != -1 and idx[i] != -1 and mag_idxs[i] != -1:
+            # getting spatial bin from mask
+            hash_idx = int(idx_map[idy[i], idx[i]])
+            mag_idx = mag_idxs[i]
+            # update event counts in that polygon
+            event_counts[(hash_idx, mag_idx)] += 1
+        else:
+            skipped.append((lons[i], lats[i], mags[i]))
+
+    return event_counts, skipped
+
+def _bin_catalog_spatial_counts(lons, lats, n_poly, mask, idx_map, binx, biny):
+    """
+    Returns event counts as ndarray with shape (n_poly,) where each value
+    represents the event counts within the polygon.
+
+    Using the [:, :, 1] index of the mask, we store the mapping between the index of n_poly and
+    that polygon in the mask. Additionally, the polygons are ordered such that the index of n_poly
+    in the result corresponds to the index of the polygons.
+
+    We can make a structure that could contain both of these, but the trade-offs will need
+    to be compared against performance.
+    """
+    ai, bi = binx, biny
+    # will return a negative index for events outside the grid
+    idx = bin1d_vec(lons, ai)
+    idy = bin1d_vec(lats, bi)
+    # bin1d returns -1 if outside the region
+    # todo: think about how to change this behavior to avoid confusion, because -1 is an actual value that can be chosen
+    bad = (idx == -1) | (idy == -1) | (mask[idy, idx] == 1)
+    # this can be memory optimized by keeping a short list and storing the index, only for the case with n/2 events
+    event_counts = numpy.zeros(n_poly)
+    # selecting the indexes into polygons corresponding to lons and lats within the grid
+    hash_idx = idx_map[idy[~bad], idx[~bad]].astype(int)
+    # aggregate in counts
+    numpy.add.at(event_counts, hash_idx, 1)
+    return event_counts
+
+def _bin_catalog_probability(lons, lats, n_poly, mask, idx_map, binx, biny):
+    """
+    Returns event counts as ndarray with shape (n_poly,) where each value
+    represents the event counts within the polygon.
+
+    Using the [:, :, 1] index of the mask, we store the mapping between the index of n_poly and
+    that polygon in the mask. Additionally, the polygons are ordered such that the index of n_poly
+    in the result corresponds to the index of the polygons.
+
+    We can make a structure that could contain both of these, but the trade-offs will need
+    to be compared against performance.
+    """
+    ai, bi = binx, biny
+    # returns -1 if outside of the bbox
+    idx = bin1d_vec(lons, ai)
+    idy = bin1d_vec(lats, bi)
+    bad = (idx == -1) | (idy == -1) | (mask[idy, idx] == 1)
+    event_counts = numpy.zeros(n_poly)
+    # [:,:,1] is a mapping from the polygon array to the cartesian grid
+    hash_idx = idx_map[idy[~bad], idx[~bad]].astype(int)
+    # don't accumulate; just set to one for probability
+    event_counts[hash_idx] = 1
+    return event_counts
+
+[docs] +class CartesianGrid2D: + """Represents a 2D cartesian gridded region. + + The class provides functions to query onto an index 2D Cartesian grid and maintains a mapping between space coordinates defined + by polygons and the index into the polygon array. + + Custom regions can be easily created by using the from_polygon classmethod. This function will accept an arbitrary closed + polygon and return a CartesianGrid class with only points inside the polygon to be valid. + """ +
+[docs] + def __init__(self, polygons, dh, name='cartesian2d', mask=None): + self.polygons = polygons + self.poly_mask = mask + self.dh = dh + self.name = name + a, xs, ys = self._build_bitmask_vec() + # in mask, True = bad value and False = good value + self.bbox_mask = a[:,:,0] + # contains the mapping from polygon_index to the mask + self.idx_map = a[:,:,1] + # index values of polygons array into the 2d cartesian grid, based on the midpoint. + self.xs = xs + self.ys = ys + # Bounds [origin, top_right] + orgs = self.origins() + self.bounds = numpy.column_stack((orgs, orgs + dh))
+ + + def __eq__(self, other): + return self.to_dict() == other.to_dict() + + @property + def num_nodes(self): + """ Number of polygons in region """ + return len(self.polygons) + + def get_index_of(self, lons, lats): + """ Returns the index of lons, lats in self.polygons + + Args: + lons: ndarray-like + lats: ndarray-like + + Returns: + idx: ndarray-like + """ + idx = bin1d_vec(numpy.array(lons), self.xs) + idy = bin1d_vec(numpy.array(lats), self.ys) + if numpy.any(idx == -1) or numpy.any(idy == -1): + raise ValueError("at least one lon and lat pair contain values that are outside of the valid region.") + if numpy.any(self.bbox_mask[idy, idx] == 1): + raise ValueError("at least one lon and lat pair contain values that are outside of the valid region.") + return self.idx_map[idy, idx].astype(numpy.int64) + + def get_location_of(self, indices): + """ + Returns the polygon associated with the index idx. + + Args: + indices: index of polygon in region + + Returns: + Polygon + + """ + indices = list(indices) + polys = [self.polygons[idx] for idx in indices] + return polys + + def get_masked(self, lons, lats): + """Returns bool array lons and lats are not included in the spatial region. + + .. note:: The ordering of lons and lats should correspond to the ordering of the lons and lats in the data. + + Args: + lons: array-like + lats: array-like + + Returns: + idx: array-like + """ + + idx = bin1d_vec(lons, self.xs) + idy = bin1d_vec(lats, self.ys) + # handles the case where values are outside of the region + bad_idx = numpy.where((idx == -1) | (idy == -1)) + mask = self.bbox_mask[idy, idx].astype(bool) + # manually set values outside region + mask[bad_idx] = True + return mask + + def get_cartesian(self, data): + """Returns 2d ndrray representation of the data set, corresponding to the bounding box. + + Args: + data: + """ + assert len(data) == len(self.polygons) + results = numpy.zeros(self.bbox_mask.shape[:2]) + ny = len(self.ys) + nx = len(self.xs) + for i in range(ny): + for j in range(nx): + if self.bbox_mask[i, j] == 0: + idx = int(self.idx_map[i, j]) + results[i, j] = data[idx] + else: + results[i, j] = numpy.nan + return results + + def get_bbox(self): + """ Returns rectangular bounding box around region. 
""" + return (self.xs.min(), self.xs.max()+self.dh, self.ys.min(), self.ys.max()+self.dh) + + def midpoints(self): + """ Returns midpoints of rectangular polygons in region """ + return numpy.array([poly.centroid() for poly in self.polygons]) + + def origins(self): + """ Returns origins of rectangular polygons in region """ + return numpy.array([poly.origin for poly in self.polygons]) + + def to_dict(self): + adict = { + 'name': str(self.name), + 'dh': float(self.dh), + 'polygons': [{'lat': float(poly.origin[1]), 'lon': float(poly.origin[0])} for poly in self.polygons], + 'class_id': self.__class__.__name__ + } + return adict + + @classmethod + def from_dict(cls, adict): + """ Creates a region object from a dictionary """ + origins = adict.get('polygons', None) + dh = adict.get('dh', None) + magnitudes = adict.get('magnitudes', None) + name = adict.get('name', 'CartesianGrid2D') + + if origins is None: + raise AttributeError("cannot create region object without origins") + if dh is None: + raise AttributeError("cannot create region without dh") + if origins is not None: + try: + origins = numpy.array([[adict['lon'], adict['lat']] for adict in origins]) + except: + raise TypeError('origins must be numpy array like.') + if magnitudes is not None: + try: + magnitudes = numpy.array(magnitudes) + except: + raise TypeError('magnitudes must be numpy array like.') + + out = cls.from_origins(origins, dh=dh, magnitudes=magnitudes, name=name) + return out + + @classmethod + def from_origins(cls, origins, dh=None, magnitudes=None, name=None): + """Creates instance of class from 2d numpy.array of lon/lat origins. + + Note: Grid spacing should be constant in the entire region. This condition is not explicitly checked for for performance + reasons. + + Args: + origins (numpy.ndarray like): [:,0] = lons and [:,1] = lats + magnitudes (numpy.array like): optional, if provided will bind magnitude information to the class. + + Returns: + cls + """ + # ensure we can access the lons and lats + try: + lons = origins[:,0] + lats = origins[:,1] + except (TypeError): + raise TypeError("origins must be of type numpy.array or be numpy array like.") + + # dh must be regular, no explicit checking. + if dh is None: + dh2 = numpy.abs(lons[1]-lons[0]) + dh1 = numpy.abs(lats[1]-lats[0]) + dh = numpy.max([dh1, dh2]) + + region = CartesianGrid2D([Polygon(bbox) for bbox in compute_vertices(origins, dh)], dh, name=name) + if magnitudes is not None: + region.magnitudes = magnitudes + return region + + def _build_bitmask_vec(self): + """ + same as build mask but using vectorized calls to bin1d + """ + # build bounding box of set of polygons based on origins + nd_origins = numpy.array([poly.origin for poly in self.polygons]) + bbox = [(numpy.min(nd_origins[:, 0]), numpy.min(nd_origins[:, 1])), + (numpy.max(nd_origins[:, 0]), numpy.max(nd_origins[:, 1]))] + + # get midpoints for hashing + midpoints = numpy.array([poly.centroid() for poly in self.polygons]) + + # set up grid over bounding box + xs = cleaner_range(bbox[0][0], bbox[1][0], self.dh) + ys = cleaner_range(bbox[0][1], bbox[1][1], self.dh) + + # set up mask array, 1 is index 0 is mask + a = numpy.ones([len(ys), len(xs), 2]) + + # set all indices to nan + a[:, :, 1] = numpy.nan + + # bin1d returns the index of polygon within the cartesian grid + idx = bin1d_vec(midpoints[:, 0], xs) + idy = bin1d_vec(midpoints[:, 1], ys) + + for i in range(len(self.polygons)): + a[idy[i], idx[i], 1] = int(i) + # build mask in dim=0; here masked values are 1. see note below. 
+ if idx[i] >= 0 and idy[i] >= 0: + if self.poly_mask is not None: + # note: csep1 gridded forecast file format convention states that a "1" indicates a valid cell, which is the opposite + # of the masking criterion + if self.poly_mask[i] == 1: + a[idy[i], idx[i], 0] = 0 + else: + a[idy[i], idx[i], 0] = 0 + + return a, xs, ys + + def tight_bbox(self, precision=4): + # creates tight bounding box around the region + poly = np.array([i.points for i in self.polygons]) + + sorted_idx = np.sort(np.unique(poly, return_index=True, axis=0)[1], kind='stable') + unique_poly = poly[sorted_idx] + + # merges all the cell polygons into one + polygons = [geometry.Polygon(np.round(i, precision)) for i in unique_poly] + joined_poly = unary_union(polygons) + bounds = np.array([i for i in joined_poly.boundary.xy]).T + + return bounds + + def get_cell_area(self): + """ Compute the area of each polygon in sq. kilometers. + + Returns: + out (numpy.array): numpy array containing cell area in km^2 + """ + area = numpy.zeros(self.num_nodes) + for idx, origin in enumerate(self.origins()): + top_right = origin + self.dh + area[idx] = geographical_area_from_bounds(origin[0], origin[1], top_right[0], top_right[1]) + return area
+ + + +def geographical_area_from_bounds(lon1, lat1, lon2, lat2): + """ + Computes area of spatial cell identified by origin coordinate and top right cooridnate. + The functions computes area only for square/rectangle bounding box by based on spherical earth assumption. + Args: + lon1,lat1 : Origin coordinates + lon2,lat2: Top right coordinates + Returns: + Area of cell in Km2 + """ + if lon1 == lon2 or lat1 == lat2: + return 0 + else: + earth_radius_km = 6371. + R2 = earth_radius_km ** 2 + rad_per_deg = numpy.pi / 180.0e0 + + strip_area_steradian = 2 * numpy.pi * (1.0e0 - numpy.cos((90.0e0 - lat1) * rad_per_deg)) \ + - 2 * numpy.pi * (1.0e0 - numpy.cos((90.0e0 - lat2) * rad_per_deg)) + area_km2 = strip_area_steradian * R2 / (360.0 / (lon2 - lon1)) + return area_km2 + +def quadtree_grid_bounds(quadk): + """ + Computes the bottom-left and top-right coordinates corresponding to every quadkey + + Args: + qk : Array of Strings + Quadkeys. + + Returns: + grid_coords : Array of floats + [lon1,lat1,lon2,lat2] + + """ + + origin_lat = [] + origin_lon = [] + top_right_lon = [] + top_right_lat = [] + + for i in range(len(quadk)): + origin_lon.append(mercantile.bounds(mercantile.quadkey_to_tile(quadk[i])).west) + origin_lat.append(mercantile.bounds(mercantile.quadkey_to_tile(quadk[i])).south) + + top_right_lon.append(mercantile.bounds(mercantile.quadkey_to_tile(quadk[i])).east) + top_right_lat.append(mercantile.bounds(mercantile.quadkey_to_tile(quadk[i])).north) + + grid_origin = numpy.column_stack((numpy.array(origin_lon), numpy.array(origin_lat))) + grid_top_right = numpy.column_stack((numpy.array(top_right_lon), numpy.array(top_right_lat))) + grid_bounds = numpy.column_stack((grid_origin, grid_top_right)) + + return grid_bounds + +def compute_vertex_bounds(bound_point, tol=numpy.finfo(float).eps): + """ + Wrapper function to compute vertices using bounding points for multiple points. Default tolerance is set to machine precision + of floating point number. + + Args: + bounding points: nx4 ndarray + [lon_origin, lat_origin, lon_top_right, lat_origin] + Notes: + (x,y) should be accessible like: + #>>> origin coords = origin_points[:,0:1] + #>>> Top right coords = origin_points[:,2:3] + """ + bbox = ((bound_point[0], bound_point[1]), + (bound_point[0], bound_point[3] - tol), + (bound_point[2] - tol, bound_point[3] - tol), + (bound_point[2] - tol, bound_point[1])) + return bbox + +def compute_vertices_bounds(bounds, tol=numpy.finfo(float).eps): + """ + Wrapper function to compute vertices using bounding points for multiple points. Default tolerance is set to machine precision + of floating point number. + + Args: + bounding points: nx4 ndarray + [lon_origin, lat_origin, lon_top_right, lat_origin] + Notes: + (x,y) should be accessible like: + #>>> origin coords = origin_points[:,0:1] + #>>> Top right coords = origin_points[:,2:3] + """ + return list(map(lambda x: compute_vertex_bounds(x, tol=tol), bounds)) + +def _create_tile(quadk, threshold, zoom, lon, lat, qk, num): + """ + **Alert: This Function uses GLOBAL variable (qk) and (num). + + Provides multi-resolution quadtree spatial grid based on seismic density. It takes in a starting quadtree Tile (Quadkey), + then keeps on increasing the zoom-level of every Tile (or dividing cell) recursively, unless every cell meets the cell dividion criteria. + + The primary criterion of dividing a parent cell into 4 child cells is a threshold on seismic denisity. + The cells are divided unless evevry cell cas number of earthquakes less than "threshold". 
+    The cell division of any cell also stops if it reaches the maximum zoom-level (zoom)
+
+    Args:
+        quadk : String
+            0, 1, 2, 3 or any desired starting level of Quad key.
+        threshold : int
+            Max number of earthquakes/cell allowed
+        zoom: int
+            Maximum zoom level allowed for a quadkey
+        lon : float
+            longitudes of earthquakes in catalog
+        lat : float
+            latitudes of earthquakes in catalog
+
+    Returns:
+    """
+    boundary = mercantile.bounds(mercantile.quadkey_to_tile(quadk))
+    eqs = numpy.logical_and(numpy.logical_and(lon >= boundary.west, lat >= boundary.south),
+                            numpy.logical_and(lon < boundary.east, lat < boundary.north))
+    num_eqs = numpy.size(lat[eqs])
+    # global qk
+    # global num
+
+    # Setting the min threshold of area to 1 sq. km. instead of depth
+
+    if num_eqs > threshold and len(quadk) < zoom:  # #qk_area_km(quadk)>4:
+        # print('inside If, Current Quad key ', quadk)
+        # print('Length of Quadkey ', len(quadk))
+        # # print('Num of Eqs ', num_eqs)
+
+        _create_tile(quadk + '0', threshold, zoom, lon, lat, qk, num)
+
+        _create_tile(quadk + '1', threshold, zoom, lon, lat, qk, num)
+
+        _create_tile(quadk + '2', threshold, zoom, lon, lat, qk, num)
+
+        _create_tile(quadk + '3', threshold, zoom, lon, lat, qk, num)
+
+    else:
+        # print('inside ELSE, Current Quad key ', quadk)
+        # print('Num of Eqs ', num_eqs)
+        # qk = numpy.append(qk, quadk)
+        qk.append(quadk)
+        # num = numpy.append(num, num_eqs)
+        num.append(num_eqs)
+
+def _create_tile_fix_len(quadk, zoom, qk):
+    """
+    ***Alert: This function uses GLOBAL variable (qk).
+
+    Provides a single-resolution quadtree grid. It takes in a starting quadkey (or quadrant of the globe),
+    then keeps on dividing it into 4 children unless the maximum zoom-level is achieved
+    Parameters
+    ----------
+    quadk : String
+        0, 1, 2, 3 or any desired starting level of Quad key.
+    zoom : TYPE
+        Length of Quad Key OR Depth of grid.
+
+    Returns
+    -------
+    None.
+    """
+
+    if len(quadk) < zoom:
+        # print('inside If, Current Quad key ', quadk)
+        # print('Len of QK: ', len(quadk))
+
+        _create_tile_fix_len(quadk + '0', zoom, qk)
+
+        _create_tile_fix_len(quadk + '1', zoom, qk)
+
+        _create_tile_fix_len(quadk + '2', zoom, qk)
+
+        _create_tile_fix_len(quadk + '3', zoom, qk)
+
+    else:
+        # print('inside ELSE, Current Quad key ', quadk)
+        # print('Num of Eqs ', num_eqs)
+        # qk = numpy.append(qk, quadk)
+        qk.append(quadk)
+
+class QuadtreeGrid2D:
+    """
+    Represents a 2D quadtree gridded region. The class provides functionality to generate a multi-resolution or single-resolution quadtree grid.
+    It also enables users to load an already available quadtree grid. It also provides functions to query onto an indexed 2D grid and maintains a mapping
+    between space coordinates, defined polygons, and the index into the polygon array.
+
+    Note: It is a replica of the CartesianGrid2D class but with the quadtree approach, with implementations of all the relevant functions required for CSEP1 tests
+
+    """
+
+    def __init__(self, polygons, quadkeys, bounds, name='QuadtreeGrid2d', mask=None):
+        """
+        Args:
+            polygons: Represents the object of class "polygons" defined through a collection of vertices.
+                This polygon is 2d and vertices are obtained as corner points of a quadtree tile.
+            quadkeys: Unique identifier of each quadtree tile. The quadkeys of every tile define a grid cell.
+                This is the first thing computed while acquiring a quadtree grid. The rest can be computed from this.
+ bounds: number of cells x [lon1, lat1, lon2, lat2], corresponding to origin coordinates and top right coordinates fo each grid cell + name: Name of grid + mask: Masked cells. NotImplemented yet. Always keep it none + """ + self.polygons = polygons + self.quadkeys = quadkeys + self.bounds = bounds + self.cell_area = [] + self.poly_mask = mask + self.name = name + # a, xs, ys = self._get_idx_map_xs_ys() + # self.xs = xs + # self.ys = ys + # self.idx_map = a + + @property + def num_nodes(self): + """ Number of polygons in region """ + return len(self.polygons) + + def get_cell_area(self): + """ + Calls function geographical_area_from_bounds and computes area of each grid cell. It also modified class variable "self.cell_area" + It iterates over all the cells of grid and passes bounding coordinates of every cell to function geographical_area_from_bounds + """ + cell_area = numpy.array([geographical_area_from_bounds(bb[0],bb[1],bb[2],bb[3]) for bb in self.bounds]) + self.cell_area = cell_area + return self.cell_area + + def get_index_of(self, lons, lats): + """ Returns the index of lons, lats in self.polygons + + Args: + lons: ndarray-like + lats: ndarray-like + + Returns: + idx: ndarray-like + """ + # If its array or many coords + if isinstance(lons, (list, numpy.ndarray)): + idx = [] + for i in range(len(lons)): + idx = numpy.append(idx, self._find_location(lons[i], lats[i])) + idx = idx.astype(int) + return idx + # It its just one Lon/Lon + if isinstance(lons, (int, float)): + idx = self._find_location(lons, lats) + return idx + return None + + def _find_location(self, lon, lat): + """ Takes in single Lon and Lat and finds its Polygon Index. + + Returns: + index number of polyons + """ + loc = numpy.logical_and(numpy.logical_and(lon >= self.bounds[:, 0], lat >= self.bounds[:, 1]), + numpy.logical_and(lon < self.bounds[:, 2], lat < self.bounds[:, 3])) + if len(numpy.where(loc == True)[0]) > 0: + return numpy.where(loc == True)[0][0] + else: + return numpy.where(loc == True)[0] + + def get_location_of(self, indices): + """ Returns the polygon associated with the index idx. + + Args: + idx: index of polygon in region + + Returns: + Polygon + """ + indices = list(indices) + polys = [self.polygons[idx] for idx in indices] + return polys + + def _get_spatial_counts(self, catalog, mag_bins=None): + """ Gets the number of earthquakes in each cell for available catalog. 
+ Uses QuadtreeGrid2D.get_index_of function to map every earthquake location to its corresponding cell + + Args: + catalog: CSEP Catalog + mag_bins: Magnitude discritization used in earthquake forecast mdoel + Note: mag_bins are only required to filter catalog for minimum magnitude + + Return: + spatial counts: Number of earthquakes in each cell + + """ + if mag_bins is None or mag_bins == []: + mag_bins = catalog.magnitudes + + if min(catalog.get_magnitudes()) < min(mag_bins): + print("-----Warning-----") + print("Catalog contains magnitudes below the min magnitude range") + print("Filtering catalog with Magnitude: ", min(mag_bins)) + catalog.filter('magnitude >= ' + str(min(mag_bins))) + + if min(catalog.get_latitudes()) < self.get_bbox()[2] or max(catalog.get_latitudes()) > self.get_bbox()[3]: + print("----Warning---") + print("Catalog exceeds grid bounds, so catalog filtering") + catalog.filter('latitude < ' + str(self.get_bbox()[3])) + catalog.filter('latitude > ' + str(self.get_bbox()[2])) + + lon = catalog.get_longitudes() + lat = catalog.get_latitudes() + + out = numpy.zeros(len(self.quadkeys)) + idx = self.get_index_of(lon, lat) + numpy.add.at(out, idx, 1) + + return out + + def _get_spatial_magnitude_counts(self, catalog, mag_bins=None): + """ + Gets the number of earthquakes in for each spatio-magnitude bin for available catalog + Uses QuadtreeGrid2D.get_index_of function to map every earthquake location to its corresponding cell + Uses bin1d_vec function to map earthquake magnitude to its respecrtive bin. + + Args: + catalog: CSEPCatalog + mag_bins: Magnitude discritization used in earthquake forecast model + + Return: + Spatial-magnitude counts + + """ + if mag_bins is None or mag_bins == []: + mag_bins = catalog.magnitudes + + if min(catalog.get_magnitudes()) < min(mag_bins): + print("-----Warning-----") + print("Catalog contains magnitudes below the min magnitude range") + print("Filtering catalog with Magnitude: ", min(mag_bins)) + catalog.filter('magnitude >= ' + str(min(mag_bins))) + + if min(catalog.get_latitudes()) < self.get_bbox()[2] or max(catalog.get_latitudes()) > self.get_bbox()[3]: + print("----Warning---") + print("Catalog exceeds grid bounds filtering events outside of the region boundary") + catalog.filter('latitude < ' + str(self.get_bbox()[3])) + catalog.filter('latitude > ' + str(self.get_bbox()[2])) + + lon = catalog.get_longitudes() + lat = catalog.get_latitudes() + mag = catalog.get_magnitudes() + out = numpy.zeros([len(self.quadkeys), len(mag_bins)]) + + idx_loc = self.get_index_of(lon, lat) + idx_mag = bin1d_vec(mag, mag_bins, tol=0.00001, right_continuous=True) + + numpy.add.at(out, (idx_loc, idx_mag), 1) + + return out + + def get_bbox(self): + """ Returns rectangular bounding box around region. 
""" + # return (self.xs.min(), self.xs.max(), self.ys.min(), self.ys.max()) + return (min(self.bounds[:, 0]), max(self.bounds[:, 2]), min(self.bounds[:, 1]), max(self.bounds[:, 3])) + + def midpoints(self): + """ Returns midpoints of rectangular polygons in region """ + return numpy.array([poly.centroid() for poly in self.polygons]) + + def origins(self): + """ Returns origins of rectangular polygons in region """ + return numpy.array([poly.origin for poly in self.polygons]) + + def to_dict(self): + adict = { + 'name': str(self.name), + 'polygons': [{'lat': float(poly.origin[1]), 'lon': float(poly.origin[0])} for poly in self.polygons] + } + return adict + + def save_quadtree(self, filename): + """ Saves the quadtree grid (quadkeys) in a text file + + Args: + filename (str): filename to store file + """ + numpy.savetxt(filename, self.quadkeys, delimiter=',', fmt='%s') + + @classmethod + def from_catalog(cls, catalog, threshold, zoom=11, magnitudes=None, name=None): + """ + Creates instance of class from 2d numpy.array of lon/lat of Catalog. + Provides multi-resolution quadtree spatial grid based on seismic density. It starts from whole globe as 4 cells (Quadkeys:'0','1','2','3'), + then keeps on increasing the zoom-level of every Tile recursively, unless every cell meets the division criteria. + + The primary criterion of dividing a parent cell into 4 child cells is a threshold on seismic density. + The cells are divided unless every cell has number of earthquakes less than "threshold". + The division of a cell also stops if it reaches maximum zoom-level (zoom) + + Args: + catalog (CSEPCatalog): catalog used to create quadtree + threshold (int): Max earthquakes allowed per cells + zoom (int): Max zoom allowed for a cell + magnitudes (array-like): left end values of magnitude discretization + + Returns: + instance of QuadtreeGrid2D + """ + + lon = catalog.get_longitudes() + lat = catalog.get_latitudes() + + qk = [] + num = [] + + _create_tile('0', threshold, zoom, lon, lat, qk, num) + _create_tile('1', threshold, zoom, lon, lat, qk, num) + _create_tile('2', threshold, zoom, lon, lat, qk, num) + _create_tile('3', threshold, zoom, lon, lat, qk, num) + + qk = numpy.array(qk) + bounds = quadtree_grid_bounds(qk) + region = QuadtreeGrid2D( + [Polygon(bbox) for bbox in compute_vertices_bounds(bounds)], + qk, + bounds, + name=name) + + if magnitudes is not None: + region.magnitudes = magnitudes + + return region + + @classmethod + def from_single_resolution(cls, zoom, magnitudes=None, name=None): + """ Creates instance of class at single-resolution using provided zoom-level. + Provides single-resolution quadtree grid. It starts from whole globe as 4 cells (Quadkeys:'0','1','2','3'), + then keeps on keeps on dividing every cell into 4 children unless the maximum zoom-level is achieved + + Args: + zoom: Max zoom allowed for a cell + magnitude: magnitude discretization + + Returns: + instance of QuadtreeGrid2D + """ + + qk = [] + _create_tile_fix_len('0', zoom, qk) + _create_tile_fix_len('1', zoom, qk) + _create_tile_fix_len('2', zoom, qk) + _create_tile_fix_len('3', zoom, qk) + + qk = numpy.array(qk) + + bounds = quadtree_grid_bounds(qk) + + region = QuadtreeGrid2D([Polygon(bbox) for bbox in compute_vertices_bounds(bounds)], qk, bounds, + name=name) + + if magnitudes is not None: + region.magnitudes = magnitudes + return region + + @classmethod + def from_quadkeys(cls, quadk, magnitudes=None, name=None): + """ Creates instance of class from available quadtree grid. 
+ + Args: + quadk (list): List of quad keys strings corresponding to an already available quadtree grid + magnitudes (array-like): left end-points of magnitude discretization + + Returns: + instance of QuadtreeGrid2D + """ + bounds = quadtree_grid_bounds(numpy.array(quadk)) + + region = QuadtreeGrid2D([Polygon(bbox) for bbox in compute_vertices_bounds(bounds)], quadk, bounds, + name=name) + + if magnitudes is not None: + region.magnitudes = magnitudes + + return region + + def _get_idx_map_xs_ys(self): + print('inside _get_idx_map') + nd_origins = numpy.array([poly.origin for poly in self.polygons]) + xs = numpy.unique(nd_origins[:, 0]) + ys = numpy.unique(nd_origins[:, 1]) + ny = len(ys) + nx = len(xs) + #Get the index map + a = numpy.zeros([ny, nx]) + for i in range(nx): + for j in range(ny): + idx = self.get_index_of(xs[i], ys[j]) + a[j, i] = idx + return a, xs, ys + + def get_cartesian(self, data): + """ Returns 2d ndrray representation of the data set, corresponding to the bounding box. + + Args: + data (numpy.array): array of values corresponding to cells in the quadtree region + + Returns: + results (numpy.array): 2d numpy array with rates on cartesian grid + + """ + a, xs, ys = self._get_idx_map_xs_ys() + self.xs = xs + self.ys = ys + self.idx_map = a + assert len(data) == len(self.polygons) + ny = len(self.ys) + nx = len(self.xs) + results = numpy.zeros([ny, nx]) + for i in range(nx): + for j in range(ny): + idx = int(self.idx_map[j,i]) + results[j, i] = data[idx] + return results + + def tight_bbox(self): + # creates tight bounding box around the region, probably a faster way to do this. + ny, nx = self.idx_map.shape + asc = [] + desc = [] + for j in range(ny): + row = self.idx_map[j, :] + argmin = first_nonnan(row) + argmax = last_nonnan(row) + # points are stored clockwise + poly_min = self.polygons[int(row[argmin])].points + asc.insert(0, poly_min[0]) + asc.insert(0, poly_min[1]) + poly_max = self.polygons[int(row[argmax])].points + lat_0 = poly_max[2][1] + lat_1 = poly_max[3][1] + # last two points are 'right hand side of polygon' + if lat_0 < lat_1: + desc.append(poly_max[2]) + desc.append(poly_max[3]) + else: + desc.append(poly_max[3]) + desc.append(poly_max[2]) + # close the loop + poly = np.array(asc + desc) + sorted_idx = np.sort(np.unique(poly, return_index=True, axis=0)[1], kind='stable') + unique_poly = poly[sorted_idx] + unique_poly = np.append(unique_poly, [unique_poly[0, :]], axis=0) + return unique_poly + + +def california_quadtree_region(magnitudes=None, name="california-quadtree"): + """ + Returns object of QuadtreeGrid2D representing quadtree grid for California RELM testing region. + The grid is already generated at zoom-level = 12 and it is loaded through classmethod: QuadtreeGrid2D.from_quadkeys + The grid cells at zoom level 12 are selected using the external boundary of RELM california region. + This grid can be used to create gridded datasets for earthquake forecasts. + + + Args: + magnitudes: Magnitude discretization + name: string + + Returns: + :class:`csep.core.spatial.QuadtreeGrid2D + + """ + # use default file path from python package + root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + filepath = os.path.join(root_dir, 'artifacts', 'Regions', 'california_qk_zoom=12.txt') + qk = numpy.genfromtxt(filepath, delimiter=',', dtype='str') + california_region = QuadtreeGrid2D.from_quadkeys(qk, magnitudes=magnitudes, name=name) + return california_region + +
+ +
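Usage sketch for the quadtree grids above (zoom level chosen for illustration):

from csep.core.regions import QuadtreeGrid2D

qgrid = QuadtreeGrid2D.from_single_resolution(zoom=5, magnitudes=[5.95, 6.95])
print(qgrid.num_nodes)          # 4**5 = 1024 cells covering the globe
areas = qgrid.get_cell_area()   # km^2 per cell; cells shrink toward the poles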
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/csep/utils/basic_types.html b/_modules/csep/utils/basic_types.html new file mode 100644 index 00000000..aff8b434 --- /dev/null +++ b/_modules/csep/utils/basic_types.html @@ -0,0 +1,291 @@ + + + + + + + + csep.utils.basic_types — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for csep.utils.basic_types

+import collections
+
+import numpy as np
+
+from csep.utils.calc import bin1d_vec
+
+
+def seq_iter(iterable):
+    """
+    helper function to handle iterating over a dict or list
+
+    should iterate using:
+
+    for idx in iterable:
+         value = iterable[idx]
+         ...
+
+    Args:
+        iterable: an iterable object
+
+    Returns:
+        key to access iterable
+
+    """
+    return iterable if isinstance(iterable, dict) else range(len(iterable))
+
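For example, the same loop body then works for both container types:

data_sets = ([10, 20], {'a': 10, 'b': 20})
for container in data_sets:
    for idx in seq_iter(container):
        print(idx, container[idx])   # 0 10 / 1 20, then a 10 / b 20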
+
+
+[docs]
+class AdaptiveHistogram:
+    """
+    Allows us to work with data that need to be discretized and aggregated even though the global min/max values
+    are not known beforehand. Data are discretized according to the dh and anchor positions and their extreme values.
+    If necessary, the range of the bin edges is expanded to accommodate new data.
+
+    Using this class incurs some additional overhead compared to simply binning and combining.
+
+    """
+[docs] + def __init__(self, dh=0.1, anchor=0.0): + self.dh = dh + self.anchor = anchor + self.data = np.array([]) + self.bins = np.array([])
+
+
+    def add(self, data):
+
+        if len(data) == 0:
+            return
+
+        # floating point arithmetic can be an issue here
+        data_min = np.min(data)
+        data_max = np.max(data)
+
+        # need to know the range of the data to be inserted on the discretized grid (min, max)
+        # this is to determine the discretization of the data
+        eps = np.finfo(np.float64).eps
+        disc_min = np.floor((data_min+eps-self.anchor)*self.rec_dh)/self.rec_dh+self.anchor
+        disc_max = np.ceil((data_max+eps-self.anchor)*self.rec_dh)/self.rec_dh+self.anchor
+
+        # compute new bin edges from data
+        new_bins = np.arange(disc_min, disc_max+self.dh/2, self.dh)
+
+        # merge data
+        self._merge(new_bins, data)
+
+    def _merge(self, bins, data):
+
+        # 1) current bins don't exist
+        if self.bins.size == 0:
+            self.bins = bins
+            self.data = np.zeros(len(self.bins))
+            idx = bin1d_vec(data, self.bins)
+            np.add.at(self.data, idx, 1)
+            return
+
+        # 2) new bins are a subset of current bins
+        if bins[0] >= self.bins[0] and bins[-1] <= self.bins[-1]:
+            idx = bin1d_vec(data, self.bins)
+            np.add.at(self.data, idx, 1)
+            return
+
+        # 3) new bins are outside current bins
+        if bins[0] < self.bins[0]:
+            bin_min = bins[0]
+        else:
+            bin_min = self.bins[0]
+
+        if bins[-1] > self.bins[-1]:
+            bin_max = bins[-1]
+        else:
+            bin_max = self.bins[-1]
+
+        # generate new bins
+        new_bins = np.arange(bin_min, bin_max+self.dh/2, self.dh)
+        tmp_data = np.zeros(len(new_bins))
+        # merge new data to new bins
+        # get old bin locations relative to new bins
+        idx = bin1d_vec(self.bins, new_bins)
+        # add old data
+        tmp_data[idx] = self.data
+        self.data = tmp_data
+        idx = bin1d_vec(data, new_bins)
+        np.add.at(self.data, idx, 1)
+        self.bins = new_bins
+        return
+
+    @property
+    def rec_dh(self):
+        return 1.0 / self.dh
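Usage sketch: the histogram grows its bin range as new batches arrive, so data can be streamed without knowing the global min/max up front (values are illustrative):

from csep.utils.basic_types import AdaptiveHistogram

hist = AdaptiveHistogram(dh=0.1, anchor=0.0)
hist.add([0.05, 0.15, 0.17])   # bins span roughly [0.0, 0.2]
hist.add([0.95])               # range expands to accommodate the new value
print(hist.bins, hist.data)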
+ + + +def transpose_dict(adict): + """Transposes a dict of dicts to regroup the data.""" + out = collections.defaultdict(dict) + for k,v in adict.items(): + for ik,iv in v.items(): + out[ik][k] = iv + return out +
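For example, a dict of per-model test scores can be regrouped per test (names are hypothetical), using transpose_dict as defined above:

scores = {'model_a': {'n_test': 0.4, 's_test': 0.7},
          'model_b': {'n_test': 0.9, 's_test': 0.2}}
by_test = transpose_dict(scores)
# {'n_test': {'model_a': 0.4, 'model_b': 0.9}, 's_test': {'model_a': 0.7, 'model_b': 0.2}}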
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/csep/utils/calc.html b/_modules/csep/utils/calc.html new file mode 100644 index 00000000..0b13db0f --- /dev/null +++ b/_modules/csep/utils/calc.html @@ -0,0 +1,416 @@ + + + + + + + + csep.utils.calc — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for csep.utils.calc

+# Third-party imports
+import numpy
+import scipy.interpolate
+
+# PyCSEP imports
+from csep.core.exceptions import CSEPException
+from csep.utils.stats import binned_ecdf, sup_dist, get_quantiles
+from csep.utils import flat_map_to_ndarray
+
+
+
+[docs]
+def nearest_index(array, value):
+    """
+    Returns the index of the element of array nearest to the specified value.
+    """
+    array = numpy.asarray(array)
+    idx = (numpy.abs(array - value)).argmin()
+    return idx
+ + +
+[docs]
+def find_nearest(array, value):
+    """
+    Returns the element of array nearest to the specified value.
+    """
+    array = numpy.asarray(array)
+    idx = nearest_index(array, value)
+    return array[idx]
+ + +
+[docs] +def func_inverse(x, y, val, kind='nearest', **kwargs): + """ + Returns the value of a function based on interpolation. + """ + f = scipy.interpolate.interp1d(x, y, kind=kind, **kwargs) + return f(val)
+ + +
+[docs] +def discretize(data, bin_edges, right_continuous=False): + """ + returns array with len(bin_edges) consisting of the discretized values from each bin. + instead of returning the counts of each bin, this will return an array with values + modified such that any value within bin_edges[0] <= x_new < bin_edges[1] ==> bin_edges[0]. + + This implementation forces you to define a bin edge that contains the data. + """ + bin_edges = numpy.array(bin_edges) + if bin_edges.size == 0: + raise ValueError("bin_edges must not be empty") + if bin_edges[1] < bin_edges[0]: + raise ValueError("bin_edges must be increasing") + data = numpy.array(data) + idx = bin1d_vec(data, bin_edges, right_continuous=right_continuous) + if numpy.any(idx == -1): + raise CSEPException("Discretized values should all be within bin_edges") + x_new = bin_edges[idx] + return x_new
+ + +
+[docs]
+def bin1d_vec(p, bins, tol=None, right_continuous=False):
+    """Efficient implementation of binning routine on 1D Cartesian Grid.
+
+    Returns the indices of the points into bins. Bins are inclusive on the lower bound
+    and exclusive on the upper bound. In the case where a point does not fall within the bins a -1
+    will be returned. The last bin extends to infinity when right_continuous is set as true.
+
+    Args:
+        p (array-like): Point(s) to be placed into bins
+        bins (array-like): bins to consider for binning, must be monotonically increasing
+        right_continuous (bool): if true, consider the last bin as extending to infinity
+
+    Returns:
+        idx (array-like): indexes hashed into grid
+
+    Raises:
+        ValueError: if the grid spacing is not positive
+    """
+    bins = numpy.array(bins)
+    p = numpy.array(p)
+    a0 = numpy.min(bins)
+    # if the user supplies only a single bin: force right_continuous to true; the value of h is arbitrary
+    if bins.size == 1:
+        right_continuous = True
+        h = 1
+    else:
+        h = bins[1] - bins[0]
+
+    a0_tol = numpy.abs(a0) * numpy.finfo(numpy.float64).eps
+    h_tol = numpy.abs(h) * numpy.finfo(numpy.float64).eps
+    p_tol = numpy.abs(p) * numpy.finfo(numpy.float64).eps
+
+    # absolute tolerance
+    if tol is None:
+        idx = numpy.floor((p + (p_tol + a0_tol) - a0) / (h - h_tol))
+    else:
+        idx = numpy.floor((p + (tol + a0_tol) - a0) / (h - h_tol))
+    if h < 0:
+        raise ValueError("grid spacing must be positive and monotonically increasing.")
+    # account for floating point uncertainties by considering extreme case
+
+    if right_continuous:
+        # set upper bin index to last
+        try:
+            idx[(idx < 0)] = -1
+            idx[(idx >= len(bins) - 1)] = len(bins) - 1
+        except TypeError:
+            if idx >= len(bins) - 1:
+                idx = len(bins) - 1
+            if idx < 0:
+                idx = -1
+    else:
+        try:
+            idx[((idx < 0) | (idx >= len(bins)))] = -1
+        except TypeError:
+            if idx < 0 or idx >= len(bins):
+                idx = -1
+    try:
+        idx = idx.astype(numpy.int64)
+    except AttributeError:
+        idx = int(idx)
+    return idx
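A small worked example of the binning semantics (lower edge inclusive, -1 for out-of-range points, last bin optionally extending to infinity):

import numpy
from csep.utils.calc import bin1d_vec

edges = numpy.array([0.0, 1.0, 2.0, 3.0])
pts = numpy.array([-0.5, 0.0, 1.7, 4.2])
print(bin1d_vec(pts, edges))                         # [-1  0  1 -1]
print(bin1d_vec(pts, edges, right_continuous=True))  # [-1  0  1  3]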
+
+def _compute_likelihood(gridded_data, apprx_rate_density, expected_cond_count, n_obs):
+    # compute pseudo likelihood
+    idx = gridded_data != 0
+
+    # this value is: -inf if there are observations at idx but no apprx_rate_density there,
+    #                -expected_cond_count if there are no target earthquakes
+    n_events = numpy.sum(gridded_data)
+    if n_events == 0:
+        return (-expected_cond_count, numpy.nan)
+    else:
+        with numpy.errstate(divide='ignore'):
+            likelihood = numpy.sum(gridded_data[idx] * numpy.log(apprx_rate_density[idx])) - expected_cond_count
+
+    # cannot compute the spatial statistic score if there are no target events or the forecast is undersampled
+    if n_obs == 0 or expected_cond_count == 0:
+        return (likelihood, numpy.nan)
+
+    # normalize the rate density to sum to unity
+    norm_apprx_rate_density = apprx_rate_density / numpy.sum(apprx_rate_density)
+
+    # value could be: -inf if no value in apprx_rate_density,
+    #                 nan if n_cat is 0
+    with numpy.errstate(divide='ignore'):
+        likelihood_norm = numpy.sum(gridded_data[idx] * numpy.log(norm_apprx_rate_density[idx])) / n_events
+
+    return (likelihood, likelihood_norm)
+
+def _compute_approximate_likelihood(gridded_data, apprx_forecasted_rate):
+    """ Computes the approximate likelihood from Rhoades et al., 2011; Equation 4
+
+    Args:
+        gridded_data (ndarray): observed counts on spatial grid
+        apprx_forecasted_rate (ndarray): mean rates from forecast
+
+    Notes:
+        Mean rates from the forecast are assumed to not have any zeros.
+    """
+    n_obs = numpy.sum(gridded_data)
+    return numpy.sum(gridded_data*numpy.log10(apprx_forecasted_rate)) - n_obs
+
+def _compute_spatial_statistic(gridded_data, log10_probability_map):
+    """
+    Aggregates the log10 probability map over the cells containing observed events.
+    Args:
+        gridded_data:
+        log10_probability_map:
+    """
+    # returns a unique set of indexes corresponding to cells where earthquakes occurred
+    # this should implement similar logic to the spatial tests wrt undersampling.
+    # technically, if there are no target eqs you can't compute this statistic.
+    if numpy.sum(gridded_data) == 0:
+        return numpy.nan
+    idx = numpy.unique(numpy.argwhere(gridded_data))
+    return numpy.sum(log10_probability_map[idx])
+
+def _distribution_test(stochastic_event_set_data, observation_data):
+
+    # for cached files we want to write this with memmap
+    union_catalog = flat_map_to_ndarray(stochastic_event_set_data)
+    min_time = 0.0
+    max_time = numpy.max([numpy.max(numpy.ceil(union_catalog)), numpy.max(numpy.ceil(observation_data))])
+
+    # build test_distribution with 100 data points
+    num_points = 100
+    tms = numpy.linspace(min_time, max_time, num_points, endpoint=True)
+
+    # get combined ecdf and obs ecdf
+    combined_ecdf = binned_ecdf(union_catalog, tms)
+    obs_ecdf = binned_ecdf(observation_data, tms)
+
+    # build test distribution
+    n_cat = len(stochastic_event_set_data)
+    test_distribution = []
+    for i in range(n_cat):
+        test_ecdf = binned_ecdf(stochastic_event_set_data[i], tms)
+        # a value of None indicates there were zero events in the catalog
+        if test_ecdf is not None:
+            d = sup_dist(test_ecdf[1], combined_ecdf[1])
+            test_distribution.append(d)
+    d_obs = sup_dist(obs_ecdf[1], combined_ecdf[1])
+
+    # score evaluation
+    _, quantile = get_quantiles(test_distribution, d_obs)
+
+    return test_distribution, d_obs, quantile
+
+def cleaner_range(start, end, h):
+    """ Returns array holding bin edges that doesn't contain floating point wander.
+
+    Floating point wander can occur when repeatedly adding floating point numbers together. The errors propagate and become worse over the sum.
+    This function generates the values on an integer grid and converts back to
+    floating point numbers through multiplication.
+
+    Args:
+        start (float): first bin edge
+        end (float): last bin edge
+        h (float): magnitude spacing
+
+    Returns:
+        bin_edges (numpy.ndarray)
+    """
+    # convert to integers to prevent accumulating floating point errors
+    const = 100000
+    start = numpy.floor(const * start)
+    end = numpy.floor(const * end)
+    d = const * h
+    return numpy.arange(start, end + d / 2, d) / const
+
+def first_nonnan(arr, axis=0, invalid_val=-1):
+    """ Returns index of first non-NaN value along axis, or invalid_val if all values are NaN. """
+    mask = arr == arr  # NaN is the only value for which x != x
+    return numpy.where(mask.any(axis=axis), mask.argmax(axis=axis), invalid_val)
+
+def last_nonnan(arr, axis=0, invalid_val=-1):
+    """ Returns index of last non-NaN value along axis, or invalid_val if all values are NaN. """
+    mask = arr == arr  # NaN is the only value for which x != x
+    val = arr.shape[axis] - numpy.flip(mask, axis=axis).argmax(axis=axis) - 1
+    return numpy.where(mask.any(axis=axis), val, invalid_val)
+
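+# To see why cleaner_range is preferred over repeated addition, compare the two
+# (an illustrative sketch; the 0.1 spacing is just an example):
+#
+#     >>> edges = [4.0]
+#     >>> while edges[-1] < 8.9 - 1e-12:
+#     ...     edges.append(edges[-1] + 0.1)
+#     >>> edges[-1]                            # accumulated wander, e.g. 8.899999999999984
+#     >>> cleaner_range(4.0, 8.9, 0.1)[-1]     # edges built on an integer grid
+#     8.9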
\ No newline at end of file diff --git a/_modules/csep/utils/comcat.html b/_modules/csep/utils/comcat.html new file mode 100644 index 00000000..b7473bd1 --- /dev/null +++ b/_modules/csep/utils/comcat.html @@ -0,0 +1,1484 @@
Source code for csep.utils.comcat

+# python imports
+from datetime import datetime, timedelta, timezone
+from urllib import request
+from urllib.error import HTTPError, URLError
+from urllib.parse import urlparse, urlencode
+import ssl
+import json
+import time
+from collections import OrderedDict
+import re
+from enum import Enum
+import sys
+
+# 3rd-party imports
+import numpy as np
+import pandas as pd
+
+# note: should consider how to remove these dependencies; should we just delete the functionality
+import dateutil
+from obspy.core.event import read_events
+
+# PyCSEP imports
+from csep.utils.time_utils import HistoricTime
+
+# url template for counting events
+HOST = 'earthquake.usgs.gov'
+SEARCH_TEMPLATE = 'https://[HOST]/fdsnws/event/1/query?format=geojson'
+TIMEOUT = 120  # how long do we wait for a url to return?
+TIMEFMT = '%Y-%m-%dT%H:%M:%S'
+WAITSECS = 3  # number of seconds to wait after failing download before trying again
+SEARCH_LIMIT = 20000  # maximum number of events ComCat will return in one search
+URL_TEMPLATE = ('https://earthquake.usgs.gov/earthquakes/feed'
+                '/v1.0/detail/[EVENTID].geojson')
+# the search template for a detail event that may
+# include one or both of includesuperseded/includedeleted.
+SEARCH_DETAIL_TEMPLATE = ('https://earthquake.usgs.gov/fdsnws/event/1/query'
+                          '?format=geojson&eventid=%s&'
+                          'includesuperseded=%s&includedeleted=%s')
+
+
+class VersionOption(Enum):
+    LAST = 1
+    FIRST = 2
+    ALL = 3
+    PREFERRED = 4
+
+
+
+
+def _get_time_segments(starttime, endtime, minmag):
+    if starttime is None:
+        starttime = HistoricTime.utcnow() - timedelta(days=30)
+    if endtime is None:
+        endtime = HistoricTime.utcnow()
+    # earthquake frequency table: minmag:earthquakes per day
+    freq_table = {0: 3000 / 7,
+                  1: 3500 / 14,
+                  2: 3000 / 18,
+                  3: 4000 / 59,
+                  4: 9000 / 151,
+                  5: 3000 / 365,
+                  6: 210 / 365,
+                  7: 20 / 365,
+                  8: 5 / 365,
+                  9: 0.05 / 365}
+
+    floormag = int(np.floor(minmag))
+    ndays = (endtime - starttime).days + 1
+    freq = freq_table[floormag]
+    nsegments = int(np.ceil((freq * ndays) / SEARCH_LIMIT))
+    days_per_segment = int(np.ceil(ndays / nsegments))
+    segments = []
+    startseg = starttime
+    endseg = starttime
+    while startseg <= endtime:
+        endseg = min(endtime, startseg + timedelta(days_per_segment))
+        segments.append((startseg, endseg))
+        startseg += timedelta(days=days_per_segment, microseconds=1)
+    return segments
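+# Worked example of the segmentation above (numbers are illustrative): with
+# minmag=4 the table predicts 9000/151 ~ 59.6 events/day, so a 731-day window
+# implies roughly 43,570 expected events. That exceeds SEARCH_LIMIT = 20000, so
+# the query is split into ceil(43570/20000) = 3 segments of ceil(731/3) = 244
+# days each.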
+
+def _search(**newargs):
+    if 'starttime' in newargs:
+        newargs['starttime'] = newargs['starttime'].strftime(TIMEFMT)
+    if 'endtime' in newargs:
+        newargs['endtime'] = newargs['endtime'].strftime(TIMEFMT)
+    if 'updatedafter' in newargs:
+        newargs['updatedafter'] = newargs['updatedafter'].strftime(TIMEFMT)
+    if 'host' in newargs and newargs['host'] is not None:
+        template = SEARCH_TEMPLATE.replace('[HOST]', newargs['host'])
+        del newargs['host']
+    else:
+        template = SEARCH_TEMPLATE.replace('[HOST]', HOST)
+
+    paramstr = urlencode(newargs)
+    url = template + '&' + paramstr
+    events = []
+    # handle the case when they're asking for an event id
+    if 'eventid' in newargs:
+        return DetailEvent(url)
+
+    try:
+        fh = request.urlopen(url, timeout=TIMEOUT)
+        data = fh.read().decode('utf8')
+        fh.close()
+        jdict = json.loads(data)
+        events = []
+        for feature in jdict['features']:
+            events.append(SummaryEvent(feature))
+    except HTTPError as htpe:
+        if htpe.code == 503:  # service unavailable: wait briefly and retry once
+            try:
+                time.sleep(WAITSECS)
+                fh = request.urlopen(url, timeout=TIMEOUT)
+                data = fh.read().decode('utf8')
+                fh.close()
+                jdict = json.loads(data)
+                events = []
+                for feature in jdict['features']:
+                    events.append(SummaryEvent(feature))
+            except Exception as msg:
+                raise Exception(
+                    'Error downloading data from url %s.  "%s".' % (url, msg))
+
+    except ssl.SSLCertVerificationError as SSLe:
+        # failed to verify the SSL certificate (e.g., a hostname mismatch)
+        if SSLe.verify_code == 62:
+            try:
+                context = ssl._create_unverified_context()
+                fh = request.urlopen(url, timeout=TIMEOUT, context=context)
+                data = fh.read().decode('utf8')
+                fh.close()
+                jdict = json.loads(data)
+                events = []
+                for feature in jdict['features']:
+                    events.append(SummaryEvent(feature))
+            except Exception as msg:
+                raise Exception(
+                    'Error downloading data from url %s.  "%s".' % (url, msg))
+
+    except URLError as URLe:
+        # URLError can wrap an SSL verification failure (e.g., a hostname
+        # mismatch) or a handshake error; retry without verification
+        if (isinstance(URLe.reason, ssl.SSLCertVerificationError) and URLe.reason.verify_code == 62) \
+                or (isinstance(URLe.reason, ssl.SSLError) and URLe.reason.errno == 5):
+            try:
+                context = ssl._create_unverified_context()
+                fh = request.urlopen(url, timeout=TIMEOUT, context=context)
+                data = fh.read().decode('utf8')
+                fh.close()
+                jdict = json.loads(data)
+                events = []
+                for feature in jdict['features']:
+                    events.append(SummaryEvent(feature))
+            except Exception as msg:
+                raise Exception(
+                    'Error downloading data from url %s.  "%s".' % (url, msg))
+
+    except Exception as msg:
+        raise Exception(
+            'Error downloading data from url %s.  "%s".' % (url, msg))
+
+    return events
+
+class SummaryEvent(object):
+    """Wrapper around summary feature as returned by ComCat GeoJSON search results.
+    """
+
+    def __init__(self, feature):
+        """Instantiate a SummaryEvent object with a feature.
+        See summary documentation here:
+        https://earthquake.usgs.gov/earthquakes/feed/v1.0/geojson.php
+        Args:
+            feature (dict): GeoJSON feature as described at above URL.
+        """
+        self._jdict = feature.copy()
+
+    @property
+    def location(self):
+        """Earthquake location string.
+        Returns:
+            str: Earthquake location.
+        """
+        return self._jdict['properties']['place']
+
+    @property
+    def url(self):
+        """ComCat URL.
+        Returns:
+            str: ComCat URL
+        """
+        return self._jdict['properties']['url']
+
+    @property
+    def latitude(self):
+        """Authoritative origin latitude.
+        Returns:
+            float: Authoritative origin latitude.
+        """
+        return self._jdict['geometry']['coordinates'][1]
+
+    @property
+    def longitude(self):
+        """Authoritative origin longitude.
+        Returns:
+            float: Authoritative origin longitude.
+        """
+        return self._jdict['geometry']['coordinates'][0]
+
+    @property
+    def depth(self):
+        """Authoritative origin depth.
+        Returns:
+            float: Authoritative origin depth.
+        """
+        return self._jdict['geometry']['coordinates'][2]
+
+    @property
+    def id(self):
+        """Authoritative origin ID.
+        Returns:
+            str: Authoritative origin ID.
+        """
+        ## comcat has an id key in each feature, whereas bsi has eventId within the properties dict
+        try:
+            return self._jdict['id']
+        except KeyError:
+            return self._jdict['properties']['eventId']
+
+    @property
+    def time(self):
+        """Authoritative origin time.
+        Returns:
+            datetime: Authoritative origin time.
+        """
+        time_in_msec = self._jdict['properties']['time']
+        # ComCat gives the event time as a millisecond timestamp, whereas BSI
+        # gives it as an ISO-format datetime string
+        if isinstance(time_in_msec, str):
+            event_dtime = datetime.fromisoformat(time_in_msec).replace(tzinfo=timezone.utc)
+            time_in_msec = event_dtime.timestamp() * 1000
+        time_in_sec = time_in_msec // 1000
+        msec = time_in_msec - (time_in_sec * 1000)
+        dtime = datetime.utcfromtimestamp(time_in_sec)
+        dt = timedelta(milliseconds=msec)
+        dtime = dtime + dt
+        return dtime
+
+    @property
+    def magnitude(self):
+        """Authoritative origin magnitude.
+        Returns:
+            float: Authoritative origin magnitude.
+        """
+        return self._jdict['properties']['mag']
+
+    def __repr__(self):
+        tpl = (self.id, str(self.time), self.latitude,
+               self.longitude, self.depth, self.magnitude)
+        return '%s %s (%.3f,%.3f) %.1f km M%.1f' % tpl
+
+    @property
+    def properties(self):
+        """List of summary event properties.
+        Returns:
+            list: List of summary event properties (retrievable
+                  from object with [] operator).
+        """
+        return list(self._jdict['properties'].keys())
+
+    def hasProduct(self, product):
+        """Test to see whether a given product exists for this event.
+        Args:
+            product (str): Product to search for.
+        Returns:
+            bool: Indicates whether that product exists or not.
+        """
+        if product not in self._jdict['properties']['types'].split(',')[1:]:
+            return False
+        return True
+
+    def hasProperty(self, key):
+        """Test to see if property is present in list of properties.
+        Args:
+            key (str): Property to search for.
+        Returns:
+          bool: Indicates whether that key exists or not.
+        """
+        if key not in self._jdict['properties']:
+            return False
+        return True
+
+    def __getitem__(self, key):
+        """Extract SummaryEvent property using the [] operator.
+        Args:
+            key (str): Property to extract.
+        Returns:
+            str: Desired property.
+        """
+        if key not in self._jdict['properties']:
+            raise AttributeError(
+                'No property %s found for event %s.' % (key, self.id))
+        return self._jdict['properties'][key]
+
+    def getDetailURL(self):
+        """Instantiate a DetailEvent object from the URL found in the summary.
+        Returns:
+            str: URL for detailed version of event.
+        """
+        durl = self._jdict['properties']['detail']
+        return durl
+
+    def getDetailEvent(self, includedeleted=False, includesuperseded=False):
+        """Instantiate a DetailEvent object from the URL found in the summary.
+        Args:
+            includedeleted (bool): Boolean indicating whether to return
+                versions of products that have
+                been deleted. Cannot be used with
+                includesuperseded.
+            includesuperseded (bool):
+                Boolean indicating whether to return versions of products
+                that have been replaced by newer versions.
+                Cannot be used with includedeleted.
+        Returns:
+            DetailEvent: Detailed version of SummaryEvent.
+        """
+        if includesuperseded and includedeleted:
+            msg = ('includedeleted and includesuperseded '
+                   'cannot be used together.')
+            raise RuntimeError(msg)
+        if not includedeleted and not includesuperseded:
+            durl = self._jdict['properties']['detail']
+            return DetailEvent(durl)
+        else:
+            true_false = {True: 'true', False: 'false'}
+            deleted = true_false[includedeleted]
+            superseded = true_false[includesuperseded]
+            url = SEARCH_DETAIL_TEMPLATE % (self.id, superseded, deleted)
+            return DetailEvent(url)
+
+    def toDict(self):
+        """Render the SummaryEvent origin information as an OrderedDict().
+        Returns:
+            dict: Containing fields:
+               - id (string) Authoritative ComCat event ID.
+               - time (datetime) Authoritative event origin time.
+               - latitude (float) Authoritative event latitude.
+               - longitude (float) Authoritative event longitude.
+               - depth (float) Authoritative event depth.
+               - magnitude (float) Authoritative event magnitude.
+        """
+        edict = OrderedDict()
+        edict['id'] = self.id
+        edict['time'] = self.time
+        edict['location'] = self.location
+        edict['latitude'] = self.latitude
+        edict['longitude'] = self.longitude
+        edict['depth'] = self.depth
+        edict['magnitude'] = self.magnitude
+        edict['url'] = self.url
+        return edict
+
+class DetailEvent(object):
+    """Wrapper around detailed event as returned by ComCat GeoJSON search results.
+    """
+
+    def __init__(self, url):
+        """Instantiate a DetailEvent object with a url pointing to detailed GeoJSON.
+        See detailed documentation here:
+        https://earthquake.usgs.gov/earthquakes/feed/v1.0/geojson_detail.php
+        Args:
+            url (str): String indicating a URL pointing to a detailed GeoJSON event.
+        """
+        try:
+            fh = request.urlopen(url, timeout=TIMEOUT)
+            data = fh.read().decode('utf-8')
+            fh.close()
+            self._jdict = json.loads(data)
+        except HTTPError:
+            try:
+                fh = request.urlopen(url, timeout=TIMEOUT)
+                data = fh.read().decode('utf-8')
+                fh.close()
+                self._jdict = json.loads(data)
+            except Exception as msg:
+                raise Exception('Could not connect to ComCat server - %s.' %
+                                url).with_traceback(msg.__traceback__)
+
+    def __repr__(self):
+        tpl = (self.id, str(self.time), self.latitude,
+               self.longitude, self.depth, self.magnitude)
+        return '%s %s (%.3f,%.3f) %.1f km M%.1f' % tpl
+
+    @property
+    def location(self):
+        """Earthquake location string.
+        Returns:
+            str: Earthquake location.
+        """
+        return self._jdict['properties']['place']
+
+    @property
+    def url(self):
+        """ComCat URL.
+        Returns:
+            str: Earthquake URL.
+        """
+        return self._jdict['properties']['url']
+
+    @property
+    def detail_url(self):
+        """ComCat Detailed URL (with JSON).
+        Returns:
+            str: Earthquake Detailed URL with JSON.
+        """
+        url = URL_TEMPLATE.replace('[EVENTID]', self.id)
+        return url
+
+    @property
+    def latitude(self):
+        """Authoritative origin latitude.
+        Returns:
+            float: Authoritative origin latitude.
+        """
+        return self._jdict['geometry']['coordinates'][1]
+
+    @property
+    def longitude(self):
+        """Authoritative origin longitude.
+        Returns:
+            float: Authoritative origin longitude.
+        """
+        return self._jdict['geometry']['coordinates'][0]
+
+    @property
+    def depth(self):
+        """Authoritative origin depth.
+        """
+        return self._jdict['geometry']['coordinates'][2]
+
+    @property
+    def id(self):
+        """Authoritative origin ID.
+        Returns:
+            str: Authoritative origin ID.
+        """
+        return self._jdict['id']
+
+    @property
+    def time(self):
+        """Authoritative origin time.
+        Returns:
+            datetime: Authoritative origin time.
+        """
+        time_in_msec = self._jdict['properties']['time']
+        time_in_sec = time_in_msec // 1000
+        msec = time_in_msec - (time_in_sec * 1000)
+        dtime = datetime.utcfromtimestamp(time_in_sec)
+        dt = timedelta(milliseconds=msec)
+        dtime = dtime + dt
+        return dtime
+
+    @property
+    def magnitude(self):
+        """Authoritative origin magnitude.
+        Returns:
+            float: Authoritative origin magnitude.
+        """
+        return self._jdict['properties']['mag']
+
+    @property
+    def magtype(self):
+        return self._jdict['properties']['magType']
+
+    @property
+    def properties(self):
+        """List of detail event properties.
+        Returns:
+            list: List of summary event properties (retrievable from object with [] operator).
+        """
+        return list(self._jdict['properties'].keys())
+
+    @property
+    def products(self):
+        """List of detail event properties.
+        Returns:
+            list: List of detail event products (retrievable from object with
+                getProducts() method).
+        """
+        return list(self._jdict['properties']['products'].keys())
+
+    def hasProduct(self, product):
+        """Return a boolean indicating whether given product can be extracted from DetailEvent.
+        Args:
+            product (str): Product to search for.
+        Returns:
+            bool: Indicates whether that product exists or not.
+        """
+        if product in self._jdict['properties']['products']:
+            return True
+        return False
+
+    def hasProperty(self, key):
+        """Test to see whether a property with a given key is present in list of properties.
+        Args:
+            key (str): Property to search for.
+        Returns:
+            bool: Indicates whether that key exists or not.
+        """
+        if key not in self._jdict['properties']:
+            return False
+        return True
+
+    def __getitem__(self, key):
+        """Extract DetailEvent property using the [] operator.
+        Args:
+            key (str): Property to extract.
+        Returns:
+            str: Desired property.
+        """
+        if key not in self._jdict['properties']:
+            raise AttributeError(
+                'No property %s found for event %s.' % (key, self.id))
+        return self._jdict['properties'][key]
+
+    def toDict(self, catalog=None,
+               get_tensors='preferred',
+               get_moment_supplement=False,
+               get_all_magnitudes=False,
+               get_focals='preferred'):
+        """Return origin, focal mechanism, and tensor information for a DetailEvent.
+        Args:
+            catalog (str): Retrieve the primary event information (time,lat,lon...) from the
+                catalog given. If no source for this information exists, an
+                AttributeError will be raised.
+            get_tensors (str): Option of 'none', 'preferred', or 'all'.
+            get_moment_supplement (bool): Boolean indicating whether derived origin and
+                double-couple/source time information should be extracted
+                (when available.)
+            get_focals (str): String option of 'none', 'preferred', or 'all'.
+            get_all_magnitudes (bool): Boolean indicating whether all available
+                magnitude solutions should be extracted (requires ObsPy).
+        Returns:
+            dict: OrderedDict with the same fields as returned by
+                SummaryEvent.toDict(), *preferred* moment tensor and focal
+                mechanism data.  If all magnitudes are requested, then
+                those will be returned as well. Generally speaking, the
+                number and name of the fields will vary by what data is available.
+        """
+        edict = OrderedDict()
+
+        if catalog is None:
+            edict['id'] = self.id
+            edict['time'] = self.time
+            edict['location'] = self.location
+            edict['latitude'] = self.latitude
+            edict['longitude'] = self.longitude
+            edict['depth'] = self.depth
+            edict['magnitude'] = self.magnitude
+            edict['magtype'] = self._jdict['properties']['magType']
+            edict['url'] = self.url
+        else:
+            try:
+                phase_sources = []
+                origin_sources = []
+                if self.hasProduct('phase-data'):
+                    phase_sources = [p.source for p in self.getProducts(
+                        'phase-data', source='all')]
+                if self.hasProduct('origin'):
+                    origin_sources = [
+                        o.source for o in self.getProducts('origin',
+                                                           source='all')]
+                if catalog in phase_sources:
+                    phasedata = self.getProducts(
+                        'phase-data', source=catalog)[0]
+                elif catalog in origin_sources:
+                    phasedata = self.getProducts('origin', source=catalog)[0]
+                else:
+                    msg = ('DetailEvent %s has no phase-data or origin '
+                           'products for source %s')
+                    raise AttributeError(msg % (self.id, catalog))
+                edict['id'] = phasedata['eventsource'] + \
+                    phasedata['eventsourcecode']
+                edict['time'] = dateutil.parser.parse(phasedata['eventtime'])
+                edict['location'] = self.location
+                edict['latitude'] = float(phasedata['latitude'])
+                edict['longitude'] = float(phasedata['longitude'])
+                edict['depth'] = float(phasedata['depth'])
+                edict['magnitude'] = float(phasedata['magnitude'])
+                edict['magtype'] = phasedata['magnitude-type']
+            except AttributeError as ae:
+                raise ae
+
+        if get_tensors == 'all':
+            if self.hasProduct('moment-tensor'):
+                tensors = self.getProducts(
+                    'moment-tensor', source='all', version=VersionOption.ALL)
+                for tensor in tensors:
+                    supp = get_moment_supplement
+                    tdict = _get_moment_tensor_info(tensor,
+                                                    get_angles=True,
+                                                    get_moment_supplement=supp)
+                    edict.update(tdict)
+
+        if get_tensors == 'preferred':
+            if self.hasProduct('moment-tensor'):
+                tensor = self.getProducts('moment-tensor')[0]
+                supp = get_moment_supplement
+                tdict = _get_moment_tensor_info(tensor, get_angles=True,
+                                                get_moment_supplement=supp)
+                edict.update(tdict)
+
+        if get_focals == 'all':
+            if self.hasProduct('focal-mechanism'):
+                focals = self.getProducts(
+                    'focal-mechanism', source='all', version=VersionOption.ALL)
+                for focal in focals:
+                    edict.update(_get_focal_mechanism_info(focal))
+
+        if get_focals == 'preferred':
+            if self.hasProduct('focal-mechanism'):
+                focal = self.getProducts('focal-mechanism')[0]
+                edict.update(_get_focal_mechanism_info(focal))
+
+        # dependency on obspy for this function that we might not use
+        if get_all_magnitudes:
+            phase_data = self.getProducts('phase-data')[0]
+            phase_url = phase_data.getContentURL('quakeml.xml')
+            catalog = read_events(phase_url)
+            event = catalog.events[0]
+            imag = 1
+            for magnitude in event.magnitudes:
+                edict['magnitude%i' % imag] = magnitude.mag
+                edict['magtype%i' %
+                      imag] = magnitude.magnitude_type
+                imag += 1
+
+        return edict
+
+    def getNumVersions(self, product_name):
+        """Count versions of a product (origin, shakemap, etc.) available.
+        Args:
+            product_name (str): Name of product to query.
+        Returns:
+            int: Number of versions of a given product.
+        """
+        if not self.hasProduct(product_name):
+            raise AttributeError(
+                'Event %s has no product of type %s' % (self.id, product_name))
+        return len(self._jdict['properties']['products'][product_name])
+
+    def getProducts(self, product_name, source='preferred',
+                    version=VersionOption.PREFERRED):
+        """Retrieve a Product object from this DetailEvent.
+        Args:
+            product_name (str): Name of product (origin, shakemap, etc.) to retrieve.
+            version (enum): A value from VersionOption (PREFERRED,FIRST,ALL).
+            source (str): Any one of:
+                - 'preferred' Get version(s) of products from preferred source.
+                - 'all' Get version(s) of products from all sources.
+                - Any valid source network for this type of product
+                  ('us','ak',etc.)
+        Returns:
+          list: List of Product objects.
+        """
+        if not self.hasProduct(product_name):
+            raise AttributeError(
+                'Event %s has no product of type %s' % (self.id, product_name))
+
+        products = self._jdict['properties']['products'][product_name]
+        weights = [product['preferredWeight'] for product in products]
+        sources = [product['source'] for product in products]
+        times = [product['updateTime'] for product in products]
+        indices = list(range(0, len(times)))
+        df = pd.DataFrame(
+            {'weight': weights, 'source': sources,
+             'time': times, 'index': indices})
+        # we need to add a version number column here, ordinal
+        # sorted by update time, starting at 1
+        # for each unique source.
+        # first sort the dataframe by source and then time
+        df = df.sort_values(['source', 'time'])
+        df['version'] = 0
+        psources = []
+        pversion = 1
+        for idx, row in df.iterrows():
+            if row['source'] not in psources:
+                psources.append(row['source'])
+                pversion = 1
+            df.loc[idx, 'version'] = pversion
+            pversion += 1
+
+        if source == 'preferred':
+            idx = weights.index(max(weights))
+            tproduct = self._jdict['properties']['products'][product_name][idx]
+            prefsource = tproduct['source']
+            df = df[df['source'] == prefsource]
+            df = df.sort_values('time')
+        elif source == 'all':
+            df = df.sort_values(['source', 'time'])
+        else:
+            df = df[df['source'] == source]
+            df = df.sort_values('time')
+
+        # if we don't have any versions of products, raise an exception
+        if not len(df):
+            raise AttributeError('No products found for source "%s".' % source)
+
+        products = []
+        usources = set(sources)
+        tproducts = self._jdict['properties']['products'][product_name]
+        if source == 'all':  # dataframe includes all sources
+            for source in usources:
+                df_source = df[df['source'] == source]
+                df_source = df_source.sort_values('time')
+                if version == VersionOption.PREFERRED:
+                    df_source = df_source.sort_values(['weight', 'time'])
+                    idx = df_source.iloc[-1]['index']
+                    pversion = df_source.iloc[-1]['version']
+                    product = Product(product_name, pversion, tproducts[idx])
+                    products.append(product)
+                elif version == VersionOption.LAST:
+                    idx = df_source.iloc[-1]['index']
+                    pversion = df_source.iloc[-1]['version']
+                    product = Product(product_name, pversion, tproducts[idx])
+                    products.append(product)
+                elif version == VersionOption.FIRST:
+                    idx = df_source.iloc[0]['index']
+                    pversion = df_source.iloc[0]['version']
+                    product = Product(product_name, pversion, tproducts[idx])
+                    products.append(product)
+                elif version == VersionOption.ALL:
+                    for idx, row in df_source.iterrows():
+                        idx = row['index']
+                        pversion = row['version']
+                        product = Product(
+                            product_name, pversion, tproducts[idx])
+                        products.append(product)
+                else:
+                    raise AttributeError(
+                        'No VersionOption defined for %s' % version)
+        else:  # dataframe only includes one source
+            if version == VersionOption.PREFERRED:
+                df = df.sort_values(['weight', 'time'])
+                idx = df.iloc[-1]['index']
+                pversion = df.iloc[-1]['version']
+                product = Product(
+                    product_name, pversion, tproducts[idx])
+                products.append(product)
+            elif version == VersionOption.LAST:
+                idx = df.iloc[-1]['index']
+                pversion = df.iloc[-1]['version']
+                product = Product(
+                    product_name, pversion, tproducts[idx])
+                products.append(product)
+            elif version == VersionOption.FIRST:
+                idx = df.iloc[0]['index']
+                pversion = df.iloc[0]['version']
+                product = Product(
+                    product_name, pversion, tproducts[idx])
+                products.append(product)
+            elif version == VersionOption.ALL:
+                for idx, row in df.iterrows():
+                    idx = row['index']
+                    pversion = row['version']
+                    product = Product(
+                        product_name, pversion, tproducts[idx])
+                    products.append(product)
+            else:
+                msg = 'No VersionOption defined for %s' % version
+                raise AttributeError(msg)
+
+        return products
+
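+# Example usage (an illustrative sketch; the event id and product names are
+# hypothetical and the calls require network access): fetch the preferred
+# ShakeMap product for an event and download its grid file.
+#
+#     >>> detail = get_event_by_id('us70008dx7')            # defined below
+#     >>> if detail.hasProduct('shakemap'):
+#     ...     shakemap = detail.getProducts('shakemap')[0]  # preferred source
+#     ...     shakemap.getContent('grid.xml', 'grid.xml')   # shortest match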
+class Product(object):
+    """Class describing a Product from detailed GeoJSON feed.
+    """
+
+    def __init__(self, product_name, version, product):
+        """Create a product class from product in detailed GeoJSON.
+        Args:
+            product_name (str): Name of Product (origin, shakemap, etc.)
+            version (int): Best guess as to ordinal version of the product.
+            product (dict): Product data to be copied from DetailEvent.
+        """
+        self._product_name = product_name
+        self._version = version
+        self._product = product.copy()
+
+    def getContentsMatching(self, regexp):
+        """Find all contents that match the input regex, shortest to longest.
+        Args:
+            regexp (str): Regular expression which should match one of the content files
+                in the Product.
+        Returns:
+            list: List of contents matching input regex.
+        """
+        contents = []
+        if not len(self._product['contents']):
+            return contents
+
+        for contentkey in self._product['contents'].keys():
+            url = self._product['contents'][contentkey]['url']
+            parts = urlparse(url)
+            fname = parts.path.split('/')[-1]
+            if re.search(regexp + '$', fname):
+                contents.append(fname)
+        return contents
+
+    def __repr__(self):
+        ncontents = len(self._product['contents'])
+        tpl = (self._product_name, self.source, self.update_time, ncontents)
+        return ('Product %s from %s updated %s '
+                'containing %i content files.' % tpl)
+
+    def getContentName(self, regexp):
+        """Get the shortest filename matching input regular expression.
+        For example, if the shakemap product has contents called
+        grid.xml and grid.xml.zip, and the input regexp is grid.xml,
+        then grid.xml will be matched.
+        Args:
+            regexp (str): Regular expression to use to search for matching contents.
+        Returns:
+            str: Shortest file name to match input regexp, or None if
+                 no matches found.
+        """
+        content_name = 'a' * 1000  # sentinel longer than any expected filename
+        found = False
+        for contentkey, content in self._product['contents'].items():
+            if re.search(regexp + '$', contentkey) is None:
+                continue
+            url = content['url']
+            parts = urlparse(url)
+            fname = parts.path.split('/')[-1]
+            if len(fname) < len(content_name):
+                content_name = fname
+                found = True
+        if found:
+            return content_name
+        else:
+            return None
+
+    def getContentURL(self, regexp):
+        """Get the URL for the shortest filename matching input regular expression.
+        For example, if the shakemap product has contents called grid.xml and
+        grid.xml.zip, and the input regexp is grid.xml, then grid.xml will be
+        matched.
+        Args:
+            regexp (str): Regular expression to use to search for matching contents.
+        Returns:
+            str: URL for shortest file name to match input regexp, or
+                 None if no matches found.
+        """
+        content_name = 'a' * 1000
+        found = False
+        content_url = ''
+        for contentkey, content in self._product['contents'].items():
+            if re.search(regexp + '$', contentkey) is None:
+                continue
+            url = content['url']
+            parts = urlparse(url)
+            fname = parts.path.split('/')[-1]
+            if len(fname) < len(content_name):
+                content_name = fname
+                content_url = url
+                found = True
+        if found:
+            return content_url
+        else:
+            return None
+
+    def getContent(self, regexp, filename):
+        """Download the shortest file name matching the input regular expression.
+        Args:
+            regexp (str): Regular expression which should match one of the
+                content files in the Product.
+            filename (str): Filename to which content should be downloaded.
+        Returns:
+            str: The URL from which the content was downloaded.
+        Raises:
+          Exception: If content could not be downloaded from ComCat
+              after two tries.
+        """
+        data, url = self.getContentBytes(regexp)
+
+        f = open(filename, 'wb')
+        f.write(data)
+        f.close()
+
+        return url
+
+    def getContentBytes(self, regexp):
+        """Return bytes of shortest file name matching input regular expression.
+        Args:
+            regexp (str): Regular expression which should match one of the
+                content files in
+                the Product.
+        Returns:
+            tuple: (array of bytes containing file contents, source url)
+                Bytes can be decoded to UTF-8 by the user if file contents are known
+                to be ASCII.  i.e.,
+                product.getContentBytes('info.json').decode('utf-8')
+        Raises:
+            Exception: If content could not be downloaded from ComCat
+                after two tries.
+        """
+        content_name = 'a' * 1000
+        content_url = None
+        for contentkey, content in self._product['contents'].items():
+            if re.search(regexp + '$', contentkey) is None:
+                continue
+            url = content['url']
+            parts = urlparse(url)
+            fname = parts.path.split('/')[-1]
+            if len(fname) < len(content_name):
+                content_name = fname
+                content_url = url
+        if content_url is None:
+            raise AttributeError(
+                'Could not find any content matching input %s' % regexp)
+
+        try:
+            # use content_url (the shortest match), not the last url seen in the loop
+            fh = request.urlopen(content_url, timeout=TIMEOUT)
+            data = fh.read()
+            fh.close()
+
+        except HTTPError:
+            time.sleep(WAITSECS)
+            try:
+                fh = request.urlopen(content_url, timeout=TIMEOUT)
+                data = fh.read()
+                fh.close()
+            except Exception:
+                raise Exception('Could not download %s from %s.' %
+                                (content_name, content_url))
+
+        return (data, content_url)
+
+    def hasProperty(self, key):
+        """Determine if this Product contains a given property.
+        Args:
+            key (str): Property to search for.
+        Returns:
+            bool: Indicates whether that key exists or not.
+        """
+        if key not in self._product['properties']:
+            return False
+        return True
+
+    @property
+    def preferred_weight(self):
+        """The weight assigned to this product by ComCat.
+        Returns:
+            float: weight assigned to this product by ComCat.
+        """
+        return self._product['preferredWeight']
+
+    @property
+    def source(self):
+        """The contributing source for this product.
+        Returns:
+            str: contributing source for this product.
+        """
+        return self._product['source']
+
+    @property
+    def product_timestamp(self):
+        """The timestamp for this product.
+        Returns:
+            int: The timestamp for this product (effectively used as
+                version number by ComCat).
+        """
+        time_in_msec = self._product['updateTime']
+        return time_in_msec
+
+    @property
+    def update_time(self):
+        """The datetime for when this product was updated.
+        Returns:
+            datetime: datetime for when this product was updated.
+        """
+        time_in_msec = self._product['updateTime']
+        time_in_sec = time_in_msec // 1000
+        msec = time_in_msec - (time_in_sec * 1000)
+        dtime = datetime.utcfromtimestamp(time_in_sec)
+        dt = timedelta(milliseconds=msec)
+        dtime = dtime + dt
+        return dtime
+
+    @property
+    def version(self):
+        """The best guess for the ordinal version number of this product.
+        Returns:
+            int: best guess for the ordinal version number of this product.
+        """
+        return self._version
+
+    @property
+    def properties(self):
+        """List of product properties.
+        Returns:
+            list: List of product properties (retrievable from object with [] operator).
+        """
+        return list(self._product['properties'].keys())
+
+    @property
+    def contents(self):
+        """List of product properties.
+        Returns:
+            list: List of product properties (retrievable with getContent() method).
+        """
+        return list(self._product['contents'].keys())
+
+    def __getitem__(self, key):
+        """Extract Product property using the [] operator.
+        Args:
+            key (str): Property to extract.
+        Returns:
+            str: Desired property.
+        """
+        if key not in self._product['properties']:
+            msg = 'No property %s found in %s product.' % (
+                key, self._product_name)
+            raise AttributeError(msg)
+        return self._product['properties'][key]
+
+def _get_moment_tensor_info(tensor, get_angles=False,
+                            get_moment_supplement=False):
+    """Internal - gather up tensor components and focal mechanism angles.
+    """
+    msource = tensor['eventsource']
+    if tensor.hasProperty('derived-magnitude-type'):
+        msource += '_' + tensor['derived-magnitude-type']
+    elif tensor.hasProperty('beachball-type'):
+        btype = tensor['beachball-type']
+        if btype.find('/') > -1:
+            btype = btype.split('/')[-1]
+        msource += '_' + btype
+
+    edict = OrderedDict()
+    edict['%s_mrr' % msource] = float(tensor['tensor-mrr'])
+    edict['%s_mtt' % msource] = float(tensor['tensor-mtt'])
+    edict['%s_mpp' % msource] = float(tensor['tensor-mpp'])
+    edict['%s_mrt' % msource] = float(tensor['tensor-mrt'])
+    edict['%s_mrp' % msource] = float(tensor['tensor-mrp'])
+    edict['%s_mtp' % msource] = float(tensor['tensor-mtp'])
+    if get_angles and tensor.hasProperty('nodal-plane-1-strike'):
+        edict['%s_np1_strike' % msource] = tensor['nodal-plane-1-strike']
+        edict['%s_np1_dip' % msource] = tensor['nodal-plane-1-dip']
+        if tensor.hasProperty('nodal-plane-1-rake'):
+            edict['%s_np1_rake' % msource] = tensor['nodal-plane-1-rake']
+        else:
+            edict['%s_np1_rake' % msource] = tensor['nodal-plane-1-slip']
+        edict['%s_np2_strike' % msource] = tensor['nodal-plane-2-strike']
+        edict['%s_np2_dip' % msource] = tensor['nodal-plane-2-dip']
+        if tensor.hasProperty('nodal-plane-2-rake'):
+            edict['%s_np2_rake' % msource] = tensor['nodal-plane-2-rake']
+        else:
+            edict['%s_np2_rake' % msource] = tensor['nodal-plane-2-slip']
+
+    if get_moment_supplement:
+        if tensor.hasProperty('derived-latitude'):
+            edict['%s_derived_latitude' % msource] = float(
+                tensor['derived-latitude'])
+            edict['%s_derived_longitude' % msource] = float(
+                tensor['derived-longitude'])
+            edict['%s_derived_depth' % msource] = float(
+                tensor['derived-depth'])
+        if tensor.hasProperty('percent-double-couple'):
+            edict['%s_percent_double_couple' % msource] = float(
+                tensor['percent-double-couple'])
+        if tensor.hasProperty('sourcetime-duration'):
+            edict['%s_sourcetime_duration' % msource] = float(
+                tensor['sourcetime-duration'])
+
+    return edict
+
+def _get_focal_mechanism_info(focal):
+    """Internal - gather up focal mechanism angles.
+    """
+    msource = focal['eventsource']
+    eventid = msource + focal['eventsourcecode']
+    edict = OrderedDict()
+    try:
+        edict['%s_np1_strike' % msource] = focal['nodal-plane-1-strike']
+    except Exception:
+        sys.stderr.write(
+            'No focal angles for %s in detailed geojson.\n' % eventid)
+        return edict
+    edict['%s_np1_dip' % msource] = focal['nodal-plane-1-dip']
+    if focal.hasProperty('nodal-plane-1-rake'):
+        edict['%s_np1_rake' % msource] = focal['nodal-plane-1-rake']
+    else:
+        edict['%s_np1_rake' % msource] = focal['nodal-plane-1-slip']
+    edict['%s_np2_strike' % msource] = focal['nodal-plane-2-strike']
+    edict['%s_np2_dip' % msource] = focal['nodal-plane-2-dip']
+    if focal.hasProperty('nodal-plane-2-rake'):
+        edict['%s_np2_rake' % msource] = focal['nodal-plane-2-rake']
+    else:
+        edict['%s_np2_rake' % msource] = focal['nodal-plane-2-slip']
+    return edict
+
+
+def get_event_by_id(eventid, catalog=None,
+                    includedeleted=False,
+                    includesuperseded=False,
+                    host=None):
+    """Search the ComCat database for an event matching the input event id.
+    This search function is a wrapper around the ComCat Web API described here:
+    https://earthquake.usgs.gov/fdsnws/event/1/
+    Some of the search parameters described there are NOT implemented here, usually because they do not
+    apply to GeoJSON search results, which we are getting here and parsing into Python data structures.
+    This function returns a DetailEvent object, described elsewhere in this package.
+
+    Args:
+        eventid (str): Select a specific event by ID; event identifiers are data center specific.
+        catalog (str): Limit to events from the specified catalog (ComCat query parameter).
+        includesuperseded (bool):
+            Specify if superseded products should be included. This also includes all
+            deleted products, and is mutually exclusive to the includedeleted parameter.
+        includedeleted (bool): Specify if deleted products should be included.
+        host (str): Replace default ComCat host (earthquake.usgs.gov) with a custom host.
+    Returns: DetailEvent object.
+    """
+    # getting the inputargs must be the first line of the method!
+    inputargs = locals().copy()
+    newargs = {}
+    for key, value in inputargs.items():
+        if value is True:
+            newargs[key] = 'true'
+            continue
+        if value is False:
+            newargs[key] = 'false'
+            continue
+        if value is None:
+            continue
+        newargs[key] = value
+
+    event = _search(**newargs)  # this should be a DetailEvent
+    return event
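+# Example usage (an illustrative sketch; the event id is hypothetical and the
+# call requires network access):
+#
+#     >>> detail = get_event_by_id('ci38457511')
+#     >>> print(detail)                      # id, time, (lat, lon), depth, mag
+#     >>> edict = detail.toDict(get_tensors='preferred')
+#     >>> edict['magnitude'], edict['magtype']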
\ No newline at end of file diff --git a/_modules/csep/utils/plots.html b/_modules/csep/utils/plots.html new file mode 100644 index 00000000..555e20a3 --- /dev/null +++ b/_modules/csep/utils/plots.html @@ -0,0 +1,2993 @@

Source code for csep.utils.plots

+import time
+
+# Third-party imports
+import numpy
+import string
+import pandas
+from scipy.integrate import cumulative_trapezoid
+import scipy.stats
+import matplotlib
+import matplotlib.lines
+from matplotlib import cm
+from matplotlib.collections import PatchCollection
+import matplotlib.pyplot as pyplot
+import cartopy
+import cartopy.crs as ccrs
+from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
+from cartopy.io import img_tiles
+# PyCSEP imports
+from csep.utils.constants import SECONDS_PER_DAY, CSEP_MW_BINS
+from csep.utils.calc import bin1d_vec
+from csep.utils.time_utils import datetime_to_utc_epoch
+
+"""
+This module contains plotting routines that generate figures for the stochastic
+event sets produced from CSEP2 experiments and Poisson based CSEP1 experiments.
+
+Right now the functions don't have consistent signatures, which will be addressed
+in future releases. That means that some functions might have more functionality
+than others while the routines are being developed.
+
+TODO: Add annotations for the other two plots.
+TODO: Add ability to plot annotations from multiple catalogs,
+      especially for plot_histogram().
+IDEA: The same concept mentioned in evaluations might apply here. The plots could
+      be a common class that might provide more control to the end user.
+IDEA: Since the plotting functions are usable only by classes that don't
+      implement iter routines, maybe make them class methods, like
+      data.plot_thing().
+"""
+
+
+def plot_cumulative_events_versus_time_dev(xdata, ydata, obs_data,
+                                           plot_args, show=False):
+    """
+
+
+    Args:
+        xdata (ndarray): time bins for plotting shape (N,)
+        ydata (ndarray or list like): ydata for plotting; shape (N,5) in order 2.5%Per, 25%Per, 50%Per, 75%Per, 97.5%Per
+        obs_data (ndarry): same shape as xdata
+        plot_args:
+        show:
+
+    Returns:
+
+    """
+    figsize = plot_args.get('figsize', None)
+    sim_label = plot_args.get('sim_label', 'Simulated')
+    obs_label = plot_args.get('obs_label', 'Observation')
+    legend_loc = plot_args.get('legend_loc', 'best')
+    title = plot_args.get('title', 'Cumulative Event Counts')
+    xlabel = plot_args.get('xlabel', 'Days')
+
+    fig, ax = pyplot.subplots(figsize=figsize)
+    try:
+        fifth_per = ydata[0, :]
+        first_quar = ydata[1, :]
+        med_counts = ydata[2, :]
+        second_quar = ydata[3, :]
+        nine_fifth = ydata[4, :]
+    except (IndexError, TypeError):
+        raise TypeError("ydata must be a [5,N] ndarray.")
+    # plotting
+
+    ax.plot(xdata, obs_data, color='black', label=obs_label)
+    ax.plot(xdata, med_counts, color='red', label=sim_label)
+    ax.fill_between(xdata, fifth_per, nine_fifth, color='red', alpha=0.2,
+                    label='5%-95%')
+    ax.fill_between(xdata, first_quar, second_quar, color='red', alpha=0.5,
+                    label='25%-75%')
+    ax.legend(loc=legend_loc)
+    ax.set_xlabel(xlabel)
+    ax.set_ylabel('Cumulative event count')
+    ax.set_title(title)
+    # pyplot.subplots_adjust(right=0.75)
+    # annotate the plot with information from data
+    # ax.annotate(str(observation), xycoords='axes fraction', xy=xycoords, fontsize=10, annotation_clip=False)
+    # save figure
+    filename = plot_args.get('filename', None)
+    if filename is not None:
+        fig.savefig(filename + '.pdf')
+        fig.savefig(filename + '.png', dpi=300)
+    # optionally show figure
+    if show:
+        pyplot.show()
+
+    return ax
+
+
+
+def plot_cumulative_events_versus_time(stochastic_event_sets, observation,
+                                       show=False, plot_args=None):
+    """
+    Plots cumulative event counts from a stochastic event set against an observed
+    catalog; performs the statistics on numpy arrays without using pandas data frames.
+
+    Args:
+        stochastic_event_sets: list of catalogs from a stochastic event set
+        observation: observed catalog
+        show (bool): if True, show the figure
+        plot_args (dict): optional plotting arguments
+
+    Returns:
+        ax: matplotlib.Axes
+    """
+    plot_args = plot_args or {}
+    print('Plotting cumulative event counts.')
+    figsize = plot_args.get('figsize', None)
+    fig, ax = pyplot.subplots(figsize=figsize)
+    # get global information from stochastic event set
+    t0 = time.time()
+    n_cat = len(stochastic_event_sets)
+
+    extreme_times = []
+    for ses in stochastic_event_sets:
+        start_epoch = datetime_to_utc_epoch(ses.start_time)
+        end_epoch = datetime_to_utc_epoch(ses.end_time)
+        if start_epoch is None or end_epoch is None:
+            continue
+
+        extreme_times.append((start_epoch, end_epoch))
+
+    # build time bins (in epoch milliseconds) spanning the full event set
+    time_bins, dt = numpy.linspace(numpy.min(extreme_times),
+                                   numpy.max(extreme_times), 100,
+                                   endpoint=True, retstep=True)
+    n_bins = time_bins.shape[0]
+    binned_counts = numpy.zeros((n_cat, n_bins))
+    for i, ses in enumerate(stochastic_event_sets):
+        n_events = ses.data.shape[0]
+        ses_origin_time = ses.get_epoch_times()
+        inds = bin1d_vec(ses_origin_time, time_bins)
+        for j in range(n_events):
+            binned_counts[i, inds[j]] += 1
+        if (i + 1) % 1500 == 0:
+            t1 = time.time()
+            print(f"Processed {i + 1} catalogs in {t1 - t0} seconds.")
+    t1 = time.time()
+    print(f'Collected binned counts in {t1 - t0} seconds.')
+    summed_counts = numpy.cumsum(binned_counts, axis=1)
+
+    # compute summary statistics for plotting
+    fifth_per = numpy.percentile(summed_counts, 5, axis=0)
+    first_quar = numpy.percentile(summed_counts, 25, axis=0)
+    med_counts = numpy.percentile(summed_counts, 50, axis=0)
+    second_quar = numpy.percentile(summed_counts, 75, axis=0)
+    nine_fifth = numpy.percentile(summed_counts, 95, axis=0)
+    # compute cumulative counts for the observed (ComCat) catalog
+    obs_binned_counts = numpy.zeros(n_bins)
+    inds = bin1d_vec(observation.get_epoch_times(), time_bins)
+    for j in range(observation.event_count):
+        obs_binned_counts[inds[j]] += 1
+    obs_summed_counts = numpy.cumsum(obs_binned_counts)
+
+    # offset bins to start at time zero and convert from milliseconds to days
+    millis_per_day = 60 * 60 * 1000 * 24
+    time_bins = (time_bins - time_bins[0]) / millis_per_day
+    time_bins = time_bins + (dt / millis_per_day)
+    # make all arrays start at zero
+    time_bins = numpy.insert(time_bins, 0, 0)
+    fifth_per = numpy.insert(fifth_per, 0, 0)
+    first_quar = numpy.insert(first_quar, 0, 0)
+    med_counts = numpy.insert(med_counts, 0, 0)
+    second_quar = numpy.insert(second_quar, 0, 0)
+    nine_fifth = numpy.insert(nine_fifth, 0, 0)
+    obs_summed_counts = numpy.insert(obs_summed_counts, 0, 0)
+
+    # get values from plotting args
+    sim_label = plot_args.get('sim_label', 'Simulated')
+    obs_label = plot_args.get('obs_label', 'Observation')
+    xycoords = plot_args.get('xycoords', (1.00, 0.40))
+    legend_loc = plot_args.get('legend_loc', 'best')
+    title = plot_args.get('title', 'Cumulative Event Counts')
+    # plotting
+    ax.plot(time_bins, obs_summed_counts, color='black', label=obs_label)
+    ax.plot(time_bins, med_counts, color='red', label=sim_label)
+    ax.fill_between(time_bins, fifth_per, nine_fifth, color='red', alpha=0.2,
+                    label='5%-95%')
+    ax.fill_between(time_bins, first_quar, second_quar, color='red', alpha=0.5,
+                    label='25%-75%')
+    ax.legend(loc=legend_loc)
+    ax.set_xlabel('Days since Mainshock')
+    ax.set_ylabel('Cumulative Event Count')
+    ax.set_title(title)
+    # pyplot.subplots_adjust(right=0.75)
+    # annotate the plot with information from data
+    # ax.annotate(str(observation), xycoords='axes fraction', xy=xycoords, fontsize=10, annotation_clip=False)
+    # save figure
+    filename = plot_args.get('filename', None)
+    if filename is not None:
+        fig.savefig(filename + '.pdf')
+        fig.savefig(filename + '.png', dpi=300)
+    # optionally show figure
+    if show:
+        pyplot.show()
+
+    return ax
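+# Example usage (an illustrative sketch; assumes `catalogs` is a list of
+# stochastic event set catalogs and `comcat` is an observed catalog covering
+# the same time window; the filename is hypothetical):
+#
+#     >>> ax = plot_cumulative_events_versus_time(
+#     ...     catalogs, comcat,
+#     ...     plot_args={'title': 'Cumulative counts', 'filename': 'cum_counts'})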
+
+
+def plot_magnitude_versus_time(catalog, filename=None, show=False,
+                               reset_times=False, plot_args=None, **kwargs):
+    """
+    Plots magnitude versus linear time for an earthquake catalog.
+
+    The catalog class must implement get_magnitudes() and get_datetimes() in order
+    for this function to work correctly.
+
+    Args:
+        catalog (:class:`~csep.core.catalogs.AbstractBaseCatalog`): catalog to visualize
+
+    Returns:
+        ax (matplotlib.axes.Axes): axes handle
+    """
+    # get values from plotting args
+    plot_args = plot_args or {}
+    title = plot_args.get('title', '')
+    marker_size = plot_args.get('marker_size', 10)
+    color = plot_args.get('color', 'blue')
+    c = plot_args.get('c', None)
+    clabel = plot_args.get('clabel', None)
+
+    print('Plotting magnitude versus time.')
+    fig = pyplot.figure(figsize=(8, 3))
+    ax = fig.add_subplot(111)
+
+    # get time in days
+    # plotting timestamps for now, until dates can be formatted on the axis properly
+    f = lambda x: numpy.array(x.timestamp()) / SECONDS_PER_DAY
+
+    # map returns a generator function which we collapse with list
+    days_elapsed = numpy.array(list(map(f, catalog.get_datetimes())))
+
+    if reset_times:
+        days_elapsed = days_elapsed - days_elapsed[0]
+
+    magnitudes = catalog.get_magnitudes()
+
+    # make plot
+    if c is not None:
+        h = ax.scatter(days_elapsed, magnitudes, marker='.', s=marker_size,
+                       c=c, cmap=cm.get_cmap('jet'), **kwargs)
+        cbar = fig.colorbar(h)
+        cbar.set_label(clabel)
+    else:
+        ax.scatter(days_elapsed, magnitudes, marker='.', s=marker_size,
+                   color=color, **kwargs)
+
+    # do some labeling of the figure
+    ax.set_title(title, fontsize=16, color='black')
+    ax.set_xlabel('Days Elapsed')
+    ax.set_ylabel('Magnitude')
+    fig.tight_layout()
+
+    # # annotate the plot with information from data
+    # if data is not None:
+    #     try:
+    #         ax.annotate(str(data), xycoords='axes fraction', xy=xycoords, fontsize=10, annotation_clip=False)
+    #     except:
+    #         pass
+
+    # handle displaying of figures
+    if filename is not None:
+        fig.savefig(filename + '.pdf')
+        fig.savefig(filename + '.png', dpi=300)
+
+    if show:
+        pyplot.show()
+
+    return ax
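+# Example usage (an illustrative sketch; assumes `catalog` implements
+# get_magnitudes() and get_datetimes()):
+#
+#     >>> ax = plot_magnitude_versus_time(catalog, reset_times=True,
+#     ...                                 plot_args={'marker_size': 6})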
+
+
+def plot_histogram(simulated, observation, bins='fd', percentile=None,
+                   show=False, axes=None, catalog=None, plot_args=None):
+    """
+    Plots a histogram of a single statistic for stochastic event sets and observations. The function behaves differently
+    depending on the inputs.
+
+    Simulated should always be either a list or numpy.array with one value per catalog in the stochastic event
+    set. Observation can be either a scalar or a numpy.array/list. If observation is a scalar, a vertical line is
+    plotted; if observation is iterable, a second histogram is plotted.
+
+    This allows comparisons against statistics with multiple values per catalog (e.g., magnitude) and single values
+    per catalog (e.g., event count).
+
+    If an axes handle is included, additional function calls will only add extra simulations; observations will not be
+    plotted. Since this function returns an axes handle, any extra modifications to the figure can be made using that.
+
+    Args:
+        simulated (numpy.array): numpy.array-like representation of statistics computed from catalogs.
+        observation (numpy.array or scalar): observation to plot against stochastic event set
+        bins (str): binning type. see matplotlib.hist for more info
+        percentile (float): critical region to shade on histogram
+        show (bool): show interactive version of the figure
+        axes (axis object): axis object with interface defined by matplotlib
+        catalog (csep.AbstractBaseCatalog): used for annotating the figures
+        plot_args (dict): additional plotting commands. TODO: Documentation
+
+    Returns:
+        axis: matplotlib axes handle
+    """
+    # Plotting
+    plot_args = plot_args or {}
+    chained = False
+    figsize = plot_args.get('figsize', None)
+    if axes is not None:
+        chained = True
+        ax = axes
+    else:
+        if catalog:
+            fig, ax = pyplot.subplots(figsize=figsize)
+        else:
+            fig, ax = pyplot.subplots()
+
+    # parse plotting arguments
+    sim_label = plot_args.get('sim_label', 'Simulated')
+    obs_label = plot_args.get('obs_label', 'Observation')
+    xlabel = plot_args.get('xlabel', 'X')
+    ylabel = plot_args.get('ylabel', 'Frequency')
+    xycoords = plot_args.get('xycoords', (1.00, 0.40))
+    title = plot_args.get('title', None)
+    legend_loc = plot_args.get('legend_loc', 'best')
+    legend = plot_args.get('legend', True)
+    bins = plot_args.get('bins', bins)
+    color = plot_args.get('color', '')
+    filename = plot_args.get('filename', None)
+    xlim = plot_args.get('xlim', None)
+
+    # this could throw an error exposing bad implementation
+    observation = numpy.array(observation)
+
+    try:
+        n = len(observation)
+    except TypeError:
+        ax.axvline(x=observation, color='black', linestyle='--',
+                   label=obs_label)
+    else:
+        # remove any nan values
+        observation = observation[~numpy.isnan(observation)]
+        ax.hist(observation, bins=bins, label=obs_label, edgecolor=None,
+                linewidth=0)
+
+    # remove any potential nans from arrays
+    simulated = numpy.array(simulated)
+    simulated = simulated[~numpy.isnan(simulated)]
+
+    if color:
+        n, bin_edges, patches = ax.hist(simulated, bins=bins, label=sim_label,
+                                        color=color, edgecolor=None,
+                                        linewidth=0)
+    else:
+        n, bin_edges, patches = ax.hist(simulated, bins=bins, label=sim_label,
+                                        edgecolor=None, linewidth=0)
+
+    # color bars for rejection area
+    if percentile is not None:
+        inc = (100 - percentile) / 2
+        inc_high = 100 - inc
+        inc_low = inc
+        p_high = numpy.percentile(simulated, inc_high)
+        idx_high = numpy.digitize(p_high, bin_edges)
+        p_low = numpy.percentile(simulated, inc_low)
+        idx_low = numpy.digitize(p_low, bin_edges)
+
+    # show 99.5% of data
+    if xlim is None:
+        upper_xlim = numpy.percentile(simulated, 99.75)
+        upper_xlim = numpy.max([upper_xlim, numpy.max(observation)])
+        d_bin = bin_edges[1] - bin_edges[0]
+        upper_xlim = upper_xlim + 2 * d_bin
+
+        lower_xlim = numpy.percentile(simulated, 0.25)
+        lower_xlim = numpy.min([lower_xlim, numpy.min(observation)])
+        lower_xlim = lower_xlim - 2 * d_bin
+
+        try:
+            ax.set_xlim([lower_xlim, upper_xlim])
+        except ValueError:
+            print('Ignoring observation in axis scaling because inf or -inf')
+            upper_xlim = numpy.percentile(simulated, 99.75)
+            upper_xlim = upper_xlim + 2 * d_bin
+
+            lower_xlim = numpy.percentile(simulated, 0.25)
+            lower_xlim = lower_xlim - 2 * d_bin
+
+            ax.set_xlim([lower_xlim, upper_xlim])
+    else:
+        ax.set_xlim(xlim)
+
+    ax.set_title(title)
+    ax.set_xlabel(xlabel)
+    ax.set_ylabel(ylabel)
+    if legend:
+        ax.legend(loc=legend_loc)
+
+    # hacky workaround for coloring legend, by calling after legend is drawn.
+    if percentile is not None:
+        for idx in range(idx_low):
+            patches[idx].set_fc('red')
+        for idx in range(idx_high, len(patches)):
+            patches[idx].set_fc('red')
+    if filename is not None:
+        ax.figure.savefig(filename + '.pdf')
+        ax.figure.savefig(filename + '.png', dpi=300)
+    if show:
+        pyplot.show()
+    return ax
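+
+# Illustrative usage sketch with stand-in data (numpy only; not part of the
+# module): a simulated event-count distribution against an observed count.
+#
+#   import numpy
+#   simulated_counts = numpy.random.poisson(lam=20, size=1000)
+#   ax = plot_histogram(simulated_counts, 25, percentile=95, show=True,
+#                       plot_args={'xlabel': 'Event count'})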
+ + + +
+def plot_ecdf(x, ecdf, axes=None, xv=None, show=False, plot_args=None):
+    """ Plots an empirical cumulative distribution function. """
+    plot_args = plot_args or {}
+    # get values from plotting args
+    sim_label = plot_args.get('sim_label', 'Simulated')
+    obs_label = plot_args.get('obs_label', 'Observation')
+    xlabel = plot_args.get('xlabel', 'X')
+    ylabel = plot_args.get('ylabel', '$P(X \\leq x)$')
+    legend_loc = plot_args.get('legend_loc', 'best')
+    filename = plot_args.get('filename', None)
+
+    # make figure
+    if axes is None:
+        fig, ax = pyplot.subplots()
+    else:
+        ax = axes
+        fig = axes.figure
+    ax.plot(x, ecdf, label=sim_label)
+    if xv is not None:
+        ax.axvline(x=xv, color='black', linestyle='--', label=obs_label)
+    ax.set_xlabel(xlabel)
+    ax.set_ylabel(ylabel)
+    ax.legend(loc=legend_loc)
+
+    if filename is not None:
+        fig.savefig(filename + '.pdf')
+        fig.savefig(filename + '.png', dpi=300)
+
+    if show:
+        pyplot.show()
+
+    return ax
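+
+# Illustrative usage sketch with stand-in data (not part of the module):
+#
+#   import numpy
+#   samples = numpy.sort(numpy.random.normal(size=500))
+#   ecdf_vals = numpy.arange(1, samples.size + 1) / samples.size
+#   ax = plot_ecdf(samples, ecdf_vals, xv=0.5, show=True)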
+ + + +def plot_magnitude_histogram_dev(ses_data, obs, plot_args, show=False): + bin_edges, obs_hist = obs.magnitude_counts(retbins=True) + n_obs = numpy.sum(obs_hist) + event_counts = numpy.sum(ses_data, axis=1) + # normalize all histograms by counts in each + scale = n_obs / event_counts + # use broadcasting + ses_data = ses_data * scale.reshape(-1, 1) + figsize = plot_args.get('figsize', None) + fig = pyplot.figure(figsize=figsize) + ax = fig.gca() + u3etas_median = numpy.median(ses_data, axis=0) + u3etas_low = numpy.percentile(ses_data, 2.5, axis=0) + u3etas_high = numpy.percentile(ses_data, 97.5, axis=0) + u3etas_min = numpy.min(ses_data, axis=0) + u3etas_max = numpy.max(ses_data, axis=0) + u3etas_emax = u3etas_max - u3etas_median + u3etas_emin = u3etas_median - u3etas_min + dmw = bin_edges[1] - bin_edges[0] + bin_edges_plot = bin_edges + dmw / 2 + + # u3etas_emax = u3etas_max + # plot 95% range as rectangles + rectangles = [] + for i in range(len(bin_edges)): + width = dmw / 2 + height = u3etas_high[i] - u3etas_low[i] + xi = bin_edges[i] + width / 2 + yi = u3etas_low[i] + rect = matplotlib.patches.Rectangle((xi, yi), width, height) + rectangles.append(rect) + pc = matplotlib.collections.PatchCollection(rectangles, facecolor='blue', + alpha=0.3, edgecolor='blue') + ax.add_collection(pc) + # plot whiskers + sim_label = plot_args.get('sim_label', 'Simulated Catalogs') + obs_label = plot_args.get('obs_label', 'Observed Catalog') + xlim = plot_args.get('xlim', None) + title = plot_args.get('title', "UCERF3-ETAS Histogram") + filename = plot_args.get('filename', None) + + ax.errorbar(bin_edges_plot, u3etas_median, yerr=[u3etas_emin, u3etas_emax], + xerr=0.8 * dmw / 2, fmt=' ', label=sim_label, + color='blue', alpha=0.7) + ax.plot(bin_edges_plot, obs_hist, '.k', markersize=10, label=obs_label) + ax.legend(loc='upper right') + ax.set_xlim(xlim) + ax.set_xlabel('Magnitude') + ax.set_ylabel('Event count per magnitude bin') + ax.set_title(title) + # ax.annotate(str(comcat), xycoords='axes fraction', xy=xycoords, fontsize=10, annotation_clip=False) + # pyplot.subplots_adjust(right=0.75) + if filename is not None: + fig.savefig(filename + '.pdf') + fig.savefig(filename + '.png', dpi=300) + if show: + pyplot.show() + return ax + + +
+[docs] +def plot_magnitude_histogram(catalogs, comcat, show=True, plot_args=None): + """ Generates a magnitude histogram from a catalog-based forecast """ + # get list of magnitudes list of ndarray + plot_args = plot_args or {} + catalogs_mws = list(map(lambda x: x.get_magnitudes(), catalogs)) + obs_mw = comcat.get_magnitudes() + n_obs = comcat.get_number_of_events() + + # get histogram at arbitrary values + mws = CSEP_MW_BINS + dmw = mws[1] - mws[0] + + def get_hist(x, mws, normed=True): + n_temp = len(x) + if normed and n_temp != 0: + temp_scale = n_obs / n_temp + hist = numpy.histogram(x, bins=mws)[0] * temp_scale + else: + hist = numpy.histogram(x, bins=mws)[0] + return hist + + # get hist values + u3etas_hist = numpy.array( + list(map(lambda x: get_hist(x, mws), catalogs_mws))) + obs_hist, bin_edges = numpy.histogram(obs_mw, bins=mws) + bin_edges_plot = (bin_edges[1:] + bin_edges[:-1]) / 2 + + figsize = plot_args.get('figsize', None) + fig = pyplot.figure(figsize=figsize) + ax = fig.gca() + u3etas_median = numpy.median(u3etas_hist, axis=0) + u3etas_low = numpy.percentile(u3etas_hist, 2.5, axis=0) + u3etas_high = numpy.percentile(u3etas_hist, 97.5, axis=0) + u3etas_min = numpy.min(u3etas_hist, axis=0) + u3etas_max = numpy.max(u3etas_hist, axis=0) + u3etas_emax = u3etas_max - u3etas_median + u3etas_emin = u3etas_median - u3etas_min + + # u3etas_emax = u3etas_max + # plot 95% range as rectangles + rectangles = [] + for i in range(len(mws) - 1): + width = dmw / 2 + height = u3etas_high[i] - u3etas_low[i] + xi = mws[i] + width / 2 + yi = u3etas_low[i] + rect = matplotlib.patches.Rectangle((xi, yi), width, height) + rectangles.append(rect) + pc = matplotlib.collections.PatchCollection(rectangles, facecolor='blue', + alpha=0.3, edgecolor='blue') + ax.add_collection(pc) + # plot whiskers + sim_label = plot_args.get('sim_label', 'Simulated Catalogs') + xlim = plot_args.get('xlim', None) + title = plot_args.get('title', "UCERF3-ETAS Histogram") + xycoords = plot_args.get('xycoords', (1.00, 0.40)) + filename = plot_args.get('filename', None) + + pyplot.errorbar(bin_edges_plot, u3etas_median, + yerr=[u3etas_emin, u3etas_emax], xerr=0.8 * dmw / 2, + fmt=' ', + label=sim_label, color='blue', alpha=0.7) + pyplot.plot(bin_edges_plot, obs_hist, '.k', markersize=10, label='Comcat') + pyplot.legend(loc='upper right') + pyplot.xlim(xlim) + pyplot.xlabel('Mw') + pyplot.ylabel('Count') + pyplot.title(title) + # ax.annotate(str(comcat), xycoords='axes fraction', xy=xycoords, fontsize=10, annotation_clip=False) + pyplot.subplots_adjust(right=0.75) + if filename is not None: + fig.savefig(filename + '.pdf') + fig.savefig(filename + '.png', dpi=300) + if show: + pyplot.show()
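+
+# Illustrative usage sketch (the forecast file path is hypothetical; a
+# catalog-based forecast yields one catalog per simulation when iterated):
+#
+#   import csep
+#   forecast = csep.load_catalog_forecast('ucerf3_catalogs.bin')  # hypothetical path
+#   comcat = csep.query_comcat(forecast.start_time, forecast.end_time)
+#   plot_magnitude_histogram(list(forecast), comcat, show=True)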
+ + + +
+def plot_basemap(basemap, extent, ax=None, figsize=None, coastline=True,
+                 borders=False, tile_scaling='auto',
+                 set_global=False, projection=ccrs.PlateCarree(), apprx=False,
+                 central_latitude=0.0,
+                 linecolor='black', linewidth=1.5,
+                 grid=False, grid_labels=False, grid_fontsize=None,
+                 show=False):
+    """ Wrapper function for multiple cartopy base plots, including access to standard raster webservices
+
+    Args:
+        basemap (str): Possible values are: stock_img, stamen_terrain, stamen_terrain-background, google-satellite, ESRI_terrain, ESRI_imagery, ESRI_relief, ESRI_topo, or a webservice link (see examples in :func:`csep.utils.plots._get_basemap`). Default is None
+        extent (list): [lon_min, lon_max, lat_min, lat_max]
+        ax (:class:`matplotlib.pyplot.ax`): Previously defined ax object
+        figsize (tuple): If no ax is provided, a tuple of floats can be provided to define figure size
+        coastline (bool): Flag to plot coastline. default True,
+        borders (bool): Flag to plot country borders. default False,
+        tile_scaling (str/int): Zoom level (1-12) of the basemap tiles. If 'auto', is automatically derived from extent
+        set_global (bool): Display the complete globe as basemap
+        projection (:class:`cartopy.crs.Projection`): Projection to be used in the basemap
+        apprx (bool): If true, approximates transformation by setting aspect ratio of axes based on middle latitude
+        central_latitude (float): average latitude from plotting region
+        linecolor (str): Color of borders and coast lines. default 'black',
+        linewidth (float): Line width of borders and coast lines. default 1.5,
+        grid (bool): Draws a grid in the basemap
+        grid_labels (bool): Annotate grid values
+        grid_fontsize (float): Font size of the grid x and y labels
+        show (bool): Flag if the figure is displayed
+
+    Returns:
+        :class:`matplotlib.pyplot.ax` object
+
+    """
+    if ax is None:
+        if apprx:
+            projection = ccrs.PlateCarree()
+            fig = pyplot.figure(figsize=figsize)
+            ax = fig.add_subplot(111, projection=projection)
+            # Set plot aspect according to local longitude-latitude ratio in
+            # metric units (only compatible with plain PlateCarree "projection")
+            LATKM = 110.574  # length of a ° of latitude [km]; constant --> ignores Earth's flattening
+            ax.set_aspect(
+                LATKM / (111.320 * numpy.cos(numpy.deg2rad(central_latitude))))
+        else:
+            fig = pyplot.figure(figsize=figsize)
+            ax = fig.add_subplot(111, projection=projection)
+
+    if set_global:
+        ax.set_global()
+    else:
+        ax.set_extent(extents=extent, crs=ccrs.PlateCarree())
+
+    try:
+        # Set adaptive scaling
+        line_autoscaler = cartopy.feature.AdaptiveScaler('110m', (
+            ('50m', 50), ('10m', 5)))
+        tile_autoscaler = cartopy.feature.AdaptiveScaler(5, ((6, 50), (7, 15)))
+        tiles = None
+        # Set tile depth
+        if tile_scaling == 'auto':
+            tile_depth = 4 if set_global else tile_autoscaler.scale_from_extent(
+                extent)
+        else:
+            tile_depth = tile_scaling
+        if coastline:
+            ax.coastlines(color=linecolor, linewidth=linewidth)
+        if borders:
+            borders = cartopy.feature.NaturalEarthFeature('cultural',
+                                                          'admin_0_boundary_lines_land',
+                                                          line_autoscaler,
+                                                          edgecolor=linecolor,
+                                                          facecolor='never')
+            ax.add_feature(borders, linewidth=linewidth)
+        if basemap == 'stock_img':
+            ax.stock_img()
+        elif basemap is not None:
+            tiles = _get_basemap(basemap)
+            if tiles:
+                ax.add_image(tiles, tile_depth)
+    except Exception:
+        print("Unable to plot basemap. This might be due to a lack of "
+              "internet access; try pre-downloading the files.")
+
+    # Gridline options
+    if grid:
+        gl = ax.gridlines(draw_labels=grid_labels, alpha=0.5)
+        gl.right_labels = False
+        gl.top_labels = False
+        gl.xlabel_style['fontsize'] = grid_fontsize
+        gl.ylabel_style['fontsize'] = grid_fontsize
+        gl.xformatter = LONGITUDE_FORMATTER
+        gl.yformatter = LATITUDE_FORMATTER
+
+    if show:
+        pyplot.show()
+
+    return ax
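+
+# Illustrative usage sketch (the extent roughly covers California; values are
+# arbitrary for the example):
+#
+#   ax = plot_basemap('ESRI_terrain', [-125.0, -113.0, 31.0, 43.0],
+#                     coastline=True, borders=True, grid=True,
+#                     grid_labels=True, show=True)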
+ + + +
+[docs] +def plot_catalog(catalog, ax=None, show=False, extent=None, set_global=False, + plot_args=None): + """ Plot catalog in a region + + Args: + catalog (:class:`CSEPCatalog`): Catalog object to be plotted + ax (:class:`matplotlib.pyplot.ax`): Previously defined ax object (e.g from plot_spatial_dataset) + show (bool): Flag if the figure is displayed + extent (list): default 1.05-:func:`catalog.region.get_bbox()` + set_global (bool): Display the complete globe as basemap + plot_args (dict): matplotlib and cartopy plot arguments. The dictionary keys are str, whose items can be: + + - :figsize: :class:`tuple`/:class:`list` - default [6.4, 4.8] + - :title: :class:`str` - default :class:`catalog.name` + - :title_size: :class:`int` - default 10 + - :filename: :class:`str` - File to save figure. default None + - :projection: :class:`cartopy.crs.Projection` - default :class:`cartopy.crs.PlateCarree`. Note: this can be + 'fast' to apply an approximate transformation of axes. + - :basemap: :class:`str`/:class:`None`. Possible values are: stock_img, stamen_terrain, stamen_terrain-background, google-satellite, ESRI_terrain, ESRI_imagery, ESRI_relief, ESRI_topo, ESRI_terrain, or webservice link. Default is None + - :coastline: :class:`bool` - Flag to plot coastline. default True, + - :grid: :class:`bool` - default True + - :grid_labels: :class:`bool` - default True + - :grid_fontsize: :class:`float` - default 10.0 + - :marker: :class:`str` - Marker type + - :markersize: :class:`float` - Constant size for all earthquakes + - :markercolor: :class:`str` - Color for all earthquakes + - :borders: :class:`bool` - Flag to plot country borders. default False, + - :region_border: :class:`bool` - Flag to plot the catalog region border. default True, + - :alpha: :class:`float` - Transparency for the earthquakes scatter + - :mag_scale: :class:`float` - Scaling of the scatter + - :legend: :class:`bool` - Flag to display the legend box + - :legend_loc: :class:`int`/:class:`str` - Position of the legend + - :mag_ticks: :class:`list` - Ticks to display in the legend + - :labelspacing: :class:`int` - Separation between legend ticks + - :tile_scaling: :class:`str`/:class:`int`. Zoom level (1-12) of the basemap tiles. If 'auto', is automatically derived from extent + - :linewidth: :class:`float` - Line width of borders and coast lines. default 1.5, + - :linecolor: :class:`str` - Color of borders and coast lines. 
default 'black', + + Returns: + :class:`matplotlib.pyplot.ax` object + + """ + # Get spatial information for plotting + + # Retrieve plot arguments + plot_args = plot_args or {} + # figure and axes properties + figsize = plot_args.get('figsize', None) + title = plot_args.get('title', catalog.name) + title_size = plot_args.get('title_size', None) + filename = plot_args.get('filename', None) + # scatter properties + markersize = plot_args.get('markersize', 2) + markercolor = plot_args.get('markercolor', 'blue') + markeredgecolor = plot_args.get('markeredgecolor', 'black') + alpha = plot_args.get('alpha', 1) + mag_scale = plot_args.get('mag_scale', 1) + legend = plot_args.get('legend', False) + legend_title = plot_args.get('legend_title', r"Magnitudes") + legend_loc = plot_args.get('legend_loc', 1) + legend_framealpha = plot_args.get('legend_framealpha', None) + legend_fontsize = plot_args.get('legend_fontsize', None) + legend_titlesize = plot_args.get('legend_titlesize', None) + mag_ticks = plot_args.get('mag_ticks', False) + labelspacing = plot_args.get('labelspacing', 1) + region_border = plot_args.get('region_border', True) + legend_borderpad = plot_args.get('legend_borderpad', 0.4) + # cartopy properties + projection = plot_args.get('projection', + ccrs.PlateCarree(central_longitude=0.0)) + basemap = plot_args.get('basemap', None) + coastline = plot_args.get('coastline', True) + grid = plot_args.get('grid', True) + grid_labels = plot_args.get('grid_labels', False) + grid_fontsize = plot_args.get('grid_fontsize', False) + borders = plot_args.get('borders', False) + tile_scaling = plot_args.get('tile_scaling', 'auto') + linewidth = plot_args.get('linewidth', True) + linecolor = plot_args.get('linecolor', 'black') + + bbox = catalog.get_bbox() + if region_border: + try: + bbox = catalog.region.get_bbox() + except AttributeError: + pass + + if extent is None and not set_global: + dh = (bbox[1] - bbox[0]) / 20. + dv = (bbox[3] - bbox[2]) / 20. 
+        extent = [bbox[0] - dh, bbox[1] + dh, bbox[2] - dv, bbox[3] + dv]
+
+    apprx = False
+    central_latitude = 0.0
+    if projection == 'fast':
+        projection = ccrs.PlateCarree()
+        apprx = True
+        n_lats = len(catalog.region.ys) // 2
+        central_latitude = catalog.region.ys[n_lats]
+
+    # Instantiate GeoAxes object
+    if ax is None:
+        fig = pyplot.figure(figsize=figsize)
+        ax = fig.add_subplot(111, projection=projection)
+
+    if set_global:
+        ax.set_global()
+        region_border = False
+    else:
+        ax.set_extent(extents=extent,
+                      crs=ccrs.PlateCarree())  # Defined extent always in lat/lon
+
+    # Basemap plotting
+    ax = plot_basemap(basemap, extent, ax=ax, coastline=coastline,
+                      borders=borders, tile_scaling=tile_scaling,
+                      linecolor=linecolor, linewidth=linewidth,
+                      projection=projection, apprx=apprx,
+                      central_latitude=central_latitude, set_global=set_global)
+
+    # Scaling function
+    mw_range = [min(catalog.get_magnitudes()), max(catalog.get_magnitudes())]
+
+    def size_map(markersize, values, scale):
+        if isinstance(scale, (int, float)):
+            return (markersize / (scale ** mw_range[0]) * numpy.power(values,
+                                                                      scale))
+        elif isinstance(scale, (numpy.ndarray, list)):
+            return scale
+        else:
+            raise ValueError('scale data type not supported')
+
+    # Plot catalog
+    scatter = ax.scatter(catalog.get_longitudes(), catalog.get_latitudes(),
+                         s=size_map(markersize, catalog.get_magnitudes(),
+                                    mag_scale),
+                         transform=cartopy.crs.PlateCarree(),
+                         color=markercolor,
+                         edgecolors=markeredgecolor,
+                         alpha=alpha)
+
+    # Legend
+    if legend:
+        if isinstance(mag_ticks, (tuple, list, numpy.ndarray)):
+            if not numpy.all([mw_range[0] <= i <= mw_range[1] for i in
+                              mag_ticks]):
+                print(
+                    "Magnitude ticks do not lie within the catalog magnitude range")
+        elif mag_ticks is False:
+            mag_ticks = numpy.linspace(mw_range[0], mw_range[1], 4)
+        handles, labels = scatter.legend_elements(prop="sizes",
+                                                  num=list(size_map(markersize,
+                                                                    mag_ticks,
+                                                                    mag_scale)),
+                                                  alpha=0.3)
+        ax.legend(handles, numpy.round(mag_ticks, 1),
+                  loc=legend_loc, title=legend_title, fontsize=legend_fontsize,
+                  title_fontsize=legend_titlesize,
+                  labelspacing=labelspacing, handletextpad=5,
+                  borderpad=legend_borderpad, framealpha=legend_framealpha)
+
+    if region_border:
+        try:
+            pts = catalog.region.tight_bbox()
+            ax.plot(pts[:, 0], pts[:, 1], lw=1, color='black')
+        except AttributeError:
+            pass
+
+    # Gridline options
+    if grid:
+        gl = ax.gridlines(draw_labels=grid_labels, alpha=0.5)
+        gl.right_labels = False
+        gl.top_labels = False
+        gl.xlabel_style['fontsize'] = grid_fontsize
+        gl.ylabel_style['fontsize'] = grid_fontsize
+        gl.xformatter = LONGITUDE_FORMATTER
+        gl.yformatter = LATITUDE_FORMATTER
+
+    # Figure options
+    ax.set_title(title, fontsize=title_size, y=1.06)
+    if filename is not None:
+        ax.get_figure().savefig(filename + '.pdf')
+        ax.get_figure().savefig(filename + '.png', dpi=300)
+    if show:
+        pyplot.show()
+
+    return ax
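+
+# Illustrative usage sketch (assumes `cat` is a catalog obtained as in the
+# plot_magnitude_versus_time example above; plot settings are arbitrary):
+#
+#   ax = plot_catalog(cat, show=True,
+#                     plot_args={'basemap': 'stock_img',
+#                                'legend': True,
+#                                'mag_scale': 7})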
+ + + +
+[docs] +def plot_spatial_dataset(gridded, region, ax=None, show=False, extent=None, + set_global=False, plot_args=None): + """ Plot spatial dataset such as data from a gridded forecast + + Args: + gridded (2D :class:`numpy.array`): Values according to `region`, + region (:class:`CartesianGrid2D`): Region in which gridded values are contained + show (bool): Flag if the figure is displayed + extent (list): default :func:`forecast.region.get_bbox()` + set_global (bool): Display the complete globe as basemap + plot_args (dict): matplotlib and cartopy plot arguments. Dict keys are str, whose values can be: + + - :figsize: :class:`tuple`/:class:`list` - default [6.4, 4.8] + - :title: :class:`str` - default None + - :title_size: :class:`int` - default 10 + - :filename: :class:`str` - default None + - :projection: :class:`cartopy.crs.Projection` - default :class:`cartopy.crs.PlateCarree` + - :grid: :class:`bool` - default True + - :grid_labels: :class:`bool` - default True + - :grid_fontsize: :class:`float` - default 10.0 + - :basemap: :class:`str`. Possible values are: stock_img, stamen_terrain, stamen_terrain-background, google-satellite, ESRI_terrain, ESRI_imagery, ESRI_relief, ESRI_topo, ESRI_terrain, or webservice link. Default is None + - :coastline: :class:`bool` - Flag to plot coastline. default True, + - :borders: :class:`bool` - Flag to plot country borders. default False, + - :region_border: :class:`bool` - Flag to plot the dataset region border. default True, + - :tile_scaling: :class:`str`/:class:`int`. Zoom level (1-12) of the basemap tiles. If 'auto', is automatically derived from extent + - :linewidth: :class:`float` - Line width of borders and coast lines. default 1.5, + - :linecolor: :class:`str` - Color of borders and coast lines. default 'black', + - :cmap: :class:`str`/:class:`pyplot.colors.Colormap` - default 'viridis' + - :clim: :class:`list` - Range of the colorbar. default None + - :clabel: :class:`str` - Label of the colorbar. default None + - :clabel_fontsize: :class:`float` - default None + - :cticks_fontsize: :class:`float` - default None + - :alpha: :class:`float` - default 1 + - :alpha_exp: :class:`float` - Exponent for the alpha func (recommended between 0.4 and 1). 
default 0 + + + Returns: + :class:`matplotlib.pyplot.ax` object + + + """ + # Get spatial information for plotting + bbox = region.get_bbox() + if extent is None and not set_global: + extent = [bbox[0], bbox[1], bbox[2], bbox[3]] + + # Retrieve plot arguments + plot_args = plot_args or {} + # figure and axes properties + figsize = plot_args.get('figsize', None) + title = plot_args.get('title', None) + title_size = plot_args.get('title_size', None) + filename = plot_args.get('filename', None) + # cartopy properties + projection = plot_args.get('projection', + ccrs.PlateCarree(central_longitude=0.0)) + grid = plot_args.get('grid', True) + grid_labels = plot_args.get('grid_labels', False) + grid_fontsize = plot_args.get('grid_fontsize', False) + basemap = plot_args.get('basemap', None) + coastline = plot_args.get('coastline', True) + borders = plot_args.get('borders', False) + tile_scaling = plot_args.get('tile_scaling', 'auto') + linewidth = plot_args.get('linewidth', True) + linecolor = plot_args.get('linecolor', 'black') + region_border = plot_args.get('region_border', True) + # color bar properties + include_cbar = plot_args.get('include_cbar', True) + cmap = plot_args.get('cmap', None) + clim = plot_args.get('clim', None) + clabel = plot_args.get('clabel', None) + clabel_fontsize = plot_args.get('clabel_fontsize', None) + cticks_fontsize = plot_args.get('cticks_fontsize', None) + alpha = plot_args.get('alpha', 1) + alpha_exp = plot_args.get('alpha_exp', 0) + + apprx = False + central_latitude = 0.0 + if projection == 'fast': + projection = ccrs.PlateCarree() + apprx = True + n_lats = len(region.ys) // 2 + central_latitude = region.ys[n_lats] + + # Instantiate GeoAxes object + if ax is None: + fig = pyplot.figure(figsize=figsize) + ax = fig.add_subplot(111, projection=projection) + else: + fig = ax.get_figure() + + if set_global: + ax.set_global() + region_border = False + else: + ax.set_extent(extents=extent, + crs=ccrs.PlateCarree()) # Defined extent always in lat/lon + + # Basemap plotting + ax = plot_basemap(basemap, extent, ax=ax, coastline=coastline, + borders=borders, + linecolor=linecolor, linewidth=linewidth, + projection=projection, apprx=apprx, + central_latitude=central_latitude, + tile_scaling=tile_scaling, set_global=set_global) + + ## Define colormap and transparency function + if isinstance(cmap, str) or not cmap: + cmap = pyplot.get_cmap(cmap) + cmap_tup = cmap(numpy.arange(cmap.N)) + if isinstance(alpha_exp, (float, int)): + if alpha_exp != 0: + cmap_tup[:, -1] = numpy.linspace(0, 1, cmap.N) ** alpha_exp + alpha = None + cmap = matplotlib.colors.ListedColormap(cmap_tup) + + ## Plot spatial dataset + lons, lats = numpy.meshgrid(numpy.append(region.xs, bbox[1]), + numpy.append(region.ys, bbox[3])) + im = ax.pcolor(lons, lats, gridded, cmap=cmap, alpha=alpha, snap=True, + transform=ccrs.PlateCarree()) + im.set_clim(clim) + + # Colorbar options + # create an axes on the right side of ax. The width of cax will be 5% + # of ax and the padding between cax and ax will be fixed at 0.05 inch. 
+ if include_cbar: + cax = fig.add_axes( + [ax.get_position().x1 + 0.01, ax.get_position().y0, 0.025, + ax.get_position().height], + label='Colorbar') + cbar = fig.colorbar(im, ax=ax, cax=cax) + cbar.set_label(clabel, fontsize=clabel_fontsize) + cbar.ax.tick_params(labelsize=cticks_fontsize) + + # Gridline options + if grid: + gl = ax.gridlines(draw_labels=grid_labels, alpha=0.5) + gl.right_labels = False + gl.top_labels = False + gl.xlabel_style['fontsize'] = grid_fontsize + gl.ylabel_style['fontsize'] = grid_fontsize + gl.xformatter = LONGITUDE_FORMATTER + gl.yformatter = LATITUDE_FORMATTER + + if region_border: + pts = region.tight_bbox() + ax.plot(pts[:, 0], pts[:, 1], lw=1, color='black', + transform=ccrs.PlateCarree()) + + # matplotlib figure options + ax.set_title(title, y=1.06, fontsize=title_size) + if filename is not None: + ax.get_figure().savefig(filename + '.pdf') + ax.get_figure().savefig(filename + '.png', dpi=300) + if show: + pyplot.show() + + return ax
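+
+# Illustrative usage sketch (assumptions: `helmstetter_mainshock_fname` ships
+# with the package datasets and `spatial_counts` is available on the loaded
+# gridded forecast):
+#
+#   import numpy
+#   import csep
+#   from csep.utils import datasets
+#   fcst = csep.load_gridded_forecast(datasets.helmstetter_mainshock_fname)
+#   ax = plot_spatial_dataset(numpy.log10(fcst.spatial_counts(cartesian=True)),
+#                             fcst.region, show=True,
+#                             plot_args={'clabel': 'log10 event rate'})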
+ + + +
+def plot_number_test(evaluation_result, axes=None, show=True, plot_args=None):
+    """
+    Takes a result from an evaluation and generates a specific histogram plot to show the results of the statistical evaluation
+    for the n-test.
+
+    Args:
+        evaluation_result: object-like var that implements the interface of the above EvaluationResult
+        axes (matplotlib.Axes): axes object used to chain this plot
+        show (bool): if true, call pyplot.show()
+        plot_args (dict): optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below
+
+    Optional plotting arguments:
+        * figsize: (:class:`list`/:class:`tuple`) - default: [6.4, 4.8]
+        * title: (:class:`str`) - default: 'Number Test'
+        * title_fontsize: (:class:`float`) Fontsize of the plot title - default: 10
+        * xlabel: (:class:`str`) - default: 'X'
+        * xlabel_fontsize: (:class:`float`) - default: 10
+        * xticks_fontsize: (:class:`float`) - default: 10
+        * ylabel_fontsize: (:class:`float`) - default: 10
+        * text_fontsize: (:class:`float`) - default: 14
+        * tight_layout: (:class:`bool`) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True
+        * percentile: (:class:`float`) Critical region to shade on histogram - default: 95
+        * bins: (:class:`str`) Binning type. see matplotlib.hist for more info - default: 'auto'
+        * xy: (:class:`list`/:class:`tuple`) - default: (0.5, 0.3)
+
+    Returns:
+        ax (matplotlib.axes.Axes): can be used to modify the figure
+
+    """
+
+    # chain plotting axes if requested
+    if axes:
+        chained = True
+    else:
+        chained = False
+
+    # default plotting arguments
+    plot_args = plot_args or {}
+    title = plot_args.get('title', 'Number Test')
+    title_fontsize = plot_args.get('title_fontsize', None)
+    xlabel = plot_args.get('xlabel', 'Event count of catalogs')
+    xlabel_fontsize = plot_args.get('xlabel_fontsize', None)
+    ylabel = plot_args.get('ylabel', 'Number of catalogs')
+    ylabel_fontsize = plot_args.get('ylabel_fontsize', None)
+    text_fontsize = plot_args.get('text_fontsize', 14)
+    tight_layout = plot_args.get('tight_layout', True)
+    percentile = plot_args.get('percentile', 95)
+    filename = plot_args.get('filename', None)
+    bins = plot_args.get('bins', 'auto')
+    xy = plot_args.get('xy', (0.5, 0.3))
+
+    # set default plotting arguments
+    fixed_plot_args = {'obs_label': evaluation_result.obs_name,
+                       'sim_label': evaluation_result.sim_name}
+    plot_args.update(fixed_plot_args)
+    ax = plot_histogram(evaluation_result.test_distribution,
+                        evaluation_result.observed_statistic,
+                        catalog=evaluation_result.obs_catalog_repr,
+                        plot_args=plot_args,
+                        bins=bins,
+                        axes=axes,
+                        percentile=percentile)
+
+    # annotate plot with p-values
+    if not chained:
+        try:
+            ax.annotate('$\\delta_1 = P(X \\geq x) = {:.2f}$\n'
+                        '$\\delta_2 = P(X \\leq x) = {:.2f}$\n'
+                        '$\\omega = {:d}$'
+                        .format(*evaluation_result.quantile,
+                                evaluation_result.observed_statistic),
+                        xycoords='axes fraction',
+                        xy=xy,
+                        fontsize=text_fontsize)
+        except (TypeError, ValueError):
+            # fall back to a single quantile when only one value is provided
+            ax.annotate('$\\gamma = P(X \\leq x) = {:.2f}$\n'
+                        '$\\omega = {:.2f}$'
+                        .format(evaluation_result.quantile,
+                                evaluation_result.observed_statistic),
+                        xycoords='axes fraction',
+                        xy=xy,
+                        fontsize=text_fontsize)
+
+    ax.set_title(title, fontsize=title_fontsize)
+    ax.set_xlabel(xlabel, fontsize=xlabel_fontsize)
+    ax.set_ylabel(ylabel, fontsize=ylabel_fontsize)
+
+    if tight_layout:
+        ax.figure.tight_layout()
+
+    if filename is not None:
+        ax.figure.savefig(filename + '.pdf')
+        ax.figure.savefig(filename + '.png', dpi=300)
+
+    # this function has different return types; before release, refactor and
+    # remove plotting from evaluation. Plotting should be separated from
+    # evaluation; evaluation should return an object that can be plotted,
+    # maybe with a verbose option.
+    if show:
+        pyplot.show()
+
+    return ax
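+
+# Illustrative usage sketch (assumes a catalog-based `forecast` and an
+# `observed_catalog` prepared elsewhere):
+#
+#   from csep.core import catalog_evaluations
+#   result = catalog_evaluations.number_test(forecast, observed_catalog)
+#   ax = plot_number_test(result, show=True)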
+ + + +
+def plot_magnitude_test(evaluation_result, axes=None, show=True,
+                        plot_args=None):
+    """
+    Takes a result from an evaluation and generates a specific histogram plot to show the results of the statistical evaluation
+    for the M-test.
+
+    Args:
+        evaluation_result: object-like var that implements the interface of the above EvaluationResult
+        axes (matplotlib.Axes): axes object used to chain this plot
+        show (bool): if true, call pyplot.show()
+        plot_args (dict): optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below
+
+    Optional plotting arguments:
+        * figsize: (:class:`list`/:class:`tuple`) - default: [6.4, 4.8]
+        * title: (:class:`str`) - default: 'Magnitude Test'
+        * title_fontsize: (:class:`float`) Fontsize of the plot title - default: 10
+        * xlabel: (:class:`str`) - default: 'D* Statistic'
+        * xlabel_fontsize: (:class:`float`) - default: 10
+        * xticks_fontsize: (:class:`float`) - default: 10
+        * ylabel_fontsize: (:class:`float`) - default: 10
+        * tight_layout: (:class:`bool`) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True
+        * percentile: (:class:`float`) Critical region to shade on histogram - default: 95
+        * bins: (:class:`str`) Binning type. see matplotlib.hist for more info - default: 'auto'
+        * xy: (:class:`list`/:class:`tuple`) - default: (0.55, 0.6)
+
+    Returns:
+        ax (matplotlib.Axes): containing the new plot
+
+    """
+    plot_args = plot_args or {}
+    title = plot_args.get('title', 'Magnitude Test')
+    title_fontsize = plot_args.get('title_fontsize', None)
+    xlabel = plot_args.get('xlabel', 'D* Statistic')
+    xlabel_fontsize = plot_args.get('xlabel_fontsize', None)
+    ylabel = plot_args.get('ylabel', 'Number of catalogs')
+    ylabel_fontsize = plot_args.get('ylabel_fontsize', None)
+    tight_layout = plot_args.get('tight_layout', True)
+    percentile = plot_args.get('percentile', 95)
+    text_fontsize = plot_args.get('text_fontsize', 14)
+    filename = plot_args.get('filename', None)
+    bins = plot_args.get('bins', 'auto')
+    xy = plot_args.get('xy', (0.55, 0.6))
+
+    # handle plotting
+    if axes:
+        chained = True
+    else:
+        chained = False
+
+    # supply fixed arguments to plots; might want to add other defaults here
+    fixed_plot_args = {'obs_label': evaluation_result.obs_name,
+                       'sim_label': evaluation_result.sim_name}
+    plot_args.update(fixed_plot_args)
+    ax = plot_histogram(evaluation_result.test_distribution,
+                        evaluation_result.observed_statistic,
+                        catalog=evaluation_result.obs_catalog_repr,
+                        plot_args=plot_args,
+                        bins=bins,
+                        axes=axes,
+                        percentile=percentile)
+
+    # annotate plot with quantile values
+    if not chained:
+        try:
+            ax.annotate('$\\gamma = P(X \\geq x) = {:.2f}$\n'
+                        '$\\omega = {:.2f}$'
+                        .format(evaluation_result.quantile,
+                                evaluation_result.observed_statistic),
+                        xycoords='axes fraction',
+                        xy=xy,
+                        fontsize=text_fontsize)
+        except TypeError:
+            # if both quantiles are provided, plot the greater-equal quantile
+            ax.annotate('$\\gamma = P(X \\geq x) = {:.2f}$\n'
+                        '$\\omega = {:.2f}$'
+                        .format(evaluation_result.quantile[0],
+                                evaluation_result.observed_statistic),
+                        xycoords='axes fraction',
+                        xy=xy,
+                        fontsize=text_fontsize)
+
+    ax.set_title(title, fontsize=title_fontsize)
+    ax.set_xlabel(xlabel, fontsize=xlabel_fontsize)
+    ax.set_ylabel(ylabel, fontsize=ylabel_fontsize)
+
+    if tight_layout:
+        ax.get_figure().tight_layout()
+
+    if filename is not None:
+        ax.figure.savefig(filename + '.pdf')
+        ax.figure.savefig(filename + '.png', dpi=300)
+
+    # this function has different return types; before release, refactor and
+    # remove plotting from evaluation. Plotting should be separated from
+    # evaluation; evaluation should return an object that can be plotted,
+    # maybe with a verbose option.
+    if show:
+        pyplot.show()
+
+    return ax
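+
+# Illustrative usage sketch (assumes a catalog-based `forecast` and an
+# `observed_catalog` prepared elsewhere):
+#
+#   from csep.core import catalog_evaluations
+#   result = catalog_evaluations.magnitude_test(forecast, observed_catalog)
+#   ax = plot_magnitude_test(result, show=True)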
+ + + +
+def plot_distribution_test(evaluation_result, axes=None, show=True,
+                           plot_args=None):
+    """
+    Takes a result from an evaluation and generates a histogram plot showing the
+    test distribution against the observed statistic.
+
+    Args:
+        evaluation_result: object-like var that implements the interface of the above EvaluationResult
+
+    Returns:
+        ax (matplotlib.axes.Axes): can be used to modify the figure
+
+    """
+    plot_args = plot_args or {}
+    # handle plotting
+    if axes:
+        chained = True
+    else:
+        chained = False
+    # supply fixed arguments to plots; might want to add other defaults here
+    filename = plot_args.get('filename', None)
+    xlabel = plot_args.get('xlabel', '')
+    ylabel = plot_args.get('ylabel', '')
+    fixed_plot_args = {'obs_label': evaluation_result.obs_name,
+                       'sim_label': evaluation_result.sim_name}
+    plot_args.update(fixed_plot_args)
+    bins = plot_args.get('bins', 'auto')
+    percentile = plot_args.get('percentile', 95)
+    ax = plot_histogram(evaluation_result.test_distribution,
+                        evaluation_result.observed_statistic,
+                        catalog=evaluation_result.obs_catalog_repr,
+                        plot_args=plot_args,
+                        bins=bins,
+                        axes=axes,
+                        percentile=percentile)
+
+    # annotate plot with p-values
+    if not chained:
+        ax.annotate('$\\gamma = P(X \\leq x) = {:.3f}$\n'
+                    '$\\omega$ = {:.3f}'
+                    .format(evaluation_result.quantile,
+                            evaluation_result.observed_statistic),
+                    xycoords='axes fraction',
+                    xy=(0.5, 0.3),
+                    fontsize=14)
+
+    title = plot_args.get('title', evaluation_result.name)
+    ax.set_title(title, fontsize=14)
+    ax.set_xlabel(xlabel)
+    ax.set_ylabel(ylabel)
+
+    if filename is not None:
+        ax.figure.savefig(filename + '.pdf')
+        ax.figure.savefig(filename + '.png', dpi=300)
+
+    # this function has different return types; before release, refactor and
+    # remove plotting from evaluation. Plotting should be separated from
+    # evaluation; evaluation should return an object that can be plotted,
+    # maybe with a verbose option.
+    if show:
+        pyplot.show()
+
+    return ax
+ + + +
+def plot_likelihood_test(evaluation_result, axes=None, show=True,
+                         plot_args=None):
+    """
+    Takes a result from an evaluation and generates a specific histogram plot to show the results of the statistical evaluation
+    for the L-test.
+
+    Args:
+        evaluation_result: object-like var that implements the interface of the above EvaluationResult
+        axes (matplotlib.Axes): axes object used to chain this plot
+        show (bool): if true, call pyplot.show()
+        plot_args (dict): optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below
+
+    Optional plotting arguments:
+        * figsize: (:class:`list`/:class:`tuple`) - default: [6.4, 4.8]
+        * title: (:class:`str`) - default: 'Pseudo-likelihood Test'
+        * title_fontsize: (:class:`float`) Fontsize of the plot title - default: 10
+        * xlabel: (:class:`str`) - default: 'Pseudo likelihood'
+        * xlabel_fontsize: (:class:`float`) - default: 10
+        * xticks_fontsize: (:class:`float`) - default: 10
+        * ylabel_fontsize: (:class:`float`) - default: 10
+        * text_fontsize: (:class:`float`) - default: 14
+        * tight_layout: (:class:`bool`) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True
+        * percentile: (:class:`float`) Critical region to shade on histogram - default: 95
+        * bins: (:class:`str`) Binning type. see matplotlib.hist for more info - default: 'auto'
+        * xy: (:class:`list`/:class:`tuple`) - default: (0.55, 0.3)
+
+    Returns:
+        ax (matplotlib.axes.Axes): can be used to modify the figure
+    """
+    plot_args = plot_args or {}
+    title = plot_args.get('title', 'Pseudo-likelihood Test')
+    title_fontsize = plot_args.get('title_fontsize', None)
+    xlabel = plot_args.get('xlabel', 'Pseudo likelihood')
+    xlabel_fontsize = plot_args.get('xlabel_fontsize', None)
+    ylabel = plot_args.get('ylabel', 'Number of catalogs')
+    ylabel_fontsize = plot_args.get('ylabel_fontsize', None)
+    text_fontsize = plot_args.get('text_fontsize', 14)
+    tight_layout = plot_args.get('tight_layout', True)
+    percentile = plot_args.get('percentile', 95)
+    filename = plot_args.get('filename', None)
+    bins = plot_args.get('bins', 'auto')
+    xy = plot_args.get('xy', (0.55, 0.3))
+
+    # handle plotting
+    if axes:
+        chained = True
+    else:
+        chained = False
+    # supply fixed arguments to plots; might want to add other defaults here
+    fixed_plot_args = {'obs_label': evaluation_result.obs_name,
+                       'sim_label': evaluation_result.sim_name}
+    plot_args.update(fixed_plot_args)
+    ax = plot_histogram(evaluation_result.test_distribution,
+                        evaluation_result.observed_statistic,
+                        catalog=evaluation_result.obs_catalog_repr,
+                        plot_args=plot_args,
+                        bins=bins,
+                        axes=axes,
+                        percentile=percentile)
+
+    # annotate plot with p-values
+    if not chained:
+        try:
+            ax.annotate('$\\gamma = P(X \\leq x) = {:.2f}$\n'
+                        '$\\omega = {:.2f}$'
+                        .format(evaluation_result.quantile,
+                                evaluation_result.observed_statistic),
+                        xycoords='axes fraction',
+                        xy=xy,
+                        fontsize=text_fontsize)
+        except TypeError:
+            # if both quantiles are provided, plot the less-equal quantile
+            ax.annotate('$\\gamma = P(X \\leq x) = {:.2f}$\n'
+                        '$\\omega = {:.2f}$'
+                        .format(evaluation_result.quantile[1],
+                                evaluation_result.observed_statistic),
+                        xycoords='axes fraction',
+                        xy=xy,
+                        fontsize=text_fontsize)
+
+    ax.set_title(title, fontsize=title_fontsize)
+    ax.set_xlabel(xlabel, fontsize=xlabel_fontsize)
+    ax.set_ylabel(ylabel, fontsize=ylabel_fontsize)
+
+    if tight_layout:
+        ax.figure.tight_layout()
+
+    if filename is not None:
+        ax.figure.savefig(filename + '.pdf')
+        ax.figure.savefig(filename + '.png', dpi=300)
+
+    # this function has different return types; before release, refactor and
+    # remove plotting from evaluation. Plotting should be separated from
+    # evaluation; evaluation should return an object that can be plotted,
+    # maybe with a verbose option.
+    if show:
+        pyplot.show()
+    return ax
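+
+# Illustrative usage sketch (assumes a catalog-based `forecast` and an
+# `observed_catalog` prepared elsewhere):
+#
+#   from csep.core import catalog_evaluations
+#   result = catalog_evaluations.pseudolikelihood_test(forecast,
+#                                                      observed_catalog)
+#   ax = plot_likelihood_test(result, show=True)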
+ + + +
+def plot_spatial_test(evaluation_result, axes=None, plot_args=None, show=True):
+    """
+    Plots the spatial test result from a catalog-based forecast.
+
+    Args:
+        evaluation_result: object-like var that implements the interface of the above EvaluationResult
+        axes (matplotlib.Axes): axes object used to chain this plot
+        show (bool): if true, call pyplot.show()
+        plot_args (dict): optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below
+
+    Optional plotting arguments:
+        * figsize: (:class:`list`/:class:`tuple`) - default: [6.4, 4.8]
+        * title: (:class:`str`) - default: 'Spatial Test'
+        * title_fontsize: (:class:`float`) Fontsize of the plot title - default: 10
+        * xlabel: (:class:`str`) - default: 'Normalized pseudo-likelihood'
+        * xlabel_fontsize: (:class:`float`) - default: 10
+        * xticks_fontsize: (:class:`float`) - default: 10
+        * ylabel_fontsize: (:class:`float`) - default: 10
+        * text_fontsize: (:class:`float`) - default: 14
+        * tight_layout: (:class:`bool`) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True
+        * percentile: (:class:`float`) Critical region to shade on histogram - default: 95
+        * bins: (:class:`str`) Binning type. see matplotlib.hist for more info - default: 'auto'
+        * xy: (:class:`list`/:class:`tuple`) - default: (0.2, 0.6)
+
+    Returns:
+        ax (matplotlib.axes.Axes): can be used to modify the figure
+    """
+
+    plot_args = plot_args or {}
+    title = plot_args.get('title', 'Spatial Test')
+    title_fontsize = plot_args.get('title_fontsize', None)
+    xlabel = plot_args.get('xlabel', 'Normalized pseudo-likelihood')
+    xlabel_fontsize = plot_args.get('xlabel_fontsize', None)
+    ylabel = plot_args.get('ylabel', 'Number of catalogs')
+    ylabel_fontsize = plot_args.get('ylabel_fontsize', None)
+    text_fontsize = plot_args.get('text_fontsize', 14)
+    tight_layout = plot_args.get('tight_layout', True)
+    percentile = plot_args.get('percentile', 95)
+    filename = plot_args.get('filename', None)
+    bins = plot_args.get('bins', 'auto')
+    xy = plot_args.get('xy', (0.2, 0.6))
+
+    # handle plotting
+    if axes:
+        chained = True
+    else:
+        chained = False
+
+    # supply fixed arguments to plots; might want to add other defaults here
+    fixed_plot_args = {'obs_label': evaluation_result.obs_name,
+                       'sim_label': evaluation_result.sim_name}
+    plot_args.update(fixed_plot_args)
+
+    ax = plot_histogram(evaluation_result.test_distribution,
+                        evaluation_result.observed_statistic,
+                        catalog=evaluation_result.obs_catalog_repr,
+                        plot_args=plot_args,
+                        bins=bins,
+                        axes=axes,
+                        percentile=percentile)
+
+    # annotate plot with p-values
+    if not chained:
+        try:
+            ax.annotate('$\\gamma = P(X \\leq x) = {:.2f}$\n'
+                        '$\\omega = {:.2f}$'
+                        .format(evaluation_result.quantile,
+                                evaluation_result.observed_statistic),
+                        xycoords='axes fraction',
+                        xy=xy,
+                        fontsize=text_fontsize)
+        except TypeError:
+            # if both quantiles are provided, plot the less-equal quantile
+            ax.annotate('$\\gamma = P(X \\leq x) = {:.2f}$\n'
+                        '$\\omega = {:.2f}$'
+                        .format(evaluation_result.quantile[1],
+                                evaluation_result.observed_statistic),
+                        xycoords='axes fraction',
+                        xy=xy,
+                        fontsize=text_fontsize)
+
+    ax.set_title(title, fontsize=title_fontsize)
+    ax.set_xlabel(xlabel, fontsize=xlabel_fontsize)
+    ax.set_ylabel(ylabel, fontsize=ylabel_fontsize)
+
+    if tight_layout:
+        ax.figure.tight_layout()
+
+    if filename is not None:
+        ax.figure.savefig(filename + '.pdf')
+        ax.figure.savefig(filename + '.png', dpi=300)
+
+    # this function has different return types; before release, refactor and
+    # remove plotting from evaluation. Plotting should be separated from
+    # evaluation; evaluation should return an object that can be plotted,
+    # maybe with a verbose option.
+    if show:
+        pyplot.show()
+
+    return ax
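+
+# Illustrative usage sketch (assumes a catalog-based `forecast` and an
+# `observed_catalog` prepared elsewhere):
+#
+#   from csep.core import catalog_evaluations
+#   result = catalog_evaluations.spatial_test(forecast, observed_catalog)
+#   ax = plot_spatial_test(result, show=True)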
+ + + +
+def plot_calibration_test(evaluation_result, axes=None, plot_args=None,
+                          show=False):
+    """ Plots a QQ plot of the quantile scores from a calibration test against
+    standard uniform quantiles, with confidence bands computed from the beta
+    distribution of the order statistics. """
+    # set up QQ plots and KS test
+    plot_args = plot_args or {}
+    n = len(evaluation_result.test_distribution)
+    k = numpy.arange(1, n + 1)
+    # plotting points for uniform quantiles
+    pp = k / (n + 1)
+    # compute confidence intervals for order statistics using beta distribution
+    ulow = scipy.stats.beta.ppf(0.025, k, n - k + 1)
+    uhigh = scipy.stats.beta.ppf(0.975, k, n - k + 1)
+
+    # get values from plot_args
+    label = plot_args.get('label', evaluation_result.sim_name)
+    xlim = plot_args.get('xlim', [0, 1.05])
+    ylim = plot_args.get('ylim', [0, 1.05])
+    xlabel = plot_args.get('xlabel', 'Quantile scores')
+    ylabel = plot_args.get('ylabel', 'Standard uniform quantiles')
+    color = plot_args.get('color', 'tab:blue')
+    marker = plot_args.get('marker', 'o')
+    size = plot_args.get('size', 5)
+    legend_loc = plot_args.get('legend_loc', 'best')
+
+    # quantiles should be sorted for plotting
+    sorted_td = numpy.sort(evaluation_result.test_distribution)
+
+    if axes is None:
+        fig, ax = pyplot.subplots()
+    else:
+        ax = axes
+
+    # plot QQ plot
+    _ = ax.scatter(sorted_td, pp, label=label, c=color, marker=marker, s=size)
+    # plot uncertainty on uniform quantiles
+    ax.plot(pp, pp, '-k')
+    ax.plot(ulow, pp, ':k')
+    ax.plot(uhigh, pp, ':k')
+
+    ax.set_ylim(ylim)
+    ax.set_xlim(xlim)
+    ax.set_xlabel(xlabel)
+    ax.set_ylabel(ylabel)
+
+    ax.legend(loc=legend_loc)
+
+    if show:
+        pyplot.show()
+
+    return ax
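+
+# Illustrative usage sketch (assumes `test_results` is a collection of
+# catalog-based evaluation results gathered over many forecast windows; the
+# exact calibration-test entry point is an assumption here):
+#
+#   from csep.core import catalog_evaluations
+#   calibration = catalog_evaluations.calibration_test(test_results)
+#   ax = plot_calibration_test(calibration, show=True)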
+ + + +
+def plot_poisson_consistency_test(eval_results, normalize=False,
+                                  one_sided_lower=False, axes=None,
+                                  plot_args=None, show=False):
+    """ Plots results from CSEP1 tests following the CSEP1 convention.
+
+    Note: All of the evaluations should be from the same type of evaluation; otherwise, the results will not be
+    comparable on the same figure.
+
+    Args:
+        eval_results (list): Contains the test results :class:`csep.core.evaluations.EvaluationResult` (see note above)
+        normalize (bool): select this if the forecast likelihood should be normalized by the observed likelihood. useful
+                          for plotting simulation-based tests.
+        one_sided_lower (bool): select this if the plot should be for a one-sided test
+        plot_args (dict): optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below
+
+    Optional plotting arguments:
+        * figsize: (:class:`list`/:class:`tuple`) - default: [6.4, 4.8]
+        * title: (:class:`str`) - default: name of the first evaluation result type
+        * title_fontsize: (:class:`float`) Fontsize of the plot title - default: 10
+        * xlabel: (:class:`str`) - default: 'X'
+        * xlabel_fontsize: (:class:`float`) - default: 10
+        * xticks_fontsize: (:class:`float`) - default: 10
+        * ylabel_fontsize: (:class:`float`) - default: 10
+        * color: (:class:`float`/:class:`None`) If None, sets it to red/green according to :func:`_get_marker_style` - default: 'black'
+        * linewidth: (:class:`float`) - default: 1.5
+        * capsize: (:class:`float`) - default: 4
+        * hbars: (:class:`bool`) Flag to draw horizontal bars for each model - default: True
+        * tight_layout: (:class:`bool`) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True
+
+    Returns:
+        ax (:class:`matplotlib.pyplot.axes` object)
+    """
+
+    try:
+        results = list(eval_results)
+    except TypeError:
+        results = [eval_results]
+    results.reverse()
+    # Parse plot arguments. More can be added here
+    if plot_args is None:
+        plot_args = {}
+    figsize = plot_args.get('figsize', None)
+    title = plot_args.get('title', results[0].name)
+    title_fontsize = plot_args.get('title_fontsize', None)
+    xlabel = plot_args.get('xlabel', '')
+    xlabel_fontsize = plot_args.get('xlabel_fontsize', None)
+    xticks_fontsize = plot_args.get('xticks_fontsize', None)
+    ylabel_fontsize = plot_args.get('ylabel_fontsize', None)
+    color = plot_args.get('color', 'black')
+    linewidth = plot_args.get('linewidth', None)
+    capsize = plot_args.get('capsize', 4)
+    hbars = plot_args.get('hbars', True)
+    tight_layout = plot_args.get('tight_layout', True)
+    percentile = plot_args.get('percentile', 95)
+    plot_mean = plot_args.get('mean', False)
+
+    if axes is None:
+        fig, ax = pyplot.subplots(figsize=figsize)
+    else:
+        ax = axes
+        fig = ax.get_figure()
+
+    xlims = []
+    for index, res in enumerate(results):
+        # handle analytical distributions first; they are all in the form ['name', parameters].
+        if res.test_distribution[0] == 'poisson':
+            plow = scipy.stats.poisson.ppf((1 - percentile / 100.) / 2.,
+                                           res.test_distribution[1])
+            phigh = scipy.stats.poisson.ppf(1 - (1 - percentile / 100.) / 2.,
+                                            res.test_distribution[1])
+            mean = res.test_distribution[1]
+            observed_statistic = res.observed_statistic
+        # empirical distributions
+        else:
+            if normalize:
+                test_distribution = numpy.array(
+                    res.test_distribution) - res.observed_statistic
+                observed_statistic = 0
+            else:
+                test_distribution = numpy.array(res.test_distribution)
+                observed_statistic = res.observed_statistic
+            # compute distribution depending on type of test
+            if one_sided_lower:
+                plow = numpy.percentile(test_distribution, 100 - percentile)
+                phigh = numpy.percentile(test_distribution, 100)
+            else:
+                plow = numpy.percentile(test_distribution,
+                                        (100 - percentile) / 2.)
+                phigh = numpy.percentile(test_distribution,
+                                         100 - (100 - percentile) / 2.)
+            mean = numpy.mean(test_distribution)
+
+        if not numpy.isinf(observed_statistic):  # skip if the test statistic diverges
+            percentile_lims = numpy.abs(numpy.array([[mean - plow,
+                                                      phigh - mean]]).T)
+            ax.plot(observed_statistic, index,
+                    _get_marker_style(observed_statistic, (plow, phigh),
+                                      one_sided_lower))
+            # fmt is 'ko' when plotting the mean and an empty string otherwise
+            ax.errorbar(mean, index, xerr=percentile_lims,
+                        fmt='ko' * plot_mean, capsize=capsize,
+                        linewidth=linewidth, ecolor=color)
+            # determine the limits to use
+            xlims.append((plow, phigh, observed_statistic))
+            # only extend the distribution when the observed statistic falls
+            # outside of it in the acceptable tail
+            if one_sided_lower:
+                if observed_statistic >= plow and phigh < observed_statistic:
+                    # draw dashed line to infinity
+                    xt = numpy.linspace(phigh, 99999, 100)
+                    yt = numpy.ones(100) * index
+                    ax.plot(xt, yt, linestyle='--', linewidth=linewidth,
+                            color=color)
+        else:
+            print('Observed statistic diverges for forecast %s, index %i.'
+                  ' Check for zero-valued bins within the forecast' % (
+                      res.sim_name, index))
+            ax.barh(index, 99999, left=-10000, height=1, color=['red'],
+                    alpha=0.5)
+
+    try:
+        ax.set_xlim(*_get_axis_limits(xlims))
+    except ValueError:
+        raise ValueError('All EvaluationResults have infinite '
+                         'observed_statistics')
+    ax.set_yticks(numpy.arange(len(results)))
+    ax.set_yticklabels([res.sim_name for res in results],
+                       fontsize=ylabel_fontsize)
+    ax.set_ylim([-0.5, len(results) - 0.5])
+    if hbars:
+        yTickPos = ax.get_yticks()
+        if len(yTickPos) >= 2:
+            ax.barh(yTickPos, numpy.array([99999] * len(yTickPos)),
+                    left=-10000,
+                    height=(yTickPos[1] - yTickPos[0]), color=['w', 'gray'],
+                    alpha=0.2, zorder=0)
+    ax.set_title(title, fontsize=title_fontsize)
+    ax.set_xlabel(xlabel, fontsize=xlabel_fontsize)
+    ax.tick_params(axis='x', labelsize=xticks_fontsize)
+    if tight_layout:
+        fig.tight_layout()
+
+    if show:
+        pyplot.show()
+
+    return ax
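+
+# Illustrative usage sketch (assumes a list of gridded `forecasts` and an
+# `observed_catalog` prepared elsewhere):
+#
+#   from csep.core import poisson_evaluations as poisson
+#   results = [poisson.number_test(f, observed_catalog) for f in forecasts]
+#   ax = plot_poisson_consistency_test(results,
+#                                      plot_args={'xlabel': 'Event count'},
+#                                      show=True)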
+ + + +
+def plot_comparison_test(results_t, results_w=None, axes=None, plot_args=None):
+    """Plots list of T-Test (and W-Test) Results"""
+
+    if plot_args is None:
+        plot_args = {}
+
+    figsize = plot_args.get('figsize', None)
+    title = plot_args.get('title', 'CSEP1 Comparison Test')
+    xlabel = plot_args.get('xlabel', None)
+    ylabel = plot_args.get('ylabel', 'Information gain per earthquake')
+    ylim = plot_args.get('ylim', (None, None))
+    capsize = plot_args.get('capsize', 2)
+    linewidth = plot_args.get('linewidth', 1)
+    markersize = plot_args.get('markersize', 2)
+    percentile = plot_args.get('percentile', 95)
+    xticklabels_rotation = plot_args.get('xticklabels_rotation', 90)
+    xlabel_fontsize = plot_args.get('xlabel_fontsize', 12)
+    ylabel_fontsize = plot_args.get('ylabel_fontsize', 12)
+
+    if axes is None:
+        fig, ax = pyplot.subplots(figsize=figsize)
+    else:
+        ax = axes
+        fig = ax.get_figure()
+
+    ax.axhline(y=0, linestyle='--', color='black')
+
+    results = zip(results_t, results_w) if results_w else zip(results_t)
+
+    for index, result in enumerate(results):
+        result_t = result[0]
+        result_w = result[1] if results_w else None
+
+        ylow = result_t.observed_statistic - result_t.test_distribution[0]
+        yhigh = result_t.test_distribution[1] - result_t.observed_statistic
+        color = _get_marker_t_color(result_t.test_distribution)
+
+        if numpy.isinf(result_t.observed_statistic):
+            print('Diverging information gain for forecast %s, index %i.'
+                  ' Check for zero-valued bins within the forecast' %
+                  (result_t.sim_name, index))
+            ax.axvspan(index - 0.5, index + 0.5, alpha=0.5, facecolor='red')
+        else:
+            ax.errorbar(index, result_t.observed_statistic,
+                        yerr=numpy.array([[ylow, yhigh]]).T, color=color,
+                        linewidth=linewidth, capsize=capsize)
+
+            if result_w is not None:
+                if _get_marker_w_color(result_w.quantile, percentile):
+                    facecolor = _get_marker_t_color(result_t.test_distribution)
+                else:
+                    facecolor = 'white'
+            else:
+                facecolor = 'white'
+            ax.plot(index, result_t.observed_statistic, marker='o',
+                    markerfacecolor=facecolor, markeredgecolor=color,
+                    markersize=markersize)
+
+    ax.set_xticks(numpy.arange(len(results_t)))
+    ax.set_xticklabels([res.sim_name[0] for res in results_t],
+                       rotation=xticklabels_rotation, fontsize=xlabel_fontsize)
+    ax.set_xlabel(xlabel)
+    ax.set_ylabel(ylabel, fontsize=ylabel_fontsize)
+    ax.set_title(title)
+    ax.yaxis.grid()
+    xTickPos = ax.get_xticks()
+    ax.yaxis.set_major_locator(matplotlib.ticker.MaxNLocator(integer=True))
+    ax.set_ylim([ylim[0], ylim[1]])
+    ax.set_xlim([-0.5, len(results_t) - 0.5])
+    if len(results_t) > 2:
+        ax.bar(xTickPos, numpy.array([9999] * len(xTickPos)), bottom=-2000,
+               width=(xTickPos[1] - xTickPos[0]), color=['gray', 'w'],
+               alpha=0.2)
+    fig.tight_layout()
+
+    return ax
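+
+# Illustrative usage sketch (assumes gridded `forecasts`, a `benchmark`
+# forecast, and an `observed_catalog` prepared elsewhere):
+#
+#   from csep.core import poisson_evaluations as poisson
+#   t_results = [poisson.paired_t_test(f, benchmark, observed_catalog)
+#                for f in forecasts]
+#   w_results = [poisson.w_test(f, observed_catalog) for f in forecasts]
+#   ax = plot_comparison_test(t_results, w_results)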
+ + + +def plot_consistency_test(eval_results, normalize=False, axes=None, + one_sided_lower=False, variance=None, plot_args=None, + show=False): + """ Plots results from CSEP1 tests following the CSEP1 convention. + + Note: All of the evaluations should be from the same type of evaluation, otherwise the results will not be + comparable on the same figure. + + Args: + eval_results (list): Contains the tests results :class:`csep.core.evaluations.EvaluationResult` (see note above) + normalize (bool): select this if the forecast likelihood should be normalized by the observed likelihood. useful + for plotting simulation based simulation tests. + one_sided_lower (bool): select this if the plot should be for a one sided test + plot_args(dict): optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below + + Optional plotting arguments: + * figsize: (:class:`list`/:class:`tuple`) - default: [6.4, 4.8] + * title: (:class:`str`) - default: name of the first evaluation result type + * title_fontsize: (:class:`float`) Fontsize of the plot title - default: 10 + * xlabel: (:class:`str`) - default: 'X' + * xlabel_fontsize: (:class:`float`) - default: 10 + * xticks_fontsize: (:class:`float`) - default: 10 + * ylabel_fontsize: (:class:`float`) - default: 10 + * color: (:class:`float`/:class:`None`) If None, sets it to red/green according to :func:`_get_marker_style` - default: 'black' + * linewidth: (:class:`float`) - default: 1.5 + * capsize: (:class:`float`) - default: 4 + * hbars: (:class:`bool`) Flag to draw horizontal bars for each model - default: True + * tight_layout: (:class:`bool`) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True + + Returns: + ax (:class:`matplotlib.pyplot.axes` object) + """ + + try: + results = list(eval_results) + except TypeError: + results = [eval_results] + results.reverse() + # Parse plot arguments. More can be added here + if plot_args is None: + plot_args = {} + figsize = plot_args.get('figsize', None) + title = plot_args.get('title', results[0].name) + title_fontsize = plot_args.get('title_fontsize', None) + xlabel = plot_args.get('xlabel', '') + xlabel_fontsize = plot_args.get('xlabel_fontsize', None) + xticks_fontsize = plot_args.get('xticks_fontsize', None) + ylabel_fontsize = plot_args.get('ylabel_fontsize', None) + color = plot_args.get('color', 'black') + linewidth = plot_args.get('linewidth', None) + capsize = plot_args.get('capsize', 4) + hbars = plot_args.get('hbars', True) + tight_layout = plot_args.get('tight_layout', True) + percentile = plot_args.get('percentile', 95) + plot_mean = plot_args.get('mean', False) + + if axes is None: + fig, ax = pyplot.subplots(figsize=figsize) + else: + ax = axes + fig = ax.get_figure() + + xlims = [] + + for index, res in enumerate(results): + # handle analytical distributions first, they are all in the form ['name', parameters]. + if res.test_distribution[0] == 'poisson': + plow = scipy.stats.poisson.ppf((1 - percentile / 100.) / 2., + res.test_distribution[1]) + phigh = scipy.stats.poisson.ppf(1 - (1 - percentile / 100.) / 2., + res.test_distribution[1]) + mean = res.test_distribution[1] + observed_statistic = res.observed_statistic + + elif res.test_distribution[0] == 'negative_binomial': + var = variance + observed_statistic = res.observed_statistic + mean = res.test_distribution[1] + upsilon = 1.0 - ((var - mean) / var) + tau = (mean ** 2 / (var - mean)) + plow = scipy.stats.nbinom.ppf((1 - percentile / 100.) 
/ 2., tau, + upsilon) + phigh = scipy.stats.nbinom.ppf(1 - (1 - percentile / 100.) / 2., + tau, upsilon) + + # empirical distributions + else: + if normalize: + test_distribution = numpy.array( + res.test_distribution) - res.observed_statistic + observed_statistic = 0 + else: + test_distribution = numpy.array(res.test_distribution) + observed_statistic = res.observed_statistic + # compute distribution depending on type of test + if one_sided_lower: + plow = numpy.percentile(test_distribution, 5) + phigh = numpy.percentile(test_distribution, 100) + else: + plow = numpy.percentile(test_distribution, 2.5) + phigh = numpy.percentile(test_distribution, 97.5) + mean = numpy.mean(res.test_distribution) + + if not numpy.isinf( + observed_statistic): # Check if test result does not diverges + percentile_lims = numpy.array([[mean - plow, phigh - mean]]).T + ax.plot(observed_statistic, index, + _get_marker_style(observed_statistic, (plow, phigh), + one_sided_lower)) + ax.errorbar(mean, index, xerr=percentile_lims, + fmt='ko' * plot_mean, capsize=capsize, + linewidth=linewidth, ecolor=color) + # determine the limits to use + xlims.append((plow, phigh, observed_statistic)) + # we want to only extent the distribution where it falls outside of it in the acceptable tail + if one_sided_lower: + if observed_statistic >= plow and phigh < observed_statistic: + # draw dashed line to infinity + xt = numpy.linspace(phigh, 99999, 100) + yt = numpy.ones(100) * index + ax.plot(xt, yt, linestyle='--', linewidth=linewidth, + color=color) + + else: + print('Observed statistic diverges for forecast %s, index %i.' + ' Check for zero-valued bins within the forecast' % ( + res.sim_name, index)) + ax.barh(index, 99999, left=-10000, height=1, color=['red'], + alpha=0.5) + + try: + ax.set_xlim(*_get_axis_limits(xlims)) + except ValueError: + raise ValueError( + 'All EvaluationResults have infinite observed_statistics') + ax.set_yticks(numpy.arange(len(results))) + ax.set_yticklabels([res.sim_name for res in results], + fontsize=ylabel_fontsize) + ax.set_ylim([-0.5, len(results) - 0.5]) + if hbars: + yTickPos = ax.get_yticks() + if len(yTickPos) >= 2: + ax.barh(yTickPos, numpy.array([99999] * len(yTickPos)), + left=-10000, + height=(yTickPos[1] - yTickPos[0]), color=['w', 'gray'], + alpha=0.2, zorder=0) + ax.set_title(title, fontsize=title_fontsize) + ax.set_xlabel(xlabel, fontsize=xlabel_fontsize) + ax.tick_params(axis='x', labelsize=xticks_fontsize) + if tight_layout: + ax.figure.tight_layout() + fig.tight_layout() + + if show: + pyplot.show() + + return ax + + +def plot_pvalues_and_intervals(test_results, ax, var=None): + """ Plots p-values and intervals for a list of Poisson or NBD test results + + Args: + test_results (list): list of EvaluationResults for N-test. All tests should use the same distribution + (ie Poisson or NBD). + ax (matplotlib.axes.Axes.axis): axes to use for plot. create using matplotlib + var (float): variance of the NBD distribution. Must be used for NBD plots. 
+
+    Returns:
+        ax (matplotlib.axes.Axes): axes handle containing this plot
+
+    Raises:
+        ValueError: throws error if NBD tests are supplied without a variance
+    """
+
+    variance = var
+    percentile = 97.5
+    p_values = []
+
+    # Differentiate between N-tests and other consistency tests
+    if test_results[0].name == 'NBD N-Test' or test_results[
+            0].name == 'Poisson N-Test':
+        legend_elements = [
+            matplotlib.lines.Line2D([0], [0], marker='o', color='red', lw=0,
+                                    label=r'p < 10e-5', markersize=10,
+                                    markeredgecolor='k'),
+            matplotlib.lines.Line2D([0], [0], marker='o', color='#FF7F50',
+                                    lw=0, label=r'10e-5 $\leq$ p < 10e-4',
+                                    markersize=10, markeredgecolor='k'),
+            matplotlib.lines.Line2D([0], [0], marker='o', color='gold', lw=0,
+                                    label=r'10e-4 $\leq$ p < 10e-3',
+                                    markersize=10, markeredgecolor='k'),
+            matplotlib.lines.Line2D([0], [0], marker='o', color='white', lw=0,
+                                    label=r'10e-3 $\leq$ p < 0.0125',
+                                    markersize=10, markeredgecolor='k'),
+            matplotlib.lines.Line2D([0], [0], marker='o', color='skyblue',
+                                    lw=0, label=r'0.0125 $\leq$ p < 0.025',
+                                    markersize=10, markeredgecolor='k'),
+            matplotlib.lines.Line2D([0], [0], marker='o', color='blue', lw=0,
+                                    label=r'p $\geq$ 0.025', markersize=10,
+                                    markeredgecolor='k')]
+        ax.legend(handles=legend_elements, loc=4, fontsize=13, edgecolor='k')
+        # Act on negative binomial tests
+        if test_results[0].name == 'NBD N-Test':
+            if var is None:
+                raise ValueError(
+                    "var must not be None if N-tests use the NBD distribution.")
+
+            for i in range(len(test_results)):
+                mean = test_results[i].test_distribution[1]
+                upsilon = 1.0 - ((variance - mean) / variance)
+                tau = (mean ** 2 / (variance - mean))
+                # lower quantile first: plow97 is the 1.25% quantile,
+                # phigh97 the 98.75% quantile
+                plow97 = scipy.stats.nbinom.ppf(
+                    (1 - percentile / 100.) / 2., tau, upsilon
+                )
+                phigh97 = scipy.stats.nbinom.ppf(
+                    1 - (1 - percentile / 100.) / 2., tau, upsilon
+                )
+                low97 = test_results[i].observed_statistic - plow97
+                high97 = phigh97 - test_results[i].observed_statistic
+                ax.errorbar(test_results[i].observed_statistic,
+                            (len(test_results) - 1) - i,
+                            xerr=numpy.array([[low97, high97]]).T, capsize=4,
+                            color='slategray', alpha=1.0, zorder=0)
+                # p-values calculated according to Meletti et al. (2021)
+                p_values.append(test_results[i].quantile[1] * 2.0)
+
+                if p_values[i] < 10e-5:
+                    ax.plot(test_results[i].observed_statistic,
+                            (len(test_results) - 1) - i, marker='o',
+                            color='red', markersize=8, zorder=2)
+                elif p_values[i] >= 10e-5 and p_values[i] < 10e-4:
+                    ax.plot(test_results[i].observed_statistic,
+                            (len(test_results) - 1) - i, marker='o',
+                            color='#FF7F50', markersize=8, zorder=2)
+                elif p_values[i] >= 10e-4 and p_values[i] < 10e-3:
+                    ax.plot(test_results[i].observed_statistic,
+                            (len(test_results) - 1) - i, marker='o',
+                            color='gold', markersize=8, zorder=2)
+                elif p_values[i] >= 10e-3 and p_values[i] < 0.0125:
+                    ax.plot(test_results[i].observed_statistic,
+                            (len(test_results) - 1) - i, marker='o',
+                            color='white', markersize=8, zorder=2)
+                elif p_values[i] >= 0.0125 and p_values[i] < 0.025:
+                    ax.plot(test_results[i].observed_statistic,
+                            (len(test_results) - 1) - i, marker='o',
+                            color='skyblue', markersize=8, zorder=2)
+                elif p_values[i] >= 0.025:
+                    ax.plot(test_results[i].observed_statistic,
+                            (len(test_results) - 1) - i, marker='o',
+                            color='blue', markersize=8, zorder=2)
+        # Act on Poisson N-test
+        if test_results[0].name == 'Poisson N-Test':
+            for i in range(len(test_results)):
+                plow97 = scipy.stats.poisson.ppf((1 - percentile / 100.)
/ 2., + test_results[ + i].test_distribution[1]) + phigh97 = scipy.stats.poisson.ppf( + 1 - (1 - percentile / 100.) / 2., + test_results[i].test_distribution[1]) + low97 = test_results[i].observed_statistic - plow97 + high97 = phigh97 - test_results[i].observed_statistic + ax.errorbar(test_results[i].observed_statistic, + (len(test_results) - 1) - i, + xerr=numpy.array([[low97, high97]]).T, capsize=4, + color='slategray', alpha=1.0, zorder=0) + p_values.append(test_results[i].quantile[1] * 2.0) + if p_values[i] < 10e-5: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', + color='red', markersize=8, zorder=2) + elif p_values[i] >= 10e-5 and p_values[i] < 10e-4: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', + color='#FF7F50', markersize=8, zorder=2) + elif p_values[i] >= 10e-4 and p_values[i] < 10e-3: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', + color='gold', markersize=8, zorder=2) + elif p_values[i] >= 10e-3 and p_values[i] < 0.0125: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', + color='white', markersize=8, zorder=2) + elif p_values[i] >= 0.0125 and p_values[i] < 0.025: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', + color='skyblue', markersize=8, zorder=2) + elif p_values[i] >= 0.025: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', + color='blue', markersize=8, zorder=2) + # Operate on all other consistency tests + else: + for i in range(len(test_results)): + plow97 = numpy.percentile(test_results[i].test_distribution, 2.5) + phigh97 = numpy.percentile(test_results[i].test_distribution, 97.5) + low97 = test_results[i].observed_statistic - plow97 + high97 = phigh97 - test_results[i].observed_statistic + ax.errorbar(test_results[i].observed_statistic, + (len(test_results) - 1) - i, + xerr=numpy.array([[low97, high97]]).T, capsize=4, + color='slategray', alpha=1.0, zorder=0) + p_values.append(test_results[i].quantile) + + if p_values[i] < 10e-5: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', color='red', + markersize=8, zorder=2) + elif p_values[i] >= 10e-5 and p_values[i] < 10e-4: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', + color='#FF7F50', markersize=8, zorder=2) + elif p_values[i] >= 10e-4 and p_values[i] < 10e-3: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', color='gold', + markersize=8, zorder=2) + elif p_values[i] >= 10e-3 and p_values[i] < 0.025: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', color='white', + markersize=8, zorder=2) + elif p_values[i] >= 0.025 and p_values[i] < 0.05: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', + color='skyblue', markersize=8, zorder=2) + elif p_values[i] >= 0.05: + ax.plot(test_results[i].observed_statistic, + (len(test_results) - 1) - i, marker='o', color='blue', + markersize=8, zorder=2) + + legend_elements = [ + matplotlib.lines.Line2D([0], [0], marker='o', color='red', lw=0, + label=r'p < 10e-5', markersize=10, + markeredgecolor='k'), + matplotlib.lines.Line2D([0], [0], marker='o', color='#FF7F50', + lw=0, label=r'10e-5 $\leq$ p < 10e-4', + markersize=10, markeredgecolor='k'), + matplotlib.lines.Line2D([0], [0], marker='o', color='gold', lw=0, + label=r'10e-4 $\leq$ p < 10e-3', + 
+                                    markersize=10, markeredgecolor='k'),
+            matplotlib.lines.Line2D([0], [0], marker='o', color='white', lw=0,
+                                    label=r'10e-3 $\leq$ p < 0.025',
+                                    markersize=10, markeredgecolor='k'),
+            matplotlib.lines.Line2D([0], [0], marker='o', color='skyblue',
+                                    lw=0, label=r'0.025 $\leq$ p < 0.05',
+                                    markersize=10, markeredgecolor='k'),
+            matplotlib.lines.Line2D([0], [0], marker='o', color='blue', lw=0,
+                                    label=r'p $\geq$ 0.05', markersize=10,
+                                    markeredgecolor='k')]
+
+        ax.legend(handles=legend_elements, loc=4, fontsize=13, edgecolor='k')
+
+    return ax
+
+
+def plot_concentration_ROC_diagram(forecast, catalog, linear=True, axes=None,
+                                   plot_uniform=True, savepdf=True,
+                                   savepng=True, show=True,
+                                   plot_args=None):
+    """
+    Plot Receiver operating characteristic (ROC) curves based on a forecast and a test catalog.
+
+    The ROC is computed following this procedure:
+        (1) Obtain spatial rates from the GriddedForecast.
+        (2) Rank the rates in descending order (highest rates first).
+        (3) Sort forecasted rates by the ordering found in (2), and normalize the rates so the cumulative sum equals unity.
+        (4) Obtain binned spatial rates from the observed catalog.
+        (5) Sort gridded observed rates by the ordering found in (2), and normalize so the cumulative sum equals unity.
+        (6) Compute spatial bin areas, sort by the ordering found in (2), and normalize so the cumulative sum equals unity.
+        (7) Plot ordered and cumulated rates (forecast and catalog) against ordered and cumulated bin areas.
+
+    Note that:
+        (1) The testing catalog and forecast should have exactly the same time-window (duration)
+        (2) Forecasts should be defined over the same region
+        (3) If calling this function multiple times, update the color in plot_args
+
+    Args:
+        forecast (:class:`csep.forecast.GriddedForecast`):
+        catalog (:class:`AbstractBaseCatalog`): evaluation catalog
+        linear (bool): if true, a linear x-axis is used; if false a logarithmic x-axis is used.
+        axes (:class:`matplotlib.pyplot.ax`): previously defined ax object
+        savepdf (bool): if true, saves the figure as a pdf file
+        savepng (bool): if true, saves the figure as a png file
+        show (bool): if true, displays the figure
+        plot_uniform (bool): if true, include the uniform forecast on the plot
+
+    Optional plotting arguments:
+        * figsize: (:class:`list`/:class:`tuple`) - default: [9, 8]
+        * forecast_linecolor: (:class:`str`) - default: 'black'
+        * forecast_linestyle: (:class:`str`) - default: '-'
+        * observed_linecolor: (:class:`str`) - default: 'blue'
+        * observed_linestyle: (:class:`str`) - default: '-'
+        * forecast_label: (:class:`str`) - default: 'Forecast (name)'
+        * observed_label: (:class:`str`) - default: 'Observed (name)'
+        * legend_fontsize: (:class:`float`) Fontsize of the legend - default: 16
+        * legend_loc: (:class:`str`) - default: 'upper left'
+        * title_fontsize: (:class:`float`) Fontsize of the plot title - default: 18
+        * label_fontsize: (:class:`float`) Fontsize of the axis labels - default: 14
+        * title: (:class:`str`) - default: 'Concentration ROC Curve'
+        * filename: (:class:`str`) - default: 'roc_figure'
+
+    Returns:
+        :class:`matplotlib.pyplot.ax` object
+
+    Raises:
+        TypeError: throws error if a CatalogForecast-like object is provided
+        RuntimeError: throws error if the Catalog and Forecast do not have the same region
+
+    Written by Han Bao, UCLA, March 2021. Modified June 2021.
+    Modified by Emanuele Biondini, University of Bologna, May 2024.
+ """ + if not catalog.region == forecast.region: + raise RuntimeError( + "catalog region and forecast region must be identical.") + + # Parse plotting arguments + plot_args = plot_args or {} + figsize = plot_args.get('figsize', (9, 8)) + forecast_linecolor = plot_args.get('forecast_linecolor', 'black') + forecast_linestyle = plot_args.get('forecast_linestyle', '-') + observed_linecolor = plot_args.get('observed_linecolor', 'blue') + observed_linestyle = plot_args.get('observed_linestyle', '-') + legend_fontsize = plot_args.get('legend_fontsize', 16) + legend_loc = plot_args.get('legend_loc', 'upper left') + title_fontsize = plot_args.get('title_fontsize', 18) + label_fontsize = plot_args.get('label_fontsize', 14) + filename = plot_args.get('filename', 'roc_figure') + title = plot_args.get('title', 'Concentration ROC Curve') + + # Plot catalog ordered by forecast rates + name = forecast.name + if not name: + name = '' + else: + name = f'({name})' + + forecast_label = plot_args.get('forecast_label', f'Forecast {name}') + observed_label = plot_args.get('observed_label', f'Observed {name}') + + # Initialize figure + if axes is not None: + ax = axes + else: + fig, ax = pyplot.subplots(figsize=figsize) + + # This part could be vectorized if optimizations become necessary + # Initialize array to store cell area in km^2 + area_km2 = catalog.region.get_cell_area() + obs_counts = catalog.spatial_counts() + + # Obtain rates (or counts) aggregated in spatial cells + # If CatalogForecast, needs expected rates. Might take some time to compute. + rate = forecast.spatial_counts() + + # Get index of rates (descending sort) + I = numpy.argsort(rate) + I = numpy.flip(I) + + # Order forecast and cells rates by highest rate cells first + fore_norm_sorted = numpy.cumsum(rate[I]) / numpy.sum(rate) + area_norm_sorted = numpy.cumsum(area_km2[I]) / numpy.sum(area_km2) + + # Compute normalized and sorted rates of observations + obs_norm_sorted = numpy.cumsum(obs_counts[I]) / numpy.sum(obs_counts) + + # Plot uniform forecast + if plot_uniform: + ax.plot(area_norm_sorted, area_norm_sorted, 'k--', label='Uniform') + + # Plot sorted and normalized forecast (descending order) + ax.plot(area_norm_sorted, fore_norm_sorted, + label=forecast_label, + color=forecast_linecolor, + linestyle=forecast_linestyle) + + # Plot cell-wise rates of observed catalog ordered by forecast rates (descending order) + ax.step(area_norm_sorted, obs_norm_sorted, + label=observed_label, + color=observed_linecolor, + linestyle=observed_linestyle) + + # Plotting arguments + ax.set_ylabel("True Positive Rate", fontsize=label_fontsize) + ax.set_xlabel('False Positive Rate (Normalized Area)', + fontsize=label_fontsize) + if linear==True: + legend_loc=plot_args.get('legend_loc', 'lower right') + elif linear==False: + ax.set_xscale('log') + + ax.legend(loc=legend_loc, shadow=True, fontsize=legend_fontsize) + ax.set_title(title, fontsize=title_fontsize) + + if filename: + if savepdf: + outFile = "{}.pdf".format(filename) + pyplot.savefig(outFile, format='pdf') + if savepng: + outFile = "{}.png".format(filename) + pyplot.savefig(outFile, format='png') + + if show: + pyplot.show() + return ax + + +def plot_ROC_diagram(forecast, catalog, linear=True, axes=None, plot_uniform=True, savepdf=True, savepng=True, show=True, + plot_args=None): + """ + Plot Receiver operating characteristic (ROC) based on forecast and test catalogs using the contingency table. 
+    The ROC is computed following this procedure:
+        (1) Obtain spatial rates from the GriddedForecast and the observed events from the observed catalog.
+        (2) Rank the rates in descending order (highest rates first).
+        (3) Sort forecasted rates by the ordering found in (2), and normalize the rates so their sum equals unity.
+        (4) Obtain binned spatial rates from the observed catalog.
+        (5) Sort gridded observed rates by the ordering found in (2).
+        (6) Test each ordered and normalized forecasted rate defined in (3) as a threshold value to obtain the
+            corresponding contingency table.
+        (7) Define the H (Success rate) and F (False alarm rate) for each threshold using the information provided
+            by the corresponding contingency table defined in (6).
+
+    Note that:
+        (1) The testing catalog and forecast should have exactly the same time-window (duration)
+        (2) Forecasts should be defined over the same region
+        (3) If calling this function multiple times, update the color in plot_args
+        (4) The user can choose the x-scale (linear or log), see the Args section below
+
+    Args:
+        forecast (:class:`csep.forecast.GriddedForecast`):
+        catalog (:class:`AbstractBaseCatalog`): evaluation catalog
+        linear (bool): if true, a linear x-axis is used; if false a logarithmic x-axis is used.
+        axes (:class:`matplotlib.pyplot.ax`): previously defined ax object
+        savepdf (bool): if true, saves the figure as a pdf file
+        savepng (bool): if true, saves the figure as a png file
+        plot_uniform (bool): if true, include the uniform forecast on the plot
+
+    Optional plotting arguments:
+        * figsize: (:class:`list`/:class:`tuple`) - default: [9, 8]
+        * forecast_linestyle: (:class:`str`) - default: '-'
+        * legend_fontsize: (:class:`float`) Fontsize of the legend - default: 16
+        * legend_loc: (:class:`str`) - default: 'upper left'
+        * title_fontsize: (:class:`float`) Fontsize of the plot title - default: 16
+        * label_fontsize: (:class:`float`) Fontsize of the axis labels - default: 14
+        * title: (:class:`str`) - default: 'ROC Curve from contingency table'
+        * filename: (:class:`str`) - default: 'contingency_roc_figure'
+
+    Returns:
+        :class:`matplotlib.pyplot.ax` object
+
+    Raises:
+        TypeError: throws error if a CatalogForecast-like object is provided
+        RuntimeError: throws error if the Catalog and Forecast do not have the same region
+
+    Written by Emanuele Biondini, UNIBO, March 2023.
+ """ + if not catalog.region == forecast.region: + raise RuntimeError( + "catalog region and forecast region must be identical.") + + # Parse plotting arguments + plot_args = plot_args or {} + figsize = plot_args.get('figsize', (9, 8)) + forecast_linestyle = plot_args.get('forecast_linestyle', '-') + legend_fontsize = plot_args.get('legend_fontsize', 16) + legend_loc = plot_args.get('legend_loc', 'upper left') + title_fontsize = plot_args.get('title_fontsize', 16) + label_fontsize = plot_args.get('label_fontsize', 14) + title = plot_args.get('title', 'ROC Curve from contingency table') + filename = plot_args.get('filename', 'contingency_roc_figure') + + # Initialize figure + if axes is not None: + ax = axes + else: + fig, ax = pyplot.subplots(figsize=figsize) + + name = forecast.name + if not name: + name = '' + else: + name = f'{name}' + + forecast_label = plot_args.get('forecast_label', f'{name}') + observed_label = plot_args.get('observed_label', f'{name}') + + # Obtain forecast rates (or counts) and observed catalog aggregated in spatial cells + rate = forecast.spatial_counts() + obs_counts = catalog.spatial_counts() + # Define the threshold to be analysed to compile the contingency table and draw the ROC curve + + # Get index of rates (descending sort) + I = numpy.argsort(rate) + I = numpy.flip(I) + + # Order forecast and cells rates by highest rate cells first and normalize the rates. + thresholds = (rate[I]) / numpy.sum(rate) + obs_counts = obs_counts[I] + + Table_ROC = pandas.DataFrame({ + 'Threshold': [], + 'Successful_bins': [], + 'Obs_active_bins': [], + 'H': [], + 'F': [] + + }) + + #Each forecasted and normalized rate are tested as a threshold value to define the contingency table. + for threshold in thresholds: + threshold = float(threshold) + + binary_forecast = numpy.where(thresholds >= threshold, 1, 0) + forecastedYes_observedYes = obs_counts[(binary_forecast == 1) & (obs_counts > 0)] + forecastedYes_observedNo=obs_counts[(binary_forecast == 1) & (obs_counts == 0)] + forecastedNo_observedYes=obs_counts[(binary_forecast == 0) & (obs_counts > 0)] + forecastedNo_observedNo = obs_counts[(binary_forecast == 0) & (obs_counts == 0)] + # Creating the DataFrame for the contingency table + data = { + "Observed": [len(forecastedYes_observedYes), len(forecastedNo_observedYes)], + "Not Observed": [len(forecastedYes_observedNo), len(forecastedNo_observedNo)] + } + index = ["Forecasted", "Not Forecasted"] + contingency_df = pandas.DataFrame(data, index=index) + + H = contingency_df.loc['Forecasted', 'Observed'] / ( + contingency_df.loc['Forecasted', 'Observed'] + contingency_df.loc['Not Forecasted', 'Observed']) + F = contingency_df.loc['Forecasted', 'Not Observed'] / ( + contingency_df.loc['Forecasted', 'Not Observed'] + contingency_df.loc[ + 'Not Forecasted', 'Not Observed']) + + threshold_row = { + 'Threshold': threshold, + 'Successful_bins': contingency_df.loc['Forecasted', 'Observed'], + 'Obs_active_bins': contingency_df['Observed'].sum(), + 'H': H, + 'F': F + + } + threshold_row_df = pandas.DataFrame([threshold_row]) + + # Concatena threshold_row_df a Table_molchan + Table_ROC = pandas.concat([Table_ROC, threshold_row_df], ignore_index=True) + + # to start the trajecroy in the poin (0,0) + first_row = pandas.DataFrame({'H': [0], 'F': [0]}) + Table_ROC = pandas.concat([first_row, Table_ROC], ignore_index=True) + + # Plot the ROC curve + ax.plot((Table_ROC['F']), (Table_ROC['H']), + label=forecast_label, + color='black', + linestyle=forecast_linestyle) + + # Plot uniform forecast + 
+    if plot_uniform:
+        x_uniform = numpy.arange(0, 1.001, 0.001)
+        y_uniform = numpy.arange(0, 1.001, 0.001)
+        ax.plot(x_uniform, y_uniform, linestyle='--', color='gray', label='SUP')
+
+    # Plotting arguments
+    ax.set_ylabel("Hit Rate", fontsize=label_fontsize)
+    ax.set_xlabel('Fraction of false alarms', fontsize=label_fontsize)
+
+    if not linear:
+        ax.set_xscale('log')
+
+    ax.set_yscale('linear')
+    ax.tick_params(axis='x', labelsize=label_fontsize)
+    ax.tick_params(axis='y', labelsize=label_fontsize)
+    ax.legend(loc=legend_loc, shadow=True, fontsize=legend_fontsize)
+    ax.set_title(title, fontsize=title_fontsize)
+
+    if filename:
+        if savepdf:
+            outFile = "{}.pdf".format(filename)
+            pyplot.savefig(outFile, format='pdf')
+        if savepng:
+            outFile = "{}.png".format(filename)
+            pyplot.savefig(outFile, format='png')
+
+    if show:
+        pyplot.show()
+    return ax
+
+
+def plot_Molchan_diagram(forecast, catalog, linear=True, axes=None, plot_uniform=True, savepdf=True, savepng=True,
+                         show=True, plot_args=None):
+    """
+    Plot the Molchan diagram based on forecast and test catalogs using the contingency table.
+    The Area Skill score and its error are shown in the legend.
+
+    The Molchan diagram is computed following this procedure:
+        (1) Obtain spatial rates from the GriddedForecast and the observed events from the observed catalog.
+        (2) Rank the rates in descending order (highest rates first).
+        (3) Sort forecasted rates by the ordering found in (2), and normalize the rates so their sum equals unity.
+        (4) Obtain binned spatial rates from the observed catalog.
+        (5) Sort gridded observed rates by the ordering found in (2).
+        (6) Test each ordered and normalized forecasted rate defined in (3) as a threshold value to obtain the
+            corresponding contingency table.
+        (7) Define the "nu" (Miss rate) and "tau" (Fraction of spatially alarmed cells) for each threshold using the
+            information provided by the corresponding contingency table defined in (6).
+
+    Note that:
+        (1) The testing catalog and forecast should have exactly the same time-window (duration)
+        (2) Forecasts should be defined over the same region
+        (3) If calling this function multiple times, update the color in plot_args
+        (4) The user can choose the x-scale (linear or log), see the Args section below
+
+    Args:
+        forecast (:class:`csep.forecast.GriddedForecast`):
+        catalog (:class:`AbstractBaseCatalog`): evaluation catalog
+        linear (bool): if true, a linear x-axis is used; if false a logarithmic x-axis is used.
+        axes (:class:`matplotlib.pyplot.ax`): previously defined ax object
+        savepdf (bool): if true, saves the figure as a pdf file
+        savepng (bool): if true, saves the figure as a png file
+        show (bool): if true, displays the figure
+        plot_uniform (bool): if true, include the uniform forecast on the plot
+
+    Optional plotting arguments:
+        * figsize: (:class:`list`/:class:`tuple`) - default: [9, 8]
+        * forecast_linestyle: (:class:`str`) - default: '-'
+        * legend_fontsize: (:class:`float`) Fontsize of the legend - default: 16
+        * legend_loc: (:class:`str`) - default: 'lower left'
+        * title_fontsize: (:class:`float`) Fontsize of the plot title - default: 16
+        * label_fontsize: (:class:`float`) Fontsize of the axis labels - default: 14
+        * title: (:class:`str`) - default: '' (empty)
+        * filename: (:class:`str`) - default: 'molchan_figure'
+
+    Returns:
+        :class:`matplotlib.pyplot.ax` object
+
+    Raises:
+        TypeError: throws error if a CatalogForecast-like object is provided
+        RuntimeError: throws error if the Catalog and Forecast do not have the same region
+
+    Written by Emanuele Biondini, UNIBO, March 2023.
+ """ + if not catalog.region == forecast.region: + raise RuntimeError( + "catalog region and forecast region must be identical.") + + # Parse plotting arguments + plot_args = plot_args or {} + figsize = plot_args.get('figsize', (9, 8)) + forecast_linestyle = plot_args.get('forecast_linestyle', '-') + legend_fontsize = plot_args.get('legend_fontsize', 16) + legend_loc = plot_args.get('legend_loc', 'lower left') + title_fontsize = plot_args.get('title_fontsize', 16) + label_fontsize = plot_args.get('label_fontsize', 14) + title = plot_args.get('title', '') + filename = plot_args.get('filename', 'molchan_figure') + + # Initialize figure + if axes is not None: + ax = axes + else: + fig, ax = pyplot.subplots(figsize=figsize) + + name = forecast.name + if not name: + name = '' + else: + name = f'{name}' + + forecast_label = plot_args.get('forecast_label', f'{name}') + observed_label = plot_args.get('observed_label', f'{name}') + + # Obtain forecast rates (or counts) and observed catalog aggregated in spatial cells + rate = forecast.spatial_counts() + obs_counts = catalog.spatial_counts() + + # Define the threshold to be analysed tp draw the Molchan diagram + + # Get index of rates (descending sort) + I = numpy.argsort(rate) + I = numpy.flip(I) + + # Order forecast and cells rates by highest rate cells first + thresholds = (rate[I]) / numpy.sum(rate) + obs_counts = obs_counts[I] + + Table_molchan = pandas.DataFrame({ + 'Threshold': [], + 'Successful_bins': [], + 'Obs_active_bins': [], + 'tau': [], + 'nu': [], + 'R_score': [] + }) + + # Each forecasted and normalized rate are tested as a threshold value to define the contingency table. + for threshold in thresholds: + threshold = float(threshold) + + binary_forecast = numpy.where(thresholds >= threshold, 1, 0) + forecastedYes_observedYes = obs_counts[(binary_forecast == 1) & (obs_counts > 0)] + forecastedYes_observedNo = obs_counts[(binary_forecast == 1) & (obs_counts == 0)] + forecastedNo_observedYes = obs_counts[(binary_forecast == 0) & (obs_counts > 0)] + forecastedNo_observedNo = obs_counts[(binary_forecast == 0) & (obs_counts == 0)] + # Creating the DataFrame for the contingency table + data = { + "Observed": [len(forecastedYes_observedYes), len(forecastedNo_observedYes)], + "Not Observed": [len(forecastedYes_observedNo), len(forecastedNo_observedNo)] + } + index = ["Forecasted", "Not Forecasted"] + contingency_df = pandas.DataFrame(data, index=index) + nu = contingency_df.loc['Not Forecasted', 'Observed'] / contingency_df['Observed'].sum() + tau = contingency_df.loc['Forecasted'].sum() / (contingency_df.loc['Forecasted'].sum() + + contingency_df.loc['Not Forecasted'].sum()) + R_score = (contingency_df.loc['Forecasted', 'Observed'] / contingency_df['Observed'].sum()) - \ + (contingency_df.loc['Forecasted', 'Not Observed'] / contingency_df['Not Observed'].sum()) + + threshold_row = { + 'Threshold': threshold, + 'Successful_bins': contingency_df.loc['Forecasted', 'Observed'], + 'Obs_active_bins': contingency_df['Observed'].sum(), + 'tau': tau, + 'nu': nu, + 'R_score': R_score, + + } + threshold_row_df = pandas.DataFrame([threshold_row]) + + Table_molchan = pandas.concat([Table_molchan, threshold_row_df], ignore_index=True) + + bottom_row = {'Threshold': 'Full alarms', 'tau': 1, 'nu': 0, 'Obs_active_bins': contingency_df['Observed'].sum()} + top_row = {'Threshold': 'No alarms', 'tau': 0, 'nu': 1, 'Obs_active_bins': contingency_df['Observed'].sum()} + + Table_molchan = pandas.concat([pandas.DataFrame([top_row]), Table_molchan], 
+                                  ignore_index=True)
+    Table_molchan = pandas.concat([Table_molchan, pandas.DataFrame([bottom_row])], ignore_index=True)
+
+    # Computation of the Area Skill score (ASS)
+    Tab_as_score = pandas.DataFrame()
+    Tab_as_score['Threshold'] = Table_molchan['Threshold']
+    Tab_as_score['tau'] = Table_molchan['tau']
+    Tab_as_score['nu'] = Table_molchan['nu']
+
+    ONE = numpy.ones(len(Tab_as_score))
+    Tab_as_score['CUM_BAND'] = cumulative_trapezoid(ONE, Tab_as_score['tau'], initial=0) - \
+        cumulative_trapezoid(Tab_as_score['nu'], Tab_as_score['tau'], initial=0)
+    Tab_as_score['AS_score'] = numpy.divide(
+        Tab_as_score['CUM_BAND'],
+        cumulative_trapezoid(ONE, Tab_as_score['tau'], initial=0) + 1e-10)
+    Tab_as_score.loc[Tab_as_score.index[-1], 'AS_score'] = max(0.5, Tab_as_score['AS_score'].iloc[-1])
+    ASscore = numpy.round(Tab_as_score.loc[Tab_as_score.index[-1], 'AS_score'], 2)
+
+    # standard error of the AS score, rounded up to the nearest bin_width
+    bin_width = 0.01
+    import math
+    devstd = numpy.sqrt(1 / (12 * Table_molchan['Obs_active_bins'].iloc[0]))
+    devstd = devstd * bin_width ** -1
+    devstd = math.ceil(devstd + 0.5)
+    devstd = devstd / bin_width ** -1
+    Tab_as_score['st_dev'] = devstd
+    dev_std = numpy.round(devstd, 2)
+
+    # Plot the Molchan trajectory
+    ax.plot(Table_molchan['tau'], Table_molchan['nu'],
+            label=f"{forecast_label}, ASS={ASscore}±{dev_std} ",
+            color='black',
+            linestyle=forecast_linestyle)
+
+    # Plot uniform forecast
+    if plot_uniform:
+        x_uniform = numpy.arange(0, 1.001, 0.001)
+        y_uniform = numpy.arange(1.00, -0.001, -0.001)
+        ax.plot(x_uniform, y_uniform, linestyle='--', color='gray', label='SUP' + ' ASS=0.50')
+
+    # Plotting arguments
+    ax.set_ylabel("Miss Rate", fontsize=label_fontsize)
+    ax.set_xlabel('Fraction of area occupied by alarms', fontsize=label_fontsize)
+
+    if linear:
+        legend_loc = plot_args.get('legend_loc', 'upper right')
+    else:
+        ax.set_xscale('log')
+
+    ax.tick_params(axis='x', labelsize=label_fontsize)
+    ax.tick_params(axis='y', labelsize=label_fontsize)
+    ax.legend(loc=legend_loc, shadow=True, fontsize=legend_fontsize)
+    ax.set_title(title, fontsize=title_fontsize)
+
+    if filename:
+        if savepdf:
+            outFile = "{}.pdf".format(filename)
+            pyplot.savefig(outFile, format='pdf')
+        if savepng:
+            outFile = "{}.png".format(filename)
+            pyplot.savefig(outFile, format='png')
+
+    if show:
+        pyplot.show()
+    return ax
+
+
+def _get_marker_style(obs_stat, p, one_sided_lower):
+    """Returns matplotlib marker style as fmt string"""
+    if obs_stat < p[0] or obs_stat > p[1]:
+        # red circle
+        fmt = 'ro'
+    else:
+        # green square
+        fmt = 'gs'
+    if one_sided_lower:
+        if obs_stat < p[0]:
+            fmt = 'ro'
+        else:
+            fmt = 'gs'
+    return fmt
+
+
+def _get_marker_t_color(distribution):
+    """Returns the marker color as a string"""
+    if distribution[0] > 0. and distribution[1] > 0.:
+        fmt = 'green'
+    elif distribution[0] < 0. and distribution[1] < 0.:
+        fmt = 'red'
+    else:
+        fmt = 'grey'
+
+    return fmt
+
+
+def _get_marker_w_color(distribution, percentile):
+    """Returns True when the quantile falls below the significance threshold"""
+    return distribution < (1 - percentile / 100)
+
+
+def _get_axis_limits(pnts, border=0.05):
+    """Returns a tuple of x_min and x_max given points on plot."""
+    x_min = numpy.min(pnts)
+    x_max = numpy.max(pnts)
+    xd = (x_max - x_min) * border
+    return (x_min - xd, x_max + xd)
+
+
+def _get_basemap(basemap):
+    if basemap == 'stamen_terrain':
+        tiles = img_tiles.Stamen('terrain')
+    elif basemap == 'stamen_terrain-background':
+        tiles = img_tiles.Stamen('terrain-background')
+    elif basemap == 'google-satellite':
+        tiles = img_tiles.GoogleTiles(style='satellite')
+    elif basemap == 'ESRI_terrain':
+        webservice = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Terrain_Base/' \
+                     'MapServer/tile/{z}/{y}/{x}.jpg'
+        tiles = img_tiles.GoogleTiles(url=webservice)
+    elif basemap == 'ESRI_imagery':
+        webservice = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/' \
+                     'MapServer/tile/{z}/{y}/{x}.jpg'
+        tiles = img_tiles.GoogleTiles(url=webservice)
+    elif basemap == 'ESRI_relief':
+        webservice = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Shaded_Relief/' \
+                     'MapServer/tile/{z}/{y}/{x}.jpg'
+        tiles = img_tiles.GoogleTiles(url=webservice)
+    elif basemap == 'ESRI_topo':
+        # ESRI topographic basemap
+        webservice = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Topo_Map/' \
+                     'MapServer/tile/{z}/{y}/{x}.jpg'
+        tiles = img_tiles.GoogleTiles(url=webservice)
+    else:
+        try:
+            webservice = basemap
+            tiles = img_tiles.GoogleTiles(url=webservice)
+        except Exception:
+            raise ValueError('Basemap type not valid or not implemented')
+
+    return tiles
+[docs]
+def add_labels_for_publication(figure, style='bssa', labelsize=16):
+    """ Adds publication labels to the outside of a figure. """
+    all_axes = figure.get_axes()
+    ascii_iter = iter(string.ascii_lowercase)
+    for ax in all_axes:
+        # check for colorbar and ignore for annotations
+        if ax.get_label() == 'Colorbar':
+            continue
+        annot = next(ascii_iter)
+        if style == 'bssa':
+            ax.annotate(f'({annot})', (0.025, 1.025), xycoords='axes fraction',
+                        fontsize=labelsize)
+
+    return
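As a usage illustration (an editorial sketch, not part of the module source), a consistency-test comparison figure can be produced from a list of evaluation results; `results` is a hypothetical variable holding EvaluationResult objects from the same test type::

    from csep.utils import plots

    # `results` is an assumed list of EvaluationResult objects, e.g. the
    # output of csep.core.poisson_evaluations.spatial_test for several forecasts
    ax = plots.plot_consistency_test(results, one_sided_lower=True,
                                     plot_args={'xlabel': 'Log-likelihood'})
    ax.figure.savefig('s_test_comparison.png')  # hypothetical output path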
\ No newline at end of file
diff --git a/_modules/csep/utils/stats.html b/_modules/csep/utils/stats.html
new file mode 100644
index 00000000..7ffb995b
--- /dev/null
+++ b/_modules/csep/utils/stats.html
@@ -0,0 +1,479 @@
+csep.utils.stats — pyCSEP v0.6.3 documentation
Source code for csep.utils.stats
+import numpy
+import scipy.stats
+import scipy.special
+# PyCSEP imports
+from csep.core import regions
+
+
+[docs] +def sup_dist(cdf1, cdf2): + """ + given two cumulative distribution functions, compute the supremum of the set of absolute distances. + + note: + this function does not check that the ecdfs are ordered or balanced. beware! + """ + return numpy.max(numpy.absolute(cdf2 - cdf1))
+ + +
+[docs] +def sup_dist_na(data1, data2): + """ + computes the ks statistic for two ecdfs that are not necessarily aligned on the same values. performs this + operation by merging the two datasets together. this is taken from the 2sample ks test in the scipy codebase + + Args: + data1: (numpy array like) + data2: (numpy array like) + + Returns: + ks: sup dist from the two cdf functions + """ + data1, data2 = map(numpy.asarray, (data1, data2)) + n1 = len(data1) + n2 = len(data2) + data1 = numpy.sort(data1) + data2 = numpy.sort(data2) + data_all = numpy.concatenate([data1,data2]) + cdf1 = numpy.searchsorted(data1,data_all,side='right')/(1.0*n1) + cdf2 = (numpy.searchsorted(data2,data_all,side='right'))/(1.0*n2) + d = numpy.max(numpy.absolute(cdf1-cdf2)) + return d
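For orientation, a small self-contained sketch of ``sup_dist_na`` on two unaligned samples (synthetic data, not part of the module)::

    import numpy
    from csep.utils.stats import sup_dist_na

    rng = numpy.random.default_rng(42)
    data1 = rng.normal(loc=0.0, size=200)   # first sample
    data2 = rng.normal(loc=0.5, size=150)   # second sample, shifted mean

    # supremum distance between the two empirical CDFs (2-sample KS statistic)
    ks = sup_dist_na(data1, data2)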
+ + +
+[docs] +def cumulative_square_diff(cdf1, cdf2): + """ + given two cumulative distribution functions, compute the cumulative sq. diff of the set of distances. + + note: + this function does not check that the ecdfs are ordered or balanced. beware! + + Args: + cdf1: ndarray + cdf2: ndarray + + Returns: + cum_dist: scalar distance metric for the histograms + + """ + return numpy.sum((cdf2 - cdf1)**2)
+ + +
+[docs]
+def binned_ecdf(x, vals):
+    """
+    Returns P(X ≤ val) for each val in vals.
+    vals must be monotonically increasing and unique.
+
+    Returns:
+        tuple: sorted vals, and the ECDF computed at vals
+    """
+    # precompute ecdf for x: returns (sorted(x), ecdf())
+    if len(x) == 0:
+        return None
+    ex, ey = ecdf(x)
+    cdf = numpy.array(list(map(lambda val: less_equal_ecdf(x, val, cdf=(ex, ey)), vals)))
+    return vals, cdf
+ + +
+[docs]
+def ecdf(x):
+    """
+    Compute the ECDF of vector x. The result does not contain zero, and its
+    last value equals 1, so that F(x) == P(X ≤ x).
+
+    Args:
+        x (numpy.array): vector of values
+
+    Returns:
+        xs (numpy.array), ys (numpy.array)
+    """
+    xs = numpy.sort(x)
+    ys = numpy.arange(1, len(x) + 1) / float(len(x))
+    return xs, ys
+ + +
+[docs]
+def greater_equal_ecdf(x, val, cdf=()):
+    """
+    Given val return P(x ≥ val).
+
+    Args:
+        x (numpy.array): set of values
+        val (float): value
+        cdf (tuple): ecdf of x, should be tuple (sorted(x), ecdf(x))
+
+    Returns:
+        (float): probability that x ≥ val
+    """
+    x = numpy.asarray(x)
+    if x.shape[0] == 0:
+        return None
+    if not cdf:
+        ex, ey = ecdf(x)
+    else:
+        ex, ey = cdf
+
+    # complementary ECDF: ey reversed
+    eyc = ey[::-1]
+
+    # some short-circuit cases for discrete distributions
+    if val > ex[-1]:
+        return 0.0
+    if val < ex[0]:
+        return 1.0
+    return eyc[numpy.searchsorted(ex, val)]
+ + +
+[docs] +def less_equal_ecdf(x, val, cdf=()): + """ + Given val return P(x ≤ val). + + Args: + x (numpy.array): set of values + val (float): value + + Returns: + (float): probability that x ≤ val + """ + x = numpy.asarray(x) + if x.shape[0] == 0: + return None + if not cdf: + ex, ey = ecdf(x) + else: + ex, ey = cdf + # some short-circuit cases for discrete distributions + if val > ex[-1]: + return 1.0 + if val < ex[0]: + return 0.0 + # uses numpy implementation of binary search + return ey[numpy.searchsorted(ex, val, side='right') - 1]
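A quick worked example tying ``ecdf``, ``less_equal_ecdf`` and ``greater_equal_ecdf`` together (values chosen purely for illustration)::

    import numpy
    from csep.utils.stats import ecdf, less_equal_ecdf, greater_equal_ecdf

    x = numpy.array([1, 2, 2, 3, 5])
    xs, ys = ecdf(x)                  # ys ends at 1.0, so F(max(x)) == 1
    p_le = less_equal_ecdf(x, 2)      # P(X <= 2) = 3/5 = 0.6
    p_ge = greater_equal_ecdf(x, 3)   # P(X >= 3) = 2/5 = 0.4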
+ + +
+[docs] +def min_or_none(x): + """ + Given an array x, returns the min value. If x = [], returns None. + """ + if len(x) == 0: + return None + else: + return numpy.min(x)
+ + +
+[docs] +def max_or_none(x): + """ + Given an array x, returns the max value. If x = [], returns None. + """ + if len(x) == 0: + return None + else: + return numpy.max(x)
+ + +
+[docs]
+def get_quantiles(sim_counts, obs_count):
+    """ Computes delta1 and delta2 quantile scores from empirical distribution and observation """
+    # delta_1: probability of observing at least obs_count events given the forecast
+    delta_1 = greater_equal_ecdf(sim_counts, obs_count)
+    # delta_2: probability of observing at most obs_count events given the forecast
+    delta_2 = less_equal_ecdf(sim_counts, obs_count)
+    return delta_1, delta_2
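A sketch of how the quantile scores behave; in practice ``sim_counts`` would come from simulated catalogs, here it is synthetic::

    import numpy
    from csep.utils.stats import get_quantiles

    # 1000 hypothetical simulated event counts with mean 10
    sim_counts = numpy.random.default_rng(0).poisson(lam=10, size=1000)
    delta_1, delta_2 = get_quantiles(sim_counts, obs_count=14)
    # delta_1 ~ P(N >= 14 | forecast), delta_2 ~ P(N <= 14 | forecast)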
+ + +
+[docs]
+def poisson_log_likelihood(observation, forecast):
+    """ Wrapper around scipy to compute the Poisson log-likelihood
+
+    Args:
+        observation: observed (gridded) seismicity
+        forecast: forecast of a model (gridded)
+
+    Returns:
+        Log-likelihood values between binned observations and binned forecasts
+    """
+    return numpy.log(scipy.stats.poisson.pmf(observation, forecast))
+ + +
+[docs]
+def poisson_joint_log_likelihood_ndarray(target_event_log_rates, target_observations, n_fore):
+    """ Efficient calculation of the joint log-likelihood of a grid-based forecast.
+
+    Note: log(0!) = 0, so bins without target events add no penalty term.
+
+    Args:
+        target_event_log_rates: natural log of bin rates where target events occurred
+        target_observations: counts of target events
+        n_fore: expected number from the forecasts
+
+    Returns:
+        joint_log_likelihood
+    """
+    sum_log_target_event_rates = numpy.sum(target_event_log_rates)
+    # log(n!) = loggamma(n+1)
+    discrete_penalty_term = numpy.sum(scipy.special.loggamma(target_observations + 1))
+    return sum_log_target_event_rates - discrete_penalty_term - n_fore
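A numerical cross-check with illustrative values only: summing scipy's Poisson log-pmf over all bins reproduces the value returned by the vectorized helper when the per-event log-rates are expanded with ``numpy.repeat``::

    import numpy
    import scipy.stats
    from csep.utils.stats import poisson_joint_log_likelihood_ndarray

    rates = numpy.array([0.5, 1.2, 0.1, 2.0])   # hypothetical bin rates
    obs = numpy.array([1, 2, 0, 1])             # hypothetical observed counts

    # one log-rate per observed event, i.e. log(rate) repeated count times
    target_log_rates = numpy.repeat(numpy.log(rates[obs > 0]), obs[obs > 0])
    jll = poisson_joint_log_likelihood_ndarray(target_log_rates, obs, rates.sum())

    check = scipy.stats.poisson.logpmf(obs, rates).sum()
    assert numpy.isclose(jll, check)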
+ + +
+[docs]
+def poisson_inverse_cdf(random_matrix, lam):
+    """ Wrapper around the scipy inverse Poisson cdf function
+
+    Args:
+        random_matrix: matrix of dimensions equal to the forecast, containing
+            random numbers between 0 and 1.
+        lam: vector of parameters for the Poisson distribution
+
+    Returns:
+        sample from the Poisson distribution
+    """
+    return scipy.stats.poisson.ppf(random_matrix, lam)
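For example, inverse-transform sampling of per-bin Poisson counts could look like this (arbitrary rates, fixed seed)::

    import numpy
    from csep.utils.stats import poisson_inverse_cdf

    lam = numpy.array([0.5, 1.0, 2.0])                        # hypothetical bin rates
    u = numpy.random.default_rng(1).uniform(size=lam.shape)   # uniforms on [0, 1)
    sample = poisson_inverse_cdf(u, lam)                      # one simulated count per bin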
+
+
+def get_Kagan_I1_score(forecasts, catalog):
+    """
+    A program for scoring (I_1) earthquake-forecast grids by the methods of:
+    Kagan, Yan Y. [2009] Testing long-term earthquake forecasts: likelihood methods
+    and error diagrams, Geophys. J. Int., v. 177, pages 532-542.
+
+    Some advantages of these methods are that they:
+        - are insensitive to the grid used to cover the Earth;
+        - are insensitive to changes in the overall seismicity rate;
+        - do not modify the locations or magnitudes of test earthquakes;
+        - do not require simulation of virtual catalogs;
+        - return relative quality measures, not just "pass" or "fail;" and
+        - indicate relative specificity of forecasts as well as relative success.
+
+    Written by Han Bao, UCLA, March 2021. Modified June 2021.
+
+    Note that:
+        (1) The testing catalog and forecast should have exactly the same time-window (duration)
+        (2) Forecasts and catalogs have identical regions
+
+    Args:
+        forecasts: csep.forecast or a list of csep.forecast (one catalog to test against different forecasts)
+        catalog: csep.catalog
+
+    Returns:
+        I_1 (numpy.array): containing I_1 for each forecast in the input
+    """
+    # Determine whether the input 'forecasts' is a list of forecasts or a single forecast
+    try:
+        n_forecast = len(forecasts)  # the input is a list of csep.forecast objects
+    except TypeError:
+        n_forecast = 1  # the input is a single csep.forecast
+        forecasts = [forecasts]
+
+    # Sanity checks
+    for forecast in forecasts:
+        if forecast.region != catalog.region:
+            raise RuntimeError("Catalog and forecasts must have identical regions.")
+
+    # Initialize array
+    I_1 = numpy.zeros(n_forecast, dtype=numpy.float64)
+
+    # Compute cell areas
+    area_km2 = catalog.region.get_cell_area()
+    total_area = numpy.sum(area_km2)
+
+    for j, forecast in enumerate(forecasts):
+        # Events per cell per duration in forecast; note, if called on a
+        # CatalogForecast this could require computing expected rates
+        rate = forecast.spatial_counts()
+        # Get the rate density and uniform forecast of the forecast
+        rate_den = rate / area_km2
+        uniform_forecast = numpy.sum(rate) / total_area
+        # Compute I_1 score
+        n_event = catalog.event_count
+        counts = catalog.spatial_counts()
+        non_zero_idx = numpy.argwhere(rate_den > 0)
+        non_zero_idx = non_zero_idx[:, 0]
+        I_1[j] = numpy.dot(counts[non_zero_idx],
+                           numpy.log2(rate_den[non_zero_idx] / uniform_forecast)) / n_event
+
+    return I_1
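A hypothetical end-to-end sketch of scoring a gridded forecast; the forecast file, dates, and magnitude threshold are placeholders, and the ComCat query needs an internet connection::

    import csep
    from csep.utils import datasets, time_utils
    from csep.utils.stats import get_Kagan_I1_score

    start = time_utils.strptime_to_utc_datetime('2010-01-01 00:00:00.0')  # arbitrary window
    end = time_utils.strptime_to_utc_datetime('2015-01-01 00:00:00.0')
    forecast = csep.load_gridded_forecast(datasets.helmstetter_mainshock_fname,
                                          start_date=start, end_date=end)
    catalog = csep.query_comcat(start, end, min_magnitude=forecast.min_magnitude)
    catalog = catalog.filter_spatial(forecast.region)

    I1 = get_Kagan_I1_score(forecast, catalog)  # information gain per event vs. uniform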
\ No newline at end of file
diff --git a/_modules/csep/utils/time_utils.html b/_modules/csep/utils/time_utils.html
new file mode 100644
index 00000000..f2e40c5f
--- /dev/null
+++ b/_modules/csep/utils/time_utils.html
@@ -0,0 +1,483 @@
+csep.utils.time_utils — pyCSEP v0.6.3 documentation
Source code for csep.utils.time_utils
+import calendar
+import datetime
+import re
+import os
+import warnings
+from csep.utils.constants import SECONDS_PER_ASTRONOMICAL_YEAR, SECONDS_PER_DAY
+
+
+
+[docs] +def epoch_time_to_utc_datetime(epoch_time_milli): + """ + Accepts an epoch_time in milliseconds the UTC timezone and returns a python datetime object. + + See https://docs.python.org/3/library/datetime.html#datetime.datetime.fromtimestamp for information + about how timezones are handled with this function. + + :param epoch_time: epoch_time in UTC timezone in milliseconds + :type epoch_time: float + """ + if epoch_time_milli is None: + return epoch_time_milli + + epoch_time = epoch_time_milli / 1000 + + if os.name == "nt" and epoch_time < 0: + + if isinstance(epoch_time, int): + sec = epoch_time + milli_sec = 0 + else: + whole, frac = str(epoch_time).split(".") + sec = int(whole) + milli_sec = int(frac) * -1 + dt = datetime.datetime(1970, 1, 1) + datetime.timedelta( + seconds=sec, + milliseconds=milli_sec + ) + else: + dt = datetime.datetime.fromtimestamp(epoch_time, datetime.timezone.utc) + + return dt
+ + +
+[docs] +def datetime_to_utc_epoch(dt): + """ + Converts python datetime.datetime into epoch_time in milliseconds. + + + Args: + dt (datetime.datetime): python datetime object, should be naive. + """ + if dt is None: + return dt + + if dt.tzinfo is None: + dt=dt.replace(tzinfo=datetime.timezone.utc) + + if str(dt.tzinfo) != 'UTC': + raise ValueError(f"Timezone info must be UTC. tzinfo={dt.tzinfo}") + + epoch = datetime.datetime(1970, 1, 1, 0, 0, 0, 0).replace(tzinfo=datetime.timezone.utc) + epoch_time_seconds = (dt - epoch).total_seconds() + return int(1000.0 * epoch_time_seconds)
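A round-trip check of the two converters; epoch times are integer milliseconds since 1970-01-01::

    import datetime
    from csep.utils.time_utils import (datetime_to_utc_epoch,
                                       epoch_time_to_utc_datetime)

    dt = datetime.datetime(2020, 1, 1, 12, 0, 0, tzinfo=datetime.timezone.utc)
    epoch = datetime_to_utc_epoch(dt)               # 1577880000000
    assert epoch_time_to_utc_datetime(epoch) == dt  # round-trip recovers dt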
+ + +
+[docs] +def millis_to_days(millis): + """ Converts time in millis to days """ + return millis / SECONDS_PER_DAY / 1000
+ + +
+[docs] +def days_to_millis(days): + """ Converts days to millis """ + return days * SECONDS_PER_DAY * 1000
+ + +
+[docs] +def strptime_to_utc_epoch(time_string, format="%Y-%m-%d %H:%M:%S.%f"): + """ Returns epoch time from formatted time string """ + if format == "%Y-%m-%d %H:%M:%S.%f": + format = parse_string_format(time_string) + dt = strptime_to_utc_datetime(time_string, format) + return datetime_to_utc_epoch(dt)
+ + +
+[docs] +def timedelta_from_years(time_in_years): + """ + Returns python datetime.timedelta object based on the astronomical year in seconds. + + Args: + time_in_years: positive fraction of years 0 <= time_in_years + """ + if time_in_years < 0: + raise ValueError("time_in_years must be greater than zero.") + + seconds = SECONDS_PER_ASTRONOMICAL_YEAR * time_in_years + time_delta = datetime.timedelta(seconds=seconds) + return time_delta
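For instance, assuming the package's astronomical-year constant of 365.25 days, the returned timedelta carries a fractional-day remainder::

    from csep.utils.time_utils import timedelta_from_years

    td = timedelta_from_years(1.0)
    print(td.days)      # 365
    print(td.seconds)   # 21600 (= 0.25 day)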
+ + +
+[docs] +def strptime_to_utc_datetime(time_string, format="%Y-%m-%d %H:%M:%S.%f"): + """ + Converts time_string with format into time-zone aware datetime object in the UTC timezone. + + Note: + If the time_string is not in UTC time, it will be converted into UTC timezone. + + Args: + time_string (str): string representation of datetime + format (str): format of time_string + + Returns: + datetime.datetime: timezone aware (utc) object from time_string + """ + # if default format is provided, try and handle some annoying cases with fractional seconds and time-zone info + if format == "%Y-%m-%d %H:%M:%S.%f": + format = parse_string_format(time_string) + dt = datetime.datetime.strptime(time_string, format).replace(tzinfo=datetime.timezone.utc) + return dt
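A short parsing sketch; because of ``parse_string_format``, the default format argument handles fractional seconds automatically::

    from csep.utils.time_utils import strptime_to_utc_datetime

    dt = strptime_to_utc_datetime('2019-07-06 03:19:53.04')
    print(dt.tzinfo)   # UTC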
+ + +
+[docs] +def utc_now_datetime(): + """ Returns current datetime """ + return datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc)
+ + +
+[docs] +def utc_now_epoch(): + """ Returns current epoch time """ + return datetime_to_utc_epoch(datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc))
+ + +
+[docs]
+def create_utc_datetime(dt):
+    """Creates a TZ-aware UTC datetime object from a naive datetime object."""
+    assert dt.tzinfo is None
+    return dt.replace(tzinfo=datetime.timezone.utc)
+ + +def parse_string_format(time_string): + """ Fixes some difficulties with different time formats """ + format = "%Y-%m-%d %H:%M:%S" + if '.' in time_string: + format = "%Y-%m-%d %H:%M:%S.%f" + if time_string[-6] == '+': + format = format + "%z" + return format + +class Specifier(str): + """Model %Y and such in `strftime`'s format string.""" + def __new__(cls, *args): + self = super(Specifier, cls).__new__(cls, *args) + assert self.startswith('%') + assert len(self) == 2 + self._regex = re.compile(r'(%*{0})'.format(str(self))) + return self + + def ispresent_in(self, format): + m = self._regex.search(format) + return m and m.group(1).count('%') & 1 # odd number of '%' + + def replace_in(self, format, by): + def repl(m): + n = m.group(1).count('%') + if n & 1: # odd number of '%' + prefix = '%' * (n - 1) if n > 0 else '' + return prefix + str(by) # replace format + else: + return m.group(0) # leave unchanged + return self._regex.sub(repl, format) + +class HistoricTime(datetime.datetime): + + def strftime(self, format): + year = self.year + if year >= 1900: + return super(HistoricTime, self).strftime(format) + assert year < 1900 + factor = (1900 - year - 1) // 400 + 1 + future_year = year + factor * 400 + assert future_year > 1900 + + format = Specifier('%Y').replace_in(format, year) + result = self.replace(year=future_year).strftime(format) + if any(f.ispresent_in(format) for f in map(Specifier, ['%c', '%x'])): + msg = "'%c', '%x' produce unreliable results for year < 1900" + warnings.warn(msg) + result = result.replace(str(future_year), str(year)) + assert (future_year % 100) == (year % + 100) # last two digits are the same + return result + +
+[docs] +def decimal_year(test_date): + """ Convert given test date to the decimal year representation. + + Repurposed from CSEP1 Author: Masha Liukis + + Args: + test_date (datetime.datetime) + """ + + if test_date is None: + return None + + # This implementation is based on the Matlab version of the 'decyear' + # function that was inherited from RELM project + hours_per_day = 24.0 + mins_per_day = hours_per_day * 60.0 + secs_per_day = mins_per_day * 60.0 + + # Get number of days in the year of specified test date + num_days_per_year = 365.0 + if calendar.isleap(test_date.year): + num_days_per_year = 366.0 + + # Compute number of days in months preceding the test date + # (excluding the month of the test date) + num_days = sum([calendar.monthrange(test_date.year, i)[1] for i in range(1, test_date.month)]) + + dec_year = test_date.year + (num_days + (test_date.day - 1) + + test_date.hour / hours_per_day + + test_date.minute / mins_per_day + + (test_date.second + test_date.microsecond * 1e-6) / secs_per_day) / num_days_per_year + return dec_year
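A worked example: in the leap year 2020, 182 days precede July 1 out of 366, so the decimal year is 2020 + 182/366::

    import datetime
    from csep.utils.time_utils import decimal_year

    dy = decimal_year(datetime.datetime(2020, 7, 1))
    print(round(dy, 4))   # 2020.4973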
+
+
+def decimal_year_to_utc_datetime(decimal_date):
+    """ Takes a year specified as a decimal year and returns a datetime object.
+
+    Args:
+        decimal_date: decimal year in the format YEAR.YEAR_FRACTION. The
+            fraction accounts for leap years.
+    """
+    # Get number of days in the year of the specified date
+    year = decimal_date // 1
+    year_frac = decimal_date % 1
+    num_days_per_year = 365.0
+    if calendar.isleap(int(year)):
+        num_days_per_year = 366.0
+    num_hours_per_day = 24
+    num_minutes_per_hour = 60
+    num_seconds_per_minute = 60
+    num_microseconds_per_second = 1e6
+    microseconds_per_year = num_days_per_year * \
+                            num_hours_per_day * \
+                            num_minutes_per_hour * \
+                            num_seconds_per_minute * \
+                            num_microseconds_per_second
+    microseconds_into_year = microseconds_per_year * year_frac
+    # create time delta from microseconds
+    td = datetime.timedelta(microseconds=microseconds_into_year)
+    # create datetime for start of year
+    dt = datetime.datetime(int(year), 1, 1, 0, 0, 0, 0)
+    # combine to get datetime representation of date
+    final_dt = (dt + td).replace(tzinfo=datetime.timezone.utc)
+    return final_dt
+
+def decimal_year_to_utc_epoch(decimal_date):
+    """ Converts decimal year to the epoch time format used by catalogs.
+
+    Args:
+        decimal_date (float): date with format YEAR.X where 'X' is the fraction of the year. The fraction
+            considers leap years.
+
+    Returns:
+        epoch_time (int): time elapsed since Jan 01, 1970 in milliseconds
+    """
+    return datetime_to_utc_epoch(decimal_year_to_utc_datetime(decimal_date))
\ No newline at end of file
diff --git a/_modules/index.html b/_modules/index.html
new file mode 100644
index 00000000..d2cf6316
--- /dev/null
+++ b/_modules/index.html
@@ -0,0 +1,179 @@
+Overview: module code — pyCSEP v0.6.3 documentation
+ + + + \ No newline at end of file diff --git a/_sources/concepts/catalogs.rst.txt b/_sources/concepts/catalogs.rst.txt new file mode 100644 index 00000000..fb351d83 --- /dev/null +++ b/_sources/concepts/catalogs.rst.txt @@ -0,0 +1,354 @@ +.. _catalogs-reference: + +######## +Catalogs +######## + +PyCSEP provides routines for working with and manipulating earthquake catalogs for the purposes of evaluating earthquake +forecasting models. + +If you are able to make use of these tools for other reasons, please let us know. We are especially interested in +including basic catalog statistics into this package. If you are interested in helping implement routines like +b-value estimation and catalog completeness that would be much appreciated. + +.. contents:: Table of Contents + :local: + :depth: 2 + +************ +Introduction +************ + +PyCSEP catalog basics +===================== + +An earthquake catalog contains a collection of seismic events each defined by a set of attributes. PyCSEP implements +a simple event description that is suitable for evaluating earthquake forecasts. In this format, every seismic event is +defined by its location in space (longitude, latitude, and depth), magnitude, and origin time. In addition, each event +can have an optional ``event_id`` as a unique identifier. + +PyCSEP provides :class:`csep.core.catalogs.CSEPCatalog` to represent an earthquake catalog. The essential event data are stored in a +`structured NumPy array `_ with the following data type. :: + + dtype = numpy.dtype([('id', 'S256'), + ('origin_time', '`_ and make a pull request so +we can include this in the next release. + +Catalog as Pandas dataframes +============================ + +You might be comfortable using Pandas dataframes to manipulate tabular data. PyCSEP provides some routines for accessing +catalogs as a :class:`pandas.DataFrame`. You can use ``df = catalog.to_dataframe(with_datetimes=True)`` to return the +DataFrame representation of the catalog. Using the ``catalog = CSEPCatalog.from_dataframe(df)`` you can return back to the +PyCSEP data model. + +.. note:: + + Going between a DataFrame and CSEPCatalog is a lossy transformation. It essentially only retains the essential event + attributes that are defined by the ``dtype`` of the class. + +**************** +Loading catalogs +**************** + +Load catalogs from files +======================== + +You can easily load catalogs in the supported format above using :func:`csep.load_catalog`. This function provides a +top-level function to load catalogs that are currently supported by PyCSEP. You must specify the type of the catalog and +the format you want it to be loaded. The type of the catalog can be: :: + + catalog_type = ('ucerf3', 'csep-csv', 'zmap', 'jma-csv', 'ndk') + catalog_format = ('csep', 'native') + +The catalog type determines which reader :mod:`csep.utils.readers` will be used to load in the file. The default is the +``csep-csv`` type and the ``native`` format. The ``jma-csv`` format can be created using the ``./bin/deck2csv.pl`` +Perl script. + +.. note:: + The format is important for ``ucerf3`` catalogs, because those are stored as big endian binary numbers by default. + If you are working with ``ucerf3-etas`` catalogs and would like to convert them into the CSEPCatalog format you can + use the ``format='csep'`` option when loading in a catalog or catalogs. + +Load catalogs from ComCat +========================= + +PyCSEP provides top-level functions to load catalogs using ComCat. 
We incorporated the work done by Mike Hearne and +others from the U.S. Geological Survey into PyCSEP in an effort to reduce the dependencies of this project. The top-level +access to ComCat catalogs can be accessed from :func:`csep.query_comcat`. Some lower level functionality can be accessed +through the :mod:`csep.utils.comcat` module. All credit for this code goes to the U.S. Geological Survey. + +:ref:`Here` is complete example of accessing the ComCat catalog. + +Writing custom loader functions +=============================== + +You can easily add custom loader functions to import data formats that are not currently included with the PyCSEP tools. +Both :meth:`csep.core.catalogs.CSEPCatalog.load_catalog` and :func:`csep.load_catalog` support an optional argument +called ``loader`` to support these custom data formats. + +In the simplest form the function should have the following stub: :: + + def my_custom_loader_function(filename): + """ Custom loader function for catalog data. + + Args: + filename (str): path to the file containing the path to the forecast + + Returns: + eventlist: iterable of event data with the order: + (event_id, origin_time, latitude, longitude, depth, magnitude) + """ + + # imagine there is some logic to read in data from filename + + return eventlist + +This function can then be passed to :func:`csep.load_catalog` or :meth:`CSEPCatalog.load_catalog` +with the ``loader`` keyword argument. The function should be passed as a first-class object like this: :: + + import csep + my_custom_catalog = csep.load_catalog(filename, loader=my_custom_loader_function) + +.. note:: + The origin_time is actually an integer time. We recommend to parse the timing information as a + :class:`datetime.datetime` object and use the :func:`datetime_to_utc_epoch` + function to convert this to an integer time. + +Notice, we did not actually call the function but we just passed it as a reference. These functions can also access +web-based catalogs like we implement with the :func:`csep.query_comcat` function. This function doesn't work with either +:func:`csep.load_catalog` or :meth:`CSEPCatalog.load_catalog`, +because these are intended for file-based catalogs. Instead, we can create the catalog object directly. +We would do that like this :: + + def my_custom_web_loader(...): + """ Accesses catalog from online data source. + + There are no requirements on the arguments if you are creating the catalog directly from the class. + + Returns: + eventlist: iterable of event data with the order: + (event_id, origin_time, latitude, longitude, depth, magnitude) + """ + + # custom logic to access online catalog repository + + return eventlist + +As you might notice, all loader functions are required to return an event-list. This event-list must be iterable and +contain the required event data. + +.. note:: + The events in the eventlist should follow the form :: + + eventlist = my_custom_loader_function(...) + + event = eventlist[0] + + event[0] = event_id + # see note above about using integer times + event[1] = origin_time + event[2] = latitude + event[3] = longitude + event[4] = depth + event[5] = magnitude + + +Once you have a function that returns an eventlist, you can create the catalog object directly. This uses the +:class:`csep.core.catalogs.CSEPCatalog` as an example. :: + + import csep + + eventlist = my_custom_web_loader(...) + catalog = csep.catalogs.CSEPCatalog(data=eventlist, **kwargs) + +The **kwargs represents any other keyword argument that can be passed to +:class:`CSEPCatalog`. 
+
+Including custom event metadata
+===============================
+
+Catalogs can include additional metadata associated with each event. Right now, there are no direct applications for
+event metadata. Nonetheless, it can be included with a catalog object.
+
+The event metadata should be a dictionary where the keys are the ``event_id`` of the individual events. For example, ::
+
+    event_id = 'my_dummy_id'
+    metadata_dict = catalog.metadata[event_id]
+
+Each event's metadata should be a JSON-serializable dictionary or a class that implements the to_dict() and from_dict() methods. This is
+required to properly save the catalog files into JSON format and verify whether two catalogs are the same. You can see
+the :meth:`to_dict` and :meth:`from_dict` methods for an
+example of how these would work.
+
+***************************
+Accessing Event Information
+***************************
+
+In order to utilize the low-level acceleration from Numpy, most catalog operations are vectorized. The catalog classes
+provide some getter methods to access the essential catalog data. These return :class:`numpy.ndarray` arrays with the
+``dtype`` defined by the class.
+
+.. automodule:: csep.core.catalogs
+
+The following functions return :class:`numpy.ndarray` objects containing the catalog information.
+
+.. autosummary::
+
+   CSEPCatalog.event_count
+   CSEPCatalog.get_magnitudes
+   CSEPCatalog.get_longitudes
+   CSEPCatalog.get_latitudes
+   CSEPCatalog.get_depths
+   CSEPCatalog.get_epoch_times
+   CSEPCatalog.get_datetimes
+   CSEPCatalog.get_cumulative_number_of_events
+
+The catalog data can be iterated through event-by-event using a standard for-loop. For example, we can do something
+like ::
+
+    for event in catalog.data:
+        print(
+            event['id'],
+            event['origin_time'],
+            event['latitude'],
+            event['longitude'],
+            event['depth'],
+            event['magnitude']
+        )
+
+The keywords for the event tuple are defined by the ``dtype`` of the class. The keywords for
+:class:`CSEPCatalog` are shown in the snippet directly above. For example, a quick and
+dirty plot of the cumulative events over time can be made using the :mod:`matplotlib.pyplot` interface ::
+
+    import csep
+    import matplotlib.pyplot as plt
+
+    # lets assume we already loaded in some catalog
+    catalog = csep.load_catalog("my_catalog_path.csv")
+
+    # quick and dirty plot
+    fig, ax = plt.subplots()
+    ax.plot(catalog.get_epoch_times(), catalog.get_cumulative_number_of_events())
+    plt.show()
+
+****************
+Filtering events
+****************
+
+Most of the catalog files (or catalogs accessed via the web) contain more events than are desired for a given use case.
+PyCSEP provides a few routines to help filter unwanted events out of the catalog.
+
+.. autosummary::
+
+   CSEPCatalog.filter
+   CSEPCatalog.filter_spatial
+   CSEPCatalog.apply_mct
+
+Filtering events by attribute
+=============================
+
+The function :meth:`CSEPCatalog.filter` provides the ability to filter events
+based on their essential attributes. This function works by parsing filtering strings and applying them using a logical
+`and` operation. The filter strings have the following format: ``filter_string = f"{attribute} {operator} {value}"``. The
+filter strings represent a statement that would evaluate as `True` after they are applied. For example, the statement
+``catalog.filter('magnitude >= 2.5')`` would retain all events in the catalog with magnitude greater than or equal to 2.5.
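+
+A short sketch of applying several filter statements at once (the file path is a placeholder): ::
+
+    import csep
+
+    catalog = csep.load_catalog("my_catalog_path.csv")
+
+    # multiple filter statements are combined with a logical AND
+    filtered_catalog = catalog.filter(['magnitude >= 2.5', 'depth <= 30.0'])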
+
+The attributes are determined by the dtype of the catalog, therefore you can filter based on ``origin_time``,
+``latitude``, ``longitude``, ``depth``, and ``magnitude``. Additionally, you can use the attribute ``datetime`` and
+provide a :class:`datetime.datetime` object to filter events using that data type.
+
+The filter function can accept a string or a list of filter statements. If the function is called without any arguments,
+it looks to use the ``catalog.filters`` member. This can be provided during class instantiation or bound
+to the class afterward. :ref:`Here` is a complete example of how to filter a catalog
+using the filtering strings.
+
+Filtering events in space
+=========================
+
+You might want to supply a non-rectangular polygon that can be used to filter events in space. This is commonly done
+to prepare an observed catalog for forecast evaluation. Right now, this can be accomplished by supplying a
+:class:`region` to the catalog or to
+:meth:`filter_spatial`. There will be more information about using regions
+on the :ref:`user-guide page`. The :ref:`catalog filtering` tutorial contains
+a complete example of how to filter a catalog using a user-defined aftershock region based on the M7.1 Ridgecrest
+mainshock.
+
+Time-dependent magnitude of completeness
+========================================
+
+Seismic networks have difficulty recording events immediately after a large event occurs, because the passing seismic waves
+from the larger event become mixed with any potential smaller events. Usually when we evaluate an aftershock forecast, we should
+account for this time-dependent magnitude of completeness. PyCSEP provides the
+:ref:`Helmstetter et al., [2006]` implementation of the time-dependent magnitude completeness model.
+
+This requires information about an event, which can be supplied directly to :meth:`apply_mct`.
+Additionally, PyCSEP provides access to the ComCat API using :func:`get_event_by_id`.
+An example of this can be seen in the :ref:`filtering catalog tutorial`.
+
+
+**************
+Binning Events
+**************
+
+Another common task requires binning earthquakes by their spatial locations and magnitudes. This is routinely done when
+evaluating earthquake forecasts. Like filtering a catalog in space, you need to provide some information about the region
+that will be used for the binning. Please see the :ref:`user-guide page` for more information about
+regions.
+
+.. note::
+    We would like to make this functionality more user friendly. If you have suggestions or struggles, please open an issue
+    on the GitHub page and we'd be happy to incorporate these ideas into the toolkit.
+
+The following functions allow binning of catalogs using space-magnitude regions.
+
+.. autosummary::
+
+   CSEPCatalog.spatial_counts
+   CSEPCatalog.magnitude_counts
+   CSEPCatalog.spatial_magnitude_counts
+
+These functions return :class:`numpy.ndarray` objects containing the counts of the events determined from the
+catalogs. The index of the ndarray corresponds to the index of the associated space-magnitude region. This example
+shows how to obtain magnitude counts from a catalog: ::
+
+    import csep
+    import numpy
+
+    catalog = csep.load_catalog("my_catalog_file")
+
+    # returns bin edges [2.5, 2.6, ..., 7.5]
+    bin_edges = numpy.arange(2.5, 7.55, 0.1)
+
+    magnitude_counts = catalog.magnitude_counts(mag_bins=bin_edges)
+
+In this example, ``magnitude_counts[0]`` is the number of events with 2.5 ≤ M < 2.6. All of the magnitude binning assumes
+that the final bin extends to infinity, therefore ``magnitude_counts[-1]`` contains the number of events with
+7.5 ≤ M < ∞.
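+
+As a quick sketch of what can be done with these counts (the file path is a placeholder), a rough incremental
+magnitude-frequency plot could be made like this: ::
+
+    import numpy
+    import matplotlib.pyplot as plt
+    import csep
+
+    catalog = csep.load_catalog("my_catalog_file")
+
+    bin_edges = numpy.arange(2.5, 7.55, 0.1)
+    magnitude_counts = catalog.magnitude_counts(mag_bins=bin_edges)
+
+    # incremental magnitude-frequency distribution on a logarithmic axis
+    fig, ax = plt.subplots()
+    ax.semilogy(bin_edges, magnitude_counts, marker='o', linestyle='')
+    ax.set_xlabel('Magnitude')
+    ax.set_ylabel('Incremental number of events')
+    plt.show()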
\ No newline at end of file
diff --git a/_sources/concepts/evaluations.rst.txt b/_sources/concepts/evaluations.rst.txt
new file mode 100644
index 00000000..05e94ad3
--- /dev/null
+++ b/_sources/concepts/evaluations.rst.txt
@@ -0,0 +1,271 @@
+.. _evaluation-reference:
+
+.. automodule:: csep.core.poisson_evaluations
+
+###########
+Evaluations
+###########
+
+PyCSEP provides routines to evaluate both gridded and catalog-based earthquake forecasts. This page explains how to use
+the forecast evaluation routines and also how to build "mock" forecast and catalog classes to accommodate different
+custom forecasts and catalogs.
+
+.. contents:: Table of Contents
+    :local:
+    :depth: 2
+
+.. :currentmodule:: csep
+
+****************************
+Gridded-forecast evaluations
+****************************
+
+Grid-based earthquake forecasts assume earthquakes occur in discrete space-time-magnitude bins and their rate-of-
+occurrence can be defined using a single number in each magnitude bin. Each space-time-magnitude bin is assumed to be
+an independent Poisson random variable. Therefore, we use likelihood-based evaluation metrics to compare these
+forecasts against observations.
+
+PyCSEP provides two groups of evaluation metrics for grid-based earthquake forecasts. The first are known as
+consistency tests and they verify whether a forecast is consistent with an observation. The second are comparative tests
+that can be used to compare the performance of two (or more) competing forecasts.
+PyCSEP implements the following evaluation routines for grid-based forecasts. These functions are intended to work with
+:class:`GriddedForecast` and :class:`CSEPCatalog` objects.
+Visit the :ref:`catalogs reference` and the :ref:`forecasts reference` to learn
+more about how to import your forecasts and catalogs into PyCSEP.
+
+.. note::
+    Grid-based forecast evaluations act directly on the forecasts and catalogs as they are supplied to the function.
+    Any filtering of catalogs and/or scaling of forecasts must be done before calling the evaluation function, and
+    should be done consistently between all forecasts that are being compared.
+
+See the :ref:`example` for gridded forecast evaluation for an end-to-end walkthrough on how
+to evaluate a gridded earthquake forecast.
+
+
+Consistency tests
+=================
+
+.. autosummary::
+
+   number_test
+   magnitude_test
+   spatial_test
+   likelihood_test
+   conditional_likelihood_test
+
+
+Comparative tests
+=================
+
+.. autosummary::
+
+   paired_t_test
+   w_test
+
+Publication references
+======================
+
+1. Number test (:ref:`Schorlemmer et al., 2007`; :ref:`Zechar et al., 2010`)
+2. Magnitude test (:ref:`Zechar et al., 2010`)
+3. Spatial test (:ref:`Zechar et al., 2010`)
+4. Likelihood test (:ref:`Schorlemmer et al., 2007`; :ref:`Zechar et al., 2010`)
+5. Conditional likelihood test (:ref:`Werner et al., 2011`)
+6. Paired t test (:ref:`Rhoades et al., 2011`)
+7. Wilcoxon signed-rank test (:ref:`Rhoades et al., 2011`)
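+
+To make this concrete, here is a minimal sketch of calling a few of the consistency tests listed above. It assumes a
+``forecast`` (:class:`GriddedForecast`) and a prepared evaluation ``catalog`` already exist; the seed value is
+arbitrary: ::
+
+    from csep.core import poisson_evaluations as poisson
+    from csep.utils import plots
+
+    number_result = poisson.number_test(forecast, catalog)
+    magnitude_result = poisson.magnitude_test(forecast, catalog, seed=123456)
+    spatial_result = poisson.spatial_test(forecast, catalog, seed=123456)
+
+    # plot the consistency test results together
+    ax = plots.plot_poisson_consistency_test(
+        [number_result, magnitude_result, spatial_result],
+        one_sided_lower=True
+    )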
+
+**********************************
+Catalog-based forecast evaluations
+**********************************
+
+Catalog-based forecasts are issued as a family of stochastic event sets (synthetic earthquake catalogs) and can express
+the full uncertainty of the forecasting model. Additionally, these forecasts retain the inter-event dependencies that
+are lost when using discrete space-time-magnitude grids. This loss of information can impact the evaluation of
+time-dependent forecasts like the epidemic-type aftershock sequence (ETAS) model.
+
+In order to support generative or simulator-based models, we define a suite of consistency tests that compare forecasted
+distributions against observations without the use of a parametric likelihood function. These evaluations take advantage
+of the fact that the forecast and the observations are both earthquake catalogs. Therefore, we can compute identical
+statistics from these catalogs and compare them against one another.
+
+We provide four statistics that probe fundamental aspects of the earthquake forecasts. Please see
+:ref:`Savran et al., 2020` for a complete description of the individual tests. For the implementation
+details please follow the links below and see the :ref:`example` for catalog-based
+forecast evaluation for an end-to-end walkthrough.
+
+.. automodule:: csep.core.catalog_evaluations
+
+Consistency tests
+=================
+
+.. autosummary::
+
+   number_test
+   spatial_test
+   magnitude_test
+   pseudolikelihood_test
+   calibration_test
+
+Publication reference
+=====================
+
+1. Number test (:ref:`Savran et al., 2020`)
+2. Spatial test (:ref:`Savran et al., 2020`)
+3. Magnitude test (:ref:`Savran et al., 2020`)
+4. Pseudolikelihood test (:ref:`Savran et al., 2020`)
+5. Calibration test (:ref:`Savran et al., 2020`)
+
+****************************
+Preparing evaluation catalog
+****************************
+
+The evaluations in PyCSEP do not implicitly filter the observed catalogs or modify the forecast data when called. For most
+cases, the observation catalog should be filtered according to:
+    1. Magnitude range of the forecast
+    2. Spatial region of the forecast
+    3. Start and end-time of the forecast
+
+Once the observed catalog is filtered so that it is consistent with the forecast in space, time, and magnitude, it can be used
+to evaluate a forecast. A single evaluation catalog can be used to evaluate multiple forecasts so long as they all cover
+the same space, time, and magnitude region.
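+
+A minimal sketch of this preparation, assuming ``forecast`` is an existing :class:`GriddedForecast` and ``catalog``
+is the raw observed catalog: ::
+
+    from csep.utils.time_utils import datetime_to_utc_epoch
+
+    start_epoch = datetime_to_utc_epoch(forecast.start_time)
+    end_epoch = datetime_to_utc_epoch(forecast.end_time)
+
+    # 1. magnitude range and 3. start and end-time of the forecast
+    catalog = catalog.filter([
+        f'origin_time >= {start_epoch}',
+        f'origin_time < {end_epoch}',
+        f'magnitude >= {forecast.min_magnitude}',
+    ])
+
+    # 2. spatial region of the forecast
+    catalog = catalog.filter_spatial(forecast.region)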
+
+*********************
+Building mock classes
+*********************
+
+Python is a duck-typed language, which means that it doesn't care what the object type is, only that the object has the
+methods or functions that are expected when it is used. This can come in handy if you want to use the evaluation methods, but
+do not have a forecast that completely fits with the forecast classes (or catalog classes) provided by PyCSEP.
+
+.. note::
+    Something about great power and great responsibility... For the most reliable results, write a loader function that
+    can ingest your forecast into the model provided by PyCSEP. Mock-classes can work, but should only be used in certain
+    circumstances. In particular, they are very useful for writing software tests or to prototype features that can
+    be added into the package.
+
+This section will walk you through how to compare two forecasts using the :func:`paired_t_test`
+with mock forecast and catalog classes. This sounds much more complex than it really is, and it gives you the flexibility
+to use your own formats and interact with the tools provided by PyCSEP.
+
+.. warning::
+
+    The simulation-based Poisson tests (magnitude_test, likelihood_test, conditional_likelihood_test, and spatial_test)
+    are optimized to work with forecasts that contain equal-sized spatial bins. If your forecast uses variable-sized spatial
+    bins you will get incorrect results. If you are working with forecasts that have variable spatial bins, create an
+    issue on GitHub, because we'd like to implement this feature into the toolkit and we'd love your help.
+
+If we look at :func:`paired_t_test` we see that it contains the following code ::
+
+    def paired_t_test(gridded_forecast1, gridded_forecast2, observed_catalog, alpha=0.05, scale=False):
+        """ Computes the t-test for gridded earthquake forecasts.
+
+        Args:
+            gridded_forecast_1 (csep.core.forecasts.GriddedForecast): nd-array storing gridded rates, axis=-1 should be the magnitude column
+            gridded_forecast_2 (csep.core.forecasts.GriddedForecast): nd-array storing gridded rates, axis=-1 should be the magnitude column
+            observed_catalog (csep.core.catalogs.AbstractBaseCatalog): number of observed earthquakes, should be whole number and >= zero.
+            alpha (float): tolerance level for the type-i error rate of the statistical test
+            scale (bool): if true, scale forecasted rates down to a single day
+
+        Returns:
+            evaluation_result: csep.core.evaluations.EvaluationResult
+        """
+
+        # needs some pre-processing to put the forecasts in the context that is required for the t-test. this is different
+        # for cumulative forecasts (eg, multiple time-horizons) and static file-based forecasts.
+        target_event_rate_forecast1, n_fore1 = gridded_forecast1.target_event_rates(observed_catalog, scale=scale)
+        target_event_rate_forecast2, n_fore2 = gridded_forecast2.target_event_rates(observed_catalog, scale=scale)
+
+        # call the primitive version operating on ndarray
+        out = _t_test_ndarray(target_event_rate_forecast1, target_event_rate_forecast2, observed_catalog.event_count, n_fore1, n_fore2,
+                              alpha=alpha)
+
+        # prepare evaluation result object
+        result = EvaluationResult()
+        result.name = 'Paired T-Test'
+        result.test_distribution = (out['ig_lower'], out['ig_upper'])
+        result.observed_statistic = out['information_gain']
+        result.quantile = (out['t_statistic'], out['t_critical'])
+        result.sim_name = (gridded_forecast1.name, gridded_forecast2.name)
+        result.obs_name = observed_catalog.name
+        result.status = 'normal'
+        result.min_mw = numpy.min(gridded_forecast1.magnitudes)
+
+Notice that the function expects two forecast objects and one catalog object. The ``paired_t_test`` function calls a
+method on the forecast objects named :meth:`target_event_rates`
+that returns a tuple (:class:`numpy.ndarray`, float) consisting of the target event rates and the expected number of events
+from the forecast.
+
+.. note::
+    The target event rate is the expected rate for an observed event in the observed catalog assuming that
+    the forecast is true. For a simple example, if we forecast a rate of 0.3 events per year in some bin of a forecast,
+    each event that occurs within that bin has a target event rate of 0.3 events per year. The expected number of events
+    in the forecast can be determined by summing over all bins in the gridded forecast.
+
+We can also see that the ``paired_t_test`` function uses ``gridded_forecast1.name`` and calls :func:`numpy.min`
+on ``gridded_forecast1.magnitudes``. Using this information, we can create a mock-class that implements these methods
+and can be used by this function.
+
+.. warning::
+    If you are creating mock-classes to use with evaluation functions, make sure that you visit the corresponding
+    documentation and source-code to make sure that your methods return values that are expected by the function. In
+    this case, it expects the tuple (target_event_rates, expected_forecast_count). This will not always be the case.
+    If you need help, please create an issue on the GitHub page.
+
+Here we show an implementation of a mock forecast class that can work with the
+:func:`paired_t_test` function. ::
+
+    class MockForecast:
+
+        def __init__(self, data=None, name='my-mock-forecast', magnitudes=(4.95,)):
+
+            # data is not necessary, but might be helpful for implementing target_event_rates(...)
+            self.data = data
+            self.name = name
+            # this should be an array, list, or tuple. it can be as simple as the default argument.
+            self.magnitudes = magnitudes
+
+        def target_event_rates(self, catalog, scale=None):
+            """ Notice we added the dummy argument scale. This function stub should match what is called in paired_t_test """
+
+            # whatever custom logic you need to return these target event rates given your catalog can go here
+            # of course, this should work with whatever catalog you decide to pass into this function
+
+            # this returns the tuple that paired_t_test expects
+            return (ndarray_of_target_event_rates, expected_number_of_events)
+
+You'll notice that :func:`paired_t_test` expects a catalog class. Looking back
+at the function definition we can see that it needs ``observed_catalog.event_count`` and ``observed_catalog.name``. Therefore
+the mock class for the catalog would look something like this ::
+
+    class MockCatalog:
+
+        def __init__(self, event_count, data=None, name='my-mock-catalog'):
+
+            # this is not necessary, but adding data might be helpful for implementing the
+            # logic needed for the target_event_rates(...) function in the MockForecast class.
+            self.data = data
+            self.name = name
+            self.event_count = event_count
+
+
+Now using these two objects you can call :func:`paired_t_test` directly
+without having to modify any of the source code. ::
+
+    # create your forecasts
+    mock_forecast_1 = MockForecast(some_forecast_data1)
+    mock_forecast_2 = MockForecast(some_forecast_data2)
+
+    # lets assume that catalog_data is an array that contains the catalog data
+    catalog = MockCatalog(len(catalog_data))
+
+    # call the function using your classes
+    eval_result = paired_t_test(mock_forecast_1, mock_forecast_2, catalog)
+
+The only requirement for this approach is that you implement the methods on the class that the calling function expects.
+You can add anything else that you need in order to make those functions work properly. This example is about
+as simple as it gets.
+
+.. note::
+
+    If you want to use mock-forecasts and mock-catalogs for other evaluations, you can just add the additional methods
+    that are needed onto the mock classes you have already built.
diff --git a/_sources/concepts/forecasts.rst.txt b/_sources/concepts/forecasts.rst.txt
new file mode 100644
index 00000000..dd4076d6
--- /dev/null
+++ b/_sources/concepts/forecasts.rst.txt
@@ -0,0 +1,193 @@
+.. _forecast-reference:
+
+#########
+Forecasts
+#########
+
+pyCSEP supports two types of earthquake forecasts that can be evaluated using the tools provided in this package.
+
+1. Grid-based forecasts
+2. Catalog-based forecasts
+
+These forecast types and the pyCSEP objects used to represent them will be explained in detail in this document.
+
+.. contents:: Table of Contents
+    :local:
+    :depth: 2
+
+*****************
+Gridded forecasts
+*****************
+
+Grid-based forecasts assume that earthquakes occur in independent and discrete space-time-magnitude bins. The occurrence
+of these earthquakes is described only by their expected rates. This forecast format provides a general representation
+of seismicity that can accommodate forecasts without explicit likelihood functions, such as those created using smoothed
+seismicity models. Gridded forecasts can also be produced using simulation-based approaches like
+epidemic-type aftershock sequence models.
+
+Currently, pyCSEP offers support for two types of grid-based forecasts, i.e. conventional gridded forecasts and quadtree-based gridded forecasts.
+Conventional grid-based forecasts define their spatial component using a 2D Cartesian (rectangular) grid, and
+their magnitude bins using a 1D Cartesian (rectangular) grid. The last (largest) magnitude bin is assumed to
+extend to infinity. Forecasts use latitude and longitude to define the bin edges of the spatial grid. Typical values
+are 0.1° x 0.1° (lat x lon) and 0.1 ΔMw units. These choices are not strictly enforced and can be defined
+according to the specifications of an experiment.
+
+pyCSEP also offers support for forecasts that use a quadtree approach. A single- or multi-resolution spatial grid can be
+generated based on the modeler's choice, and that grid can then be used to generate an earthquake forecast.
+
+
+Working with conventional gridded forecasts
+###########################################
+
+PyCSEP provides the :class:`GriddedForecast` class to handle working with
+grid-based forecasts. Please visit :ref:`this example` for an end-to-end tutorial on
+how to evaluate a grid-based earthquake forecast.
+
+.. autosummary:: csep.core.forecasts.GriddedForecast
+
+Default file format
+--------------------
+
+The default file format of a gridded forecast is a tab-delimited ASCII file with the following columns
+(names are not included): ::
+
+    LON_0   LON_1   LAT_0   LAT_1   DEPTH_0 DEPTH_1 MAG_0   MAG_1   RATE                    FLAG
+    -125.4  -125.3  40.1    40.2    0.0     30.0    4.95    5.05    5.8499099999999998e-04  1
+
+Each row represents a single space-magnitude bin and the entire forecast file contains the rate for a specified
+time-horizon. An example of a gridded forecast for the RELM testing region can be found
+`here `_.
+
+
+The coordinates (LON, LAT, DEPTH, MAG) describe the independent space-magnitude region of the forecast. The lower
+coordinates are inclusive and the upper coordinates are exclusive. Rates are incremental within the magnitude range
+defined by [MAG_0, MAG_1). The FLAG is a legacy value from CSEP testing centers that indicates whether a spatial cell should
+be considered by the forecast. Currently, the implementation does not allow for individual space-magnitude cells to be
+flagged. Thus, if a spatial cell is flagged then all corresponding magnitude cells are flagged.
+
+.. note::
+    PyCSEP only supports regions that have a thickness of one layer. In the future, we plan to support more complex regions,
+    including those that are defined using multiple depth regions. Multiple depth layers can be collapsed into a single
+    layer by summing. This operation does reduce the resolution of the forecast.
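+
+As a short sketch, a forecast in this format can be loaded and inspected like this (the bundled Helmstetter forecast
+file is used here purely as an example input): ::
+
+    import csep
+    from csep.utils import datasets
+
+    forecast = csep.load_gridded_forecast(datasets.helmstetter_aftershock_fname,
+                                          name='helmstetter_aftershock')
+
+    # rates are stored as a 2D array with shape (spatial bins, magnitude bins)
+    print(forecast.data.shape)
+    print(forecast.magnitudes)        # magnitude bin edges parsed from the file
+    print(forecast.region.num_nodes)  # number of spatial cells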
+
+Custom file format
+------------------
+
+The :meth:`GriddedForecast.from_custom` method allows you to provide
+a function that can read custom formats. This can be helpful, because writing this function might be required to convert
+the forecast into the appropriate format in the first place. This function has no requirements except that it returns the
+expected data.
+
+.. automethod:: csep.core.forecasts.GriddedForecast.from_custom
+
+
+Working with quadtree-gridded forecasts
+##############################################
+
+The same :class:`GriddedForecast` class also handles forecasts with
+quadtree grids. Please visit :ref:`this example` for an end-to-end tutorial on
+how to evaluate a grid-based earthquake forecast.
+
+.. autosummary:: csep.core.forecasts.GriddedForecast
+
+Default file format
+--------------------
+
+The default file format of a quadtree gridded forecast is also a tab-delimited ASCII file with the following columns. Just
+one additional column, the quadkey, is added to the format to identify the spatial cells.
+If the quadkey of each spatial cell is known, the lon/lat bounds can be computed from it. However, the lon/lat bounds are
+still kept in the default format for consistency with the conventional forecast format.
+
+(names are not included): ::
+
+    QUADKEY LON_0   LON_1   LAT_0   LAT_1   DEPTH_0 DEPTH_1 MAG_0   MAG_1   RATE                    FLAG
+    '01001' -125.4  -125.3  40.1    40.2    0.0     30.0    4.95    5.05    5.8499099999999998e-04  1
+
+Each row represents a single space-magnitude bin and the entire forecast file contains the rate for a specified
+time-horizon.
+
+The coordinates (LON, LAT, DEPTH, MAG) describe the independent space-magnitude region of the forecast. The lower
+coordinates are inclusive and the upper coordinates are exclusive. Rates are incremental within the magnitude range
+defined by [MAG_0, MAG_1). The FLAG is a legacy value from CSEP testing centers that indicates whether a spatial cell should
+be considered by the forecast. Please note that the flag functionality is not yet included for quadtree-gridded forecasts.
+
+PyCSEP offers the :func:`load_quadtree_forecast` function to read quadtree forecasts in the default format.
+Similarly, custom forecasts can be defined and read into pyCSEP as explained for conventional gridded forecasts.
+
+
+***********************
+Catalog-based forecasts
+***********************
+
+Catalog-based earthquake forecasts are issued as collections of synthetic earthquake catalogs. Every synthetic catalog
+represents a realization of the forecast that is representative of the uncertainty present in the model that generated
+the forecast. Unlike grid-based forecasts, catalog-based forecasts retain the space-magnitude dependency of the events
+they are trying to model. A grid-based forecast can be easily computed from a catalog-based forecast by assuming a
+space-magnitude region and counting events within each bin from each catalog in the forecast. There can be issues with
+undersampling, especially for larger magnitude events.
+
+Working with catalog-based forecasts
+####################################
+
+.. autosummary:: csep.core.forecasts.CatalogForecast
+
+Please visit :ref:`this` example for an end-to-end tutorial on how to evaluate a catalog-based
+earthquake forecast. An example of a catalog-based forecast stored in the default pyCSEP format can be found
+`here `_.
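+
+As a quick sketch, a forecast stored in the default format can be loaded with :func:`csep.load_catalog_forecast`
+(the file path below is a placeholder): ::
+
+    import csep
+
+    forecast = csep.load_catalog_forecast("my_forecast_file.csv")
+
+    # a catalog forecast can be iterated catalog-by-catalog
+    for synthetic_catalog in forecast:
+        print(synthetic_catalog.event_count)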
+
+The standard format for catalog-based forecasts is a comma-separated value (CSV) ASCII format. This format was chosen to be
+human-readable and easy to implement in all programming languages. Information about the format is shown below.
+
+.. note::
+    Custom formats can be supported by writing a custom function or sub-classing the
+    :ref:`AbstractBaseCatalog`.
+
+The event format matches the following specification: ::
+
+    LON, LAT, MAG, ORIGIN_TIME, DEPTH, CATALOG_ID, EVENT_ID
+    -125.4, 40.1, 3.96, 1992-01-05T0:40:3.1, 8, 0, 0
+
+Each row in the catalog corresponds to an event. The catalogs are expected to be placed into the same file and are
+differentiated through their `catalog_id`. Catalogs with no events can be handled in a couple of different ways intended to
+save storage.
+
+The events within a catalog should be sorted in time, and the *catalog_id* should be increasing sequentially. Breaks in
+the *catalog_id* are interpreted as missing catalogs.
+
+The following two examples show how you can represent a forecast with 5 catalogs each containing zero events.
+
+**1. Including all events (verbose)** ::
+
+    LON, LAT, MAG, ORIGIN_TIME, DEPTH, CATALOG_ID, EVENT_ID
+    ,,,,,0,
+    ,,,,,1,
+    ,,,,,2,
+    ,,,,,3,
+    ,,,,,4,
+
+**2. Short-hand** ::
+
+    LON, LAT, MAG, ORIGIN_TIME, DEPTH, CATALOG_ID, EVENT_ID
+    ,,,,,4,
+
+The following two examples show how you could represent a forecast with 5 catalogs. Four of the catalogs contain zero events
+and one catalog contains one event.
+
+**3. Including all events (verbose)** ::
+
+    LON, LAT, MAG, ORIGIN_TIME, DEPTH, CATALOG_ID, EVENT_ID
+    ,,,,,0,
+    ,,,,,1,
+    ,,,,,2,
+    ,,,,,3,
+    -125.4, 40.1, 3.96, 1992-01-05T0:40:3.1, 8, 4, 0
+
+**4. Short-hand** ::
+
+    LON, LAT, MAG, ORIGIN_TIME, DEPTH, CATALOG_ID, EVENT_ID
+    -125.4, 40.1, 3.96, 1992-01-05T0:40:3.1, 8, 4, 0
+
+The simplest way to organize the file follows (3) in the case where some catalogs contain zero events. Because the
+*catalog_id* is zero-indexed, the largest *catalog_id* in the file (4 in these examples) implies the total number of
+catalogs in the forecast (5). In the case where every catalog contains zero forecasted events, you would specify the
+forecast using (2), again with the final *catalog_id* assigned so that it corresponds with the total number of catalogs
+in the forecast.
+
diff --git a/_sources/concepts/plots.rst.txt b/_sources/concepts/plots.rst.txt
new file mode 100644
index 00000000..321b5e03
--- /dev/null
+++ b/_sources/concepts/plots.rst.txt
@@ -0,0 +1,27 @@
+.. _plots-reference:
+
+#####
+Plots
+#####
+
+PyCSEP provides several functions to produce commonly used plots, such as maps of an earthquake forecast or an evaluation
+catalog, or a combination of the two.
+
+.. contents:: Table of Contents
+    :local:
+    :depth: 2
+
+************
+Introduction
+************
+
+
+
+**************
+Plot arguments
+**************
+
+***************
+Available plots
+***************
+
diff --git a/_sources/concepts/regions.rst.txt b/_sources/concepts/regions.rst.txt
new file mode 100644
index 00000000..9a26fb12
--- /dev/null
+++ b/_sources/concepts/regions.rst.txt
@@ -0,0 +1,240 @@
+.. _regions-reference:
+
+#######
+Regions
+#######
+
+.. automodule:: csep.utils.basic_types
+
+PyCSEP includes commonly used CSEP testing regions and classes that facilitate working with gridded data sets. This
+module is early in its development and will be a focus of future work.
+
+.. contents:: Table of Contents
+    :local:
+    :depth: 2
+
+.. :currentmodule:: csep
+
+.. automodule:: csep.core.regions
+
+Practically speaking, earthquake forecasts, especially time-dependent forecasts, treat time differently than space and
+magnitude.
+If we consider a family of monthly forecasts for the state of California for earthquakes with **M** 3.95+,
+each of these forecasts would use the same space-magnitude region, even though the time periods are
+different. Because the time horizon is an implicit property of the forecast, we do not explicitly consider time in the region
+objects provided by pyCSEP. This module contains tools for working with gridded regions in both space and magnitude.
+
+First, we will describe how the spatial regions are handled, followed by the magnitude regions and how these two aspects
+interact with one another.
+
+.. **************
+.. Region objects
+.. **************
+
+Currently, pyCSEP provides two different spatial gridding approaches to handle binning catalogs and defining regions
+for earthquake forecasting evaluations, i.e. :class:`CartesianGrid2D` and :class:`QuadtreeGrid2D`.
+Further details about the spatial grids are given below.
+
+**************
+Cartesian grid
+**************
+
+This section contains information about using 2D Cartesian grids.
+
+.. autosummary::
+
+   CartesianGrid2D
+
+.. note::
+    We are planning to do some improvements to this module and to expand its capabilities. For example, we would like to
+    handle non-regular grids such as a quad-tree. Also, a single Polygon should be able to act as the spatial component
+    of the region. These additions will make this toolkit more useful for crafting bespoke experiments and for general
+    catalog analysis. Feature requests are always welcome!
+
+The :class:`CartesianGrid2D` acts as a data structure that can associate a spatial
+location (e.g., lon and lat) with its corresponding spatial bin. This class is optimized to work with regular grids,
+although they do not have to be complete (they can have holes) and they do not have to be rectangular (each row / column
+can have a different starting coordinate).
+
+The :class:`CartesianGrid2D` maintains a list of
+:class:`Polygon` objects that represent the individual spatial bins from the overall
+region. The origin of each polygon is considered to be the lower-left corner (the minimum latitude and minimum longitude).
+
+.. autosummary::
+
+   CartesianGrid2D.num_nodes
+   CartesianGrid2D.get_index_of
+   CartesianGrid2D.get_location_of
+   CartesianGrid2D.get_masked
+   CartesianGrid2D.get_cartesian
+   CartesianGrid2D.get_bbox
+   CartesianGrid2D.midpoints
+   CartesianGrid2D.origins
+   CartesianGrid2D.from_origins
+
+
+Creating spatial regions
+########################
+
+Here, we describe how the class works, starting with the class constructors. ::
+
+    @classmethod
+    def from_origins(cls, origins, dh=None, magnitudes=None, name=None):
+        """ Convenience function to create CartesianGrid2D from list of polygon origins """
+
+For most applications, using the :meth:`from_origins` function will be
+the easiest way to create a new spatial region. The method accepts a 2D :class:`numpy.ndarray` containing the x (lon) and y (lat)
+origins of the spatial bin polygons. These should be the complete set of origins. The function will attempt to compute the
+grid spacing by comparing the x and y values between adjacent origins. If this does not seem like a reliable approach
+for your region, you can explicitly provide the grid spacing (dh) to this method.
+
+When a :class:`CartesianGrid2D` is created the following steps occur:
+
+    1. Compute the bounding box containing all polygons (2D array)
+    2. Create a map between the index of the 2D bounding box and the list of polygons of the region.
+    3. Store a boolean flag indicating whether a given cell in the 2D array is valid or not
+
+Once these mappings have been created, we can associate an arbitrary (lon, lat) point with a spatial cell using the
+mapping defined in (2). The :meth:`get_index_of` accepts a list
+of longitudes and latitudes and returns the index of the polygon they are associated with. For instance, this index can
+now be used to access a data value stored in another data structure.
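+
+A small sketch of this workflow, using a toy 2° x 2° region made of four 1-degree cells (the origins and query point
+are arbitrary): ::
+
+    import numpy
+    from csep.core.regions import CartesianGrid2D
+
+    origins = numpy.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
+    region = CartesianGrid2D.from_origins(origins, dh=1.0, name='toy-region')
+
+    # map a (lon, lat) point to the index of its spatial bin
+    idx = region.get_index_of([0.5], [1.5])
+    print(idx)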
+
+.. ***************
+Testing Regions
+########################
+
+CSEP has defined testing regions that can be used for earthquake forecasting experiments. The following functions in the
+:mod:`csep.core.regions` module return a :class:`CartesianGrid2D` consistent with
+these regions.
+
+.. autosummary::
+
+   california_relm_region
+   italy_csep_region
+   global_region
+
+.. ****************
+Region Utilities
+########################
+
+PyCSEP also provides some utilities that can facilitate working with regions. As we expand this module, we will include
+functions to accommodate different use-cases.
+
+.. autosummary::
+
+   magnitude_bins
+   create_space_magnitude_region
+   parse_csep_template
+   increase_grid_resolution
+   masked_region
+   generate_aftershock_region
+
+
+**************
+Quadtree grid
+**************
+
+We want to use gridded regions with fewer spatial cells and multi-resolution grids for creating earthquake forecast models,
+and we also want to test forecast models at different resolutions. Before we can do this, we need the capability to
+acquire such grids. There are several possible options for creating multi-resolution grids, such as Voronoi cells or
+coarse grids. The gridding approach needs to have certain properties before we choose it for CSEP experiments: it should
+be simple to implement, easy to understand, and come with intuitive indexing. Most importantly, it should come with a
+coordinated mechanism for changing between different resolutions; one cannot simply combine cells of one's own choosing
+into a larger (low-resolution) grid cell, and vice versa, because this would make the grid comparison process difficult.
+There must be a specific, well-defined strategy to change between different grid resolutions. We explored different
+gridding approaches and found the quadtree to be the best solution for this task, despite a few drawbacks, such as the
+quadtree not covering the global region beyond 85.05 degrees North and South.
+
+The quadtree is a hierarchical tiling strategy for storing and indexing geospatial data. To start, the global testing
+region is divided into 4 tiles, identified as '0', '1', '2', '3'. Each tile can then be divided into four child tiles,
+until the desired grid is acquired. Every tile is identified by a unique identifier called a quadkey. When a tile is
+divided further, the quadkey of each child is formed by appending its identifier ('0', '1', '2' or '3') to the quadkey
+of the parent tile. Once a grid is acquired, we call each tile a grid cell. The number of times a tile has been divided
+is referred to as the zoom-level (L), and the length of the quadkey equals the number of times its tile has been divided.
+If a grid has the same zoom-level for each tile, it is referred to as a single-resolution grid.
+
+
+A single-resolution grid at zoom-level L=5 provides 1024 spatial cells over the whole globe. Increasing L by one step
+increases the number of grid cells by a factor of four; similarly, at L=11 the grid contains approximately 4.2 million
+cells.
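+
+The cell counts follow directly from the four-way splitting; a quick check: ::
+
+    # number of cells in a single-resolution global quadtree grid at zoom-level L
+    for zoom in (5, 6, 11):
+        print(zoom, 4 ** zoom)
+    # 5 1024
+    # 6 4096
+    # 11 4194304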
+
+We can use the quadtree in combination with input data to create a multi-resolution grid in which the resolution is
+determined by that data. In general, the quadtree can be combined with any type of input data. For now, we provide
+support for using an earthquake catalog as the input data that determines the grid resolution. With time, we intend to
+incorporate support for other types of data sets, such as distance from the mainshock or rupture plane, etc.
+
+Currently, for generating multi-resolution grids, we can choose two criteria to decide the resolution, i.e. the maximum
+number of earthquakes allowed per cell (Nmax) and the maximum zoom-level (L) allowed for a cell.
+This means that only those cells (tiles) that contain more earthquakes than Nmax will be divided further into sub-cells,
+and that cells will not be divided further after reaching L, even if the number of earthquakes is more than Nmax. Thus,
+the quadtree can provide high-resolution (smaller) grid cells in seismically active regions and low-resolution (bigger)
+grid cells in seismically quiet regions.
+It offers earthquake forecast modelers the liberty of choosing a suitable spatial grid of their choice.
+
+
+This section contains information about using quadtree-based grids.
+
+.. autosummary::
+
+   QuadtreeGrid2D
+
+
+The :class:`QuadtreeGrid2D` acts as a data structure that can associate a spatial
+location, identified by a quadkey (or lon and lat), with its corresponding spatial bin. This class allows users to create
+a quadtree grid using three different methods, based on the user's choice.
+It also offers the conversion from a quadtree cell to its lon/lat bounds.
+
+The :class:`QuadtreeGrid2D` maintains a list of
+:class:`Polygon` objects that represent the individual spatial bins from the overall
+region. The origin of each polygon is considered to be the lower-left corner (the minimum latitude and minimum longitude).
+
+.. autosummary::
+
+   QuadtreeGrid2D.num_nodes
+   QuadtreeGrid2D.get_cell_area
+   QuadtreeGrid2D.get_index_of
+   QuadtreeGrid2D.get_location_of
+   QuadtreeGrid2D.get_bbox
+   QuadtreeGrid2D.midpoints
+   QuadtreeGrid2D.origins
+   QuadtreeGrid2D.save_quadtree
+   QuadtreeGrid2D.from_catalog
+   QuadtreeGrid2D.from_single_resolution
+   QuadtreeGrid2D.from_quadkeys
+
+
+Creating spatial regions
+########################
+
+Here, we describe how the class works, starting with the class constructors, and how users can create different types of
+regions.
+
+Multi-resolution grid based on earthquake catalog
+-------------------------------------------------
+Read a global earthquake catalog in :class:`CSEPCatalog` format and use it to generate a multi-resolution quadtree-based
+grid with the following constructor: ::
+
+    @classmethod
+    def from_catalog(cls, catalog, threshold, zoom=11, magnitudes=None, name=None):
+        """ Convenience function to create a multi-resolution grid using earthquake catalog """
+
+Single-resolution grid
+----------------------
+Generate a single-resolution grid at the same zoom-level everywhere. This grid does not require a catalog; it only needs
+the zoom-level to determine the resolution of the grid. ::
+
+    @classmethod
+    def from_single_resolution(cls, zoom, magnitudes=None, name=None):
+        """ Convenience function to create a single-resolution grid """
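+
+For instance, a minimal sketch of creating a single-resolution global grid at zoom-level 5 (i.e. 1024 cells): ::
+
+    from csep.core.regions import QuadtreeGrid2D
+
+    grid = QuadtreeGrid2D.from_single_resolution(5, name='global_L5')
+    print(grid.num_nodes)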
+
+Grid loading from already created quadkeys
+------------------------------------------
+An already saved quadtree grid can also be loaded into pyCSEP. Read the quadkeys and use the following function to
+instantiate the class. ::
+
+    @classmethod
+    def from_quadkeys(cls, quadk, magnitudes=None, name=None):
+        """ Convenience function to create a grid using already generated quadkeys """
+
+
+When a :class:`QuadtreeGrid2D` is created the following steps occur:
+
+    1. Compute the bounding box containing all polygons (2D array) corresponding to the quadkeys
+    2. Create a map between the index of the 2D bounding box and the list of polygons of the region.
+
+
+Once these mappings have been created, we can associate an arbitrary (lon, lat) point with a spatial cell using the
+mapping defined in (2). The :meth:`get_index_of` accepts a list
+of longitudes and latitudes and returns the index of the polygon they are associated with. For instance, this index can
+now be used to access a data value stored in another data structure.
+
+
+Testing Regions
+########################
+
+CSEP has defined testing regions that can be used for earthquake forecasting experiments. The above-mentioned functions
+are used to create quadtree grids for the global testing region. However, a quadtree-gridded region can be acquired for
+any geographical area and used for forecast generation and testing. For example, we have created a quadtree-gridded
+region at a fixed zoom-level of 12 for the California RELM testing region.
+
+.. autosummary::
+
+   california_quadtree_region
diff --git a/_sources/getting_started/core_concepts.rst.txt b/_sources/getting_started/core_concepts.rst.txt
new file mode 100644
index 00000000..5e59fc59
--- /dev/null
+++ b/_sources/getting_started/core_concepts.rst.txt
@@ -0,0 +1,96 @@
+===========================
+Core Concepts for Beginners
+===========================
+
+If you are reading this documentation, there is a good chance that you are developing/evaluating an earthquake forecast or
+implementing an experiment at a CSEP testing center. This section will help you understand how we conceptualize forecasts,
+evaluations, and earthquake catalogs. These components make up the majority of the PyCSEP package. We also include some
+prewritten visualizations along with some utilities that might be useful in your work.
+
+Catalogs
+========
+Earthquake catalogs are fundamental to both forecasts and evaluations and make up a core component of the PyCSEP package.
+At some point you will be working with catalogs if you are evaluating earthquake forecasts.
+
+One major difference between PyCSEP and a project like `ObsPy `_ is that typical 'CSEP' calculations
+operate on an entire catalog at once to perform operations like filtering and binning that are required to evaluate an earthquake
+forecast. We provide earthquake catalog classes that follow the interface defined by
+:class:`AbstractBaseCatalog `.
+
+The catalog data are stored internally as a `structured Numpy array `_,
+which effectively treats events contiguously in memory like a C-style struct. This allows us to accelerate calculations
+using the vectorized operations provided by Numpy. The necessary attributes for an event to be used
+in an evaluation are the spatial location (lat, lon), magnitude, and origin time. Additionally, depth and other identifying
+characteristics can be used. The `default storage format `_ for
+an earthquake catalog is an ASCII/utf-8 text file with events stored in CSV format.
+
+The :class:`AbstractBaseCatalog ` can be extended to accommodate different catalog formats
+or input and output routines.
+For example, :class:`UCERF3Catalog ` extends this class to deal
+with the big-endian storage routine from the `UCERF3-ETAS `_ forecasting model. More
+information will be included in the :ref:`catalogs-reference` section of the documentation.
+
+Forecasts
+=========
+
+PyCSEP provides objects for interacting with :ref:`earthquake forecasts `. PyCSEP supports two types
+of earthquake forecasts, and provides separate objects for interacting with both. The forecasts share similar
+characteristics, but, conceptually, they should be treated differently because they require different types of evaluations.
+
+Both time-independent and time-dependent forecasts are represented using the same PyCSEP forecast objects. Typically, for
+time-dependent forecasts, one would create separate forecast objects for each time period. As the name suggests,
+time-independent forecasts do not change with time.
+
+
+Grid-based forecast
+-------------------
+
+Grid-based earthquake forecasts are specified by the expected rate of earthquakes within discrete, independent
+space-time-magnitude bins. Within each bin, the expected rate represents the parameter of a Poisson distribution. For details
+about the forecast objects, visit the :ref:`forecast-reference` section of the documentation.
+
+The forecast object contains three main components: (1) the expected earthquake rates, (2) the
+:class:`spatial region ` associated with the rates, and (3) the magnitude range
+associated with the expected rates. The spatial bins are usually discretized according to the geographical coordinates
+latitude and longitude, with most previous CSEP spatial regions defining a spatial size of 0.1° x 0.1°. Magnitude bins are
+discretized similarly, with 0.1 magnitude units being a standard choice. PyCSEP does not enforce constraints on the
+bin sizes for space and magnitude, but the discretization must be regular.
+
+
+Catalog-based forecast
+----------------------
+
+Catalog-based forecasts are specified by families of synthetic earthquake catalogs that are generated through simulation
+by probabilistic models. Each catalog represents a stochastic representation of seismicity consistent with the forecasting
+model. Probabilistic statements are made by computing statistics (usually by counting) within the family of synthetic catalogs,
+which can be as simple as counting the number of events in each catalog. These statistics represent the full distribution of outcomes as
+specified by the forecasting models, thereby allowing for more direct assessments of the models that produce them.
+
+Within PyCSEP, catalog forecasts are effectively lists of earthquake catalogs, no different than those obtained from
+authoritative sources. Thus, any operation that can be performed on an observed earthquake catalog can be performed on a
+synthetic catalog from a catalog-based forecast.
+
+It can be useful to count the numbers of forecasted earthquakes within discrete space-time bins (like those used for
+grid-based forecasts). Therefore, it's common to have a :class:`spatial region ` and a
+set of magnitude bins associated with a forecast. Again, the only rule that PyCSEP enforces is that the space-magnitude
+regions are regularly discretized.
+
+Evaluations
+===========
+
+PyCSEP provides implementations of statistical tests used to evaluate both grid-based and catalog-based earthquake forecasts.
+The former use parametric evaluations based on Poisson likelihood functions, while the latter use so-called 'likelihood-free'
+evaluations that are computed from empirical distributions provided by the forecasts. Details on the specific implementation
+of the evaluations will be provided in the :ref:`evaluation-reference` section.
+
+Every evaluation can be different, but in general, the evaluations need the following information:
+
+1. Earthquake forecast(s)
+
+   * Spatial region
+   * Magnitude range
+
+2. Authoritative earthquake catalog
+
+PyCSEP does not produce earthquake forecasts, but provides the ability to represent them using internal data models to
+facilitate their evaluation. General advice on how to administer the statistical tests will be provided in the
+:ref:`evaluation-reference` section.
\ No newline at end of file
diff --git a/_sources/getting_started/installing.rst.txt b/_sources/getting_started/installing.rst.txt
new file mode 100644
index 00000000..d9c0fd07
--- /dev/null
+++ b/_sources/getting_started/installing.rst.txt
@@ -0,0 +1,103 @@
+Installing pyCSEP
+=================
+
+We are working on a ``conda-forge`` recipe and PyPI distribution.
+If you plan on contributing to this package, visit the
+`contribution guidelines `_ for installation instructions.
+
+.. note:: This package requires Python 3.9 or later.
+
+The easiest way to install PyCSEP is using ``conda``. It can also be installed using ``pip`` or built from source.
+
+Using Conda
+-----------
+For most users, you can use ::
+
+    conda install --channel conda-forge pycsep
+
+Using Pip
+---------
+
+Before this installation will work, you must **first** install the following system dependencies. The remaining dependencies
+should be installed by the installation script. To help manage dependency issues, we recommend using virtual environments
+like `virtualenv`.
+
+| Python 3.9 or later (https://python.org)
+|
+| NumPy 1.21.3 or later (https://numpy.org)
+|     Python package for scientific computing and numerical calculations.
+|
+| SciPy 1.7.1 or later (https://scipy.org)
+|     Python package that extends NumPy tools.
+|
+| Pandas 1.3.4 or later (https://pandas.pydata.org)
+|     Python package for data analysis and manipulation.
+|
+| Cartopy 0.22.0 or later (https://scitools.org.uk/cartopy/)
+|     Python package for geospatial data processing.
+
+Example for Ubuntu and MacOS: ::
+
+    git clone https://github.com/SCECcode/pycsep
+    cd pycsep
+    pip install --upgrade pip
+    pip install -e .
+
+Installing from Source
+----------------------
+
+Use this approach if you want the most up-to-date code. This creates an editable installation that can be synced with
+the latest GitHub commit.
+
+We recommend using virtual environments when installing python packages from source to avoid any dependency conflicts. We prefer
+``conda`` as the package manager over ``pip``, because ``conda`` does a good job of handling binary distributions of packages
+across multiple platforms. Also, we recommend the ``miniconda`` or ``miniforge`` (which uses mamba for faster dependency
+handling) installers, because they are lightweight and only include necessary packages like ``pip`` and ``zlib``.
+
+Using Conda
+***********
+
+If you don't have ``conda`` on your machine, download and install `Miniconda `_ or `Miniforge `_ ::
+
+    git clone https://github.com/SCECcode/pycsep
+    cd pycsep
+    conda env create -f requirements.yml
+    conda activate csep-dev
+    # Installs in editor mode with all dependencies
+    pip install -e .
+
+Note: If you want to go back to your default environment use the command ``conda deactivate``.
+
+Using Pip / Virtualenv
+**********************
+
+We highly recommend using Conda, because this tool helps to manage binary dependencies of Python packages. If you
+must use `Virtualenv `_
+follow these instructions: ::
+
+    git clone https://github.com/SCECcode/pycsep
+    cd pycsep
+    python -m virtualenv venv
+    source venv/bin/activate
+    # Installs in editor mode; dependencies are installed by pip
+    pip install -e .[all]
+
+Note: If you want to go back to your default environment use the command ``deactivate``.
+
+Developers Installation
+-----------------------
+
+This shows you how to install a copy of the repository that you can use to create Pull Requests and sync with the upstream
+repository. First, fork the repo on GitHub. It will now live at ``https://github.com/<username>/pycsep``.
+We recommend using ``conda`` to install the development environment. ::
+
+    git clone https://github.com/<username>/pycsep.git
+    cd pycsep
+    conda env create -f requirements.yml
+    conda activate csep-dev
+    pip install -e .[all]
+    # Allow sync with default repository
+    git remote add upstream https://github.com/SCECcode/pycsep.git
+
+This ensures a clean installation of ``pyCSEP`` and the required developer dependencies (e.g., ``pytest``, ``sphinx``).
+Now you can pull from upstream using ``git pull upstream master`` to keep your copy of the repository in sync with the
+latest commits.
\ No newline at end of file
diff --git a/_sources/getting_started/theory.rst.txt b/_sources/getting_started/theory.rst.txt
new file mode 100644
index 00000000..4c9c8ddd
--- /dev/null
+++ b/_sources/getting_started/theory.rst.txt
@@ -0,0 +1,1095 @@
+Theory of CSEP Tests
+====================
+
+This page describes the theory of each of the forecast tests
+included in pyCSEP along with working code examples. You will find
+information on the goals of each test, the theory behind the tests, how
+the tests are applied in practice, and how forecasts are ‘scored’ given
+the test results. We also include the code required to run each test
+and a description of how to interpret the test results.
+
+.. code:: ipython3
+
+    import csep
+    from csep.core import (
+        regions,
+        catalog_evaluations,
+        poisson_evaluations as poisson
+    )
+    from csep.utils import (
+        datasets,
+        time_utils,
+        comcat,
+        plots,
+        readers
+    )
+
+    # Filters matplotlib warnings
+    import warnings
+    warnings.filterwarnings('ignore')
+
+Grid-based Forecast Tests
+-------------------------
+
+These tests are designed for grid-based forecasts (e.g., Schorlemmer et
+al., 2007), where expected rates are provided in discrete Poisson
+space-magnitude cells covering the region of interest. The
+space-magnitude region :math:`\boldsymbol{R}` is then the product of the
+set of magnitude bins :math:`\boldsymbol{M}` and the set of spatial
+cells :math:`\boldsymbol{S}`,
+
+.. math:: \boldsymbol{R} = \boldsymbol{M} \times \boldsymbol{S}.
+
+A forecast :math:`\boldsymbol{\Lambda}` can be fully specified as the
+expected number of events (or rate) in each space-magnitude bin
+(:math:`m_i, s_j`) covering the region :math:`\boldsymbol{R}` and
+therefore can be written as
+
+.. math:: \boldsymbol{\Lambda} = \{ \lambda_{m_i, s_j} | m_i \in \boldsymbol{M}, s_j \in \boldsymbol{S} \},
+
+where :math:`\lambda_{m_i, s_j}` is the expected rate of events in
+magnitude bin :math:`m_i` and spatial bin :math:`s_j`.
+The observed
+catalogue of events :math:`\boldsymbol{\Omega}` that we use to evaluate the
+forecast is similarly discretised into the same space-magnitude bins,
+and can be described as
+
+.. math:: \boldsymbol{\Omega} = \{ \omega_{m_i, s_j} | m_i \in \boldsymbol{M}, s_j \in \boldsymbol{S} \},
+
+where :math:`\omega_{m_i, s_j}` is the observed number of
+events in spatial cell :math:`s_j` and magnitude bin :math:`m_i`. The
+magnitude bins are specified in the forecast: typically these are in 0.1
+increments, and this is the case in the examples we use here. These
+examples use the Helmstetter et al (2007) smoothed seismicity forecast
+(including aftershocks), testing over a 5-year period between 2010 and
+2015.
+
+.. code:: ipython3
+
+    # Set up experiment parameters
+    start_date = time_utils.strptime_to_utc_datetime('2010-01-01 00:00:00.0')
+    end_date = time_utils.strptime_to_utc_datetime('2015-01-01 00:00:00.0')
+
+    # Loads from the PyCSEP package
+    helmstetter = csep.load_gridded_forecast(
+        datasets.helmstetter_aftershock_fname,
+        start_date=start_date,
+        end_date=end_date,
+        name='helmstetter_aftershock'
+    )
+
+    # Set up evaluation catalog
+    catalog = csep.query_comcat(helmstetter.start_time, helmstetter.end_time,
+                                min_magnitude=helmstetter.min_magnitude)
+
+    # Filter evaluation catalog
+    catalog = catalog.filter_spatial(helmstetter.region)
+
+    # Add seed for reproducibility in simulations
+    seed = 123456
+
+    # Number of simulations for Poisson consistency tests
+    nsim = 100000
+
+
+.. parsed-literal::
+
+    Fetched ComCat catalog in 5.9399449825286865 seconds.
+
+    Downloaded catalog from ComCat with following parameters
+    Start Date: 2010-01-10 00:27:39.320000+00:00
+    End Date: 2014-08-24 10:20:44.070000+00:00
+    Min Latitude: 31.9788333 and Max Latitude: 41.1431667
+    Min Longitude: -125.3308333 and Max Longitude: -115.0481667
+    Min Magnitude: 4.96
+    Found 24 events in the ComCat catalog.
+
+
+Consistency tests
+~~~~~~~~~~~~~~~~~
+
+The consistency tests evaluate the consistency of a forecast against
+observed earthquakes. These tests were developed across a range of
+experiments and publications (Schorlemmer et al, 2007; Zechar et al,
+2010; Werner et al, 2011a). The consistency tests are based on the
+likelihood of observing the catalogue (actual recorded events) given the
+forecast. Since the space-magnitude bins are assumed to be independent,
+the joint likelihood of observing the events in each individual bin
+given the specified forecast can be written as
+
+.. math:: Pr(\omega_1 | \lambda_1) Pr(\omega_2 | \lambda_2)...Pr(\omega_n | \lambda_n) = \prod_{m_i , s_j \in \boldsymbol{R}} f_{m_i, s_j}(\omega(m_i, s_j)),
+
+where :math:`f_{m_i, s_j}` specifies the probability distribution in
+each space-magnitude bin. We prefer to use the joint log-likelihood in
+order to sum log-likelihoods rather than multiply the likelihoods. The
+joint log-likelihood can be written as:
+
+.. math:: L(\boldsymbol{\Omega} | \boldsymbol{\Lambda}) = \sum_{m_i , s_j \in \boldsymbol{R}} \log(f_{m_i, s_j}(\omega(m_i, s_j))).
+
+The log-likelihood of the observations, :math:`\boldsymbol{\Omega}`, given
+the forecast :math:`\boldsymbol{\Lambda}` is the sum over all
+space-magnitude bins of the log probabilities in individual cells of the
+forecast. Grid-based forecasts are specified by the expected number of
+events in a discrete space-magnitude bin. From the maximum entropy
+principle, we assign a Poisson distribution in each bin.
In this case,
+the probability of an event occurring is independent of the time since
+the last event, and events occur at a rate :math:`\lambda`. The
+Poissonian joint log-likelihood can be written as
+
+.. math:: L(\boldsymbol{\Omega} | \boldsymbol{\Lambda}) = \sum_{m_i , s_j \in \boldsymbol{R}} -\lambda(m_i, s_j) + \omega(m_i, s_j)\log(\lambda(m_i, s_j)) - \log(\omega(m_i, s_j)!),
+
+where :math:`\lambda(m_i, s_j)` and :math:`\omega(m_i, s_j)` are the
+expected counts from the forecast and the observed counts in cell
+:math:`m_i, s_j`, respectively. We can calculate the likelihood directly
+given the forecast and the discretised observations.
+
+Forecast uncertainty
+
+A simulation-based approach is used to account for uncertainty in the
+forecast. We simulate realizations of catalogs that are consistent with
+the forecast to obtain distributions of scores. In the pyCSEP package,
+as in the original CSEP tests, simulation is carried out using the
+cumulative probability density of the forecast, obtained by ordering the
+rates in each bin. We shall call :math:`F_{m_i s_j}` the cumulative
+probability density in cell :math:`(m_i, s_j)`. The simulation approach
+then works as follows:
+
+- For each simulated event, draw a random number :math:`z` from a
+  uniform distribution between 0 and 1
+- Assign this event to a space-magnitude bin through the inverse
+  cumulative density distribution at this point,
+  :math:`F^{-1}_{m_i, s_j}(z)`
+- Iterate over all simulated events to generate a catalog containing
+  :math:`N_{sim}` events consistent with the forecast
+
+For each of these tests, we can plot the distribution of likelihoods
+computed from these simulated catalogs relative to the observations
+using the ``plots.plot_poisson_consistency_test`` function. We also
+calculate a quantile score to diagnose a particular forecast with
+respect to the observations. The number of simulations can be supplied
+to the Poisson consistency test functions using the
+``num_simulations`` argument: for best results we suggest 100,000
+simulations to ensure convergence.
+
+Scoring the tests
+
+Through simulation (as described above), we obtain a set of simulated
+catalogs :math:`\{\hat{\boldsymbol{\Omega}}\}`. Each catalogue can be
+written as
+
+.. math:: \hat{\boldsymbol{\Omega}}_x =\{ \hat{\omega}_x(m_i, s_j)|(m_i, s_j) \in \boldsymbol{R}\},
+
+where :math:`\hat{\omega}_x(m_i, s_j)` is the number of
+simulated earthquakes in cell :math:`(m_i, s_j)` of (simulated) catalog
+:math:`x` that is consistent with the forecast :math:`\Lambda`. We then
+compute the joint log-likelihood for each simulated catalogue,
+:math:`\hat{L}_x = L(\hat{\Omega}_x|\Lambda)`. The joint log-likelihood
+for each simulated catalogue given the forecast gives us a set of
+log-likelihoods :math:`\{\hat{\boldsymbol{L}}\}` that represents the
+range of log-likelihoods consistent with the forecast. We then compare
+our simulated log-likelihoods with the observed log-likelihood
+:math:`L_{obs} = L(\boldsymbol{\Omega}|\boldsymbol{\Lambda})` using a
+quantile score.
+
+The quantile score is defined by the fraction of simulated joint
+log-likelihoods less than or equal to the observed likelihood:
+
+.. math:: \gamma = \frac{ |\{ \hat{L}_x | \hat{L}_x \le L_{obs}\} |}{|\{ \hat{\boldsymbol{L}} \}|}
+
+Whether a forecast can be said to pass an evaluation depends on the
+significance level chosen for the testing process. 
The quantile score
+explicitly tells us something about the significance of the result: the
+observation is consistent with the forecast with :math:`100(1-\gamma)\%`
+confidence (Zechar, 2011). Low :math:`\gamma` values indicate that the
+observed likelihood score is lower than that of most of the simulated
+catalogs. The consistency tests, excluding the N-test, are considered to
+be one-sided tests: values which are too small are ruled inconsistent
+with the forecast, but very large values may not necessarily be
+inconsistent with the forecast, and additional testing should be used to
+further clarify this (Schorlemmer et al., 2007).
+
+Different CSEP experiments have used different sensitivity values.
+Schorlemmer et al. (2010b) consider :math:`\gamma \lt 0.05`, while the
+implementation in the Italian CSEP testing experiment uses
+:math:`\gamma \lt 0.01` (Taroni et al., 2018). However, the consistency
+tests are most useful as diagnostic tools, where the quantile score
+assesses the level of consistency between the observations and the
+forecast. Temporal variations in seismicity make it difficult to
+formally reject a model from a consistency test over a single
+evaluation period.
+
+Likelihood-test (L-test)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Aim: Evaluate the likelihood of observed events given the provided
+forecast - this includes the rate, spatial distribution and magnitude
+components of the forecast.
+
+Method: The L-test is one of the original forecast tests described in
+Schorlemmer et al. (2007). The likelihood of the observation given the
+model is described by a Poisson likelihood function in each cell, and
+the total joint likelihood is described by the product over all bins,
+or equivalently the sum of the log-likelihoods (see above, or Zechar,
+2011 for more details).
+
+Note: The likelihood scores are dominated by the rate component of the
+forecast. This causes issues in scoring forecasts where the expected
+number of events is different from the observed number of events. We
+suggest using the N-test (below) and the CL-test (below) independently
+to score the rate component and the space-magnitude component of the
+forecast. This behavior can be observed by comparing the CL-test and
+N-test results with the L-test results in this notebook. Since the
+forecast overpredicts the rate of events during this testing period,
+the L-test provides a passing score even though the space-magnitude and
+rate components perform poorly during this evaluation period.
+
+pyCSEP implementation
+
+pyCSEP uses the forecast and catalog and returns the test distribution,
+observed statistic and quantile score, which can be accessed from the
+``likelihood_test_result`` object. We can pass this directly to the
+plotting function, specifying that the test should be one-sided.
+
+.. code:: ipython3
+
+    likelihood_test_result = poisson.likelihood_test(
+        helmstetter,
+        catalog,
+        seed=seed,
+        num_simulations=nsim
+    )
+    ax = plots.plot_poisson_consistency_test(
+        likelihood_test_result,
+        one_sided_lower=True,
+        plot_args={'title': r'$\mathcal{L}-\mathrm{test}$', 'xlabel': 'Log-likelihood'}
+    )
+
+
+
+.. image:: static/output_6_0.png
+
+
+
+pyCSEP plots the resulting :math:`95\%` range of likelihoods returned by
+the simulation as a black bar by default. The observed likelihood
+score is shown by a green square where the forecast passes the test and
+a red circle where the observed likelihood is outside the likelihood
+distribution. 
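+
+The quantile score reported by a consistency test can also be
+recomputed directly from the returned result object. The following
+sketch is an illustration only: it assumes the
+``likelihood_test_result`` computed above, and that the result object
+exposes the test distribution, observed statistic and quantile under
+the attribute names ``test_distribution``, ``observed_statistic`` and
+``quantile``. It reproduces :math:`\gamma` as the fraction of simulated
+joint log-likelihoods at or below the observed value:
+
+.. code:: ipython3
+
+    import numpy as np
+
+    # Simulated joint log-likelihoods and the observed score
+    simulated_ll = np.array(likelihood_test_result.test_distribution)
+    observed_ll = likelihood_test_result.observed_statistic
+
+    # Quantile score: fraction of simulated catalogs whose
+    # log-likelihood is less than or equal to the observed value
+    gamma = (simulated_ll <= observed_ll).sum() / simulated_ll.size
+    print(gamma, likelihood_test_result.quantile)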
+
+CL-test
+^^^^^^^
+
+Aim: The original likelihood test described above gives a result that
+combines the spatial, magnitude and number components of a forecast.
+The conditional likelihood or CL-test was developed to test the spatial
+and magnitude performance of a forecast without the influence of the
+number of events (Werner et al., 2011a, 2011b). By conditioning the
+test distribution on the observed number of events we eliminate the
+dependency on the forecast rate described above.
+
+Method: The CL-test is computed in the same way as the L-test, but with
+the number of events normalised to the observed catalog
+:math:`N_{obs}` during the simulation stage. The quantile score is then
+calculated similarly, such that
+
+.. math:: \gamma_{CL} = \frac{ |\{ \hat{CL}_x | \hat{CL}_x \le CL_{obs}\} |}{|\{ \hat{\boldsymbol{CL}} \}|}.
+
+Implementation in pyCSEP
+
+.. code:: ipython3
+
+    cond_likelihood_test_result = poisson.conditional_likelihood_test(
+        helmstetter,
+        catalog,
+        seed=seed,
+        num_simulations=nsim
+    )
+    ax = plots.plot_poisson_consistency_test(
+        cond_likelihood_test_result,
+        one_sided_lower=True,
+        plot_args = {'title': r'$CL-\mathrm{test}$', 'xlabel': 'conditional log-likelihood'}
+    )
+
+
+
+.. image:: static/output_9_0.png
+
+
+Again, the :math:`95\%` confidence range of likelihoods is shown by the
+black bar, and the symbol reflects the observed conditional-likelihood
+score. In this case, the observed conditional likelihood is shown with
+the red circle, which falls outside the range of likelihoods simulated
+from the forecast. To understand why the L- and CL-tests give different
+results, consider the results of the N-test and S-test in the following
+sections.
+
+N-test
+^^^^^^
+
+Aim: The number or N-test is the most conceptually simple test of a
+forecast: to test whether the number of observed events is consistent
+with that of the forecast.
+
+Method: The original N-test was introduced by Schorlemmer et al. (2007)
+and modified by Zechar et al. (2010a). The observed number of events is
+given by,
+
+.. math:: N_{obs} = \sum_{m_i, s_j \in R} \omega(m_i, s_j).
+
+Using the simulations described above, the expected number of events is
+calculated by summing the simulated number of events over all grid
+cells
+
+.. math:: \hat{N_x} = \sum_{m_i, s_j \in R} \hat{\omega}_x(m_i, s_j),
+
+where :math:`\hat{\omega}_x(m_i, s_j)` is the simulated number of
+events in catalog :math:`x` in spatial cell :math:`s_j` and magnitude
+cell :math:`m_i`, generating a set of simulated event counts
+:math:`\{ \hat{N} \}`. We can then calculate the probability of (i)
+observing at least :math:`N_{obs}` events and (ii) observing at most
+:math:`N_{obs}` events. These probabilities can be written as:
+
+.. math:: \delta_1 = \frac{ |\{ \hat{N_x} | \hat{N_x} \ge N_{obs}\} |}{|\{ \hat{N} \}|}
+
+and
+
+.. math:: \delta_2 = \frac{ |\{ \hat{N_x} | \hat{N_x} \le N_{obs}\} |}{|\{ \hat{N} \}|}
+
+If a forecast is Poisson, the number of events in the forecast follows
+a Poisson distribution with expectation
+:math:`N_{fore} = \sum_{m_i, s_j \in R} \lambda(m_i, s_j)`. The
+cumulative distribution is then a Poisson cumulative distribution:
+
+.. math:: F(x|N_{fore}) = \exp(-N_{fore}) \sum^{x}_{i=0} \frac{(N_{fore})^i}{i!}
+
+which can be used directly without the need for simulations. The N-test
+quantile scores are then
+
+.. math:: \delta_1 = 1 - F((N_{obs}-1)|N_{fore}),
+
+and
+
+.. math:: \delta_2 = F(N_{obs}|N_{fore}). 
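+
+Because both quantile scores reduce to Poisson cumulative
+probabilities, they can be evaluated analytically. A minimal sketch
+using ``scipy`` (with hypothetical values for :math:`N_{obs}` and
+:math:`N_{fore}`; the distribution is imported under an alias to avoid
+clashing with the ``poisson_evaluations`` module imported above):
+
+.. code:: ipython3
+
+    from scipy.stats import poisson as poisson_dist
+
+    n_obs = 24     # hypothetical observed event count
+    n_fore = 35.0  # hypothetical forecast expectation
+
+    # delta_1: probability of observing at least n_obs events
+    delta_1 = 1.0 - poisson_dist.cdf(n_obs - 1, n_fore)
+    # delta_2: probability of observing at most n_obs events
+    delta_2 = poisson_dist.cdf(n_obs, n_fore)
+    print(delta_1, delta_2)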
+
+The original N-test considered only :math:`\delta_2` and its complement
+:math:`1-\delta_2`, which effectively tested the probability of at most
+:math:`N_{obs}` events and of more than :math:`N_{obs}` events. Very
+small or very large values (:math:`\lt 0.025` or :math:`\gt 0.975`,
+respectively) were considered to be inconsistent with the forecast in
+Schorlemmer et al. (2010b). However, the approach above tests something
+subtly different, that is, at least :math:`N_{obs}` events and at most
+:math:`N_{obs}` events. Zechar et al. (2010a) recommend testing both
+:math:`\delta_1` and :math:`\delta_2` with an effective significance of
+half the required significance level, so for a required significance
+level of 0.05, a forecast is consistent if both :math:`\delta_1` and
+:math:`\delta_2` are greater than 0.025. A very small :math:`\delta_1`
+suggests the forecast rate is too low, while a very small
+:math:`\delta_2` suggests a rate which is too high to be consistent
+with observations.
+
+Implementation in pyCSEP
+
+pyCSEP uses the Zechar et al. (2010a) version of the N-test and the
+cumulative Poisson approach to estimate the range of expected events
+from the forecast, so does not implement a simulation in this case. The
+upper and lower bounds for the test are determined from the cumulative
+Poisson distribution. ``number_test_result.quantile`` will return both
+:math:`\delta_1` and :math:`\delta_2` values.
+
+.. code:: ipython3
+
+    number_test_result = poisson.number_test(helmstetter, catalog)
+    ax = plots.plot_poisson_consistency_test(
+        number_test_result,
+        plot_args={'xlabel':'Number of events'}
+    )
+
+
+
+.. image:: static/output_13_0.png
+
+
+In this case, the black bar shows the :math:`95\%` interval for the
+number of events in the forecast. The observed number of events
+is shown by the green square, which just passes the N-test in this
+case: the forecast generally expects more events than are observed in
+practice, but the observed number falls just within the lower limit of
+what is expected, so the forecast (just!) passes the N-test.
+
+M-test
+^^^^^^
+
+Aim: Establish consistency (or lack thereof) of observed event
+magnitudes with the forecast magnitudes.
+
+Method: The M-test is first described in Zechar et al. (2010a) and aims
+to isolate the magnitude component of a forecast. To do this, we sum
+over the spatial bins and normalise so that the total number of events
+matches the observations.
+
+.. math:: \boldsymbol{\Omega}^m = \big\{\omega^{m}(m_i)| m_i \in \boldsymbol{M}\big\},
+
+where
+
+.. math:: \omega^m(m_i) = \sum_{s_j \in \boldsymbol{S}} \omega(m_i, s_j),
+
+and
+
+.. math:: \boldsymbol{\Lambda}^m = \big\{ \lambda^m(m_i)| m_i \in \boldsymbol{M} \big\},
+
+where
+
+.. math:: \lambda^m(m_i) = \frac{N_{obs}}{N_{fore}}\sum_{s_j \in \boldsymbol{S}} \lambda\big(m_i, s_j\big).
+
+Then we compute the joint log-likelihood as we did for the L-test:
+
+.. math:: M = L(\boldsymbol{\Omega}^m | \boldsymbol{\Lambda}^m)
+
+We then wish to compare this with the distribution of simulated
+log-likelihoods, this time keeping the number of events fixed to
+:math:`N_{obs}`. Then for each simulated catalogue,
+:math:`\hat{M}_x = L(\hat{\boldsymbol{\Omega}}^m_x | \boldsymbol{\Lambda}^m)`.
+
+Quantile score: The final test statistic is again the fraction of
+simulated log-likelihoods less than or equal to the observed
+log-likelihood:
+
+.. math:: \kappa = \frac{ |\{ \hat{M}_x | \hat{M}_x \le M\} |}{|\{ \hat{M} \}|}
+
+and the observed magnitudes are inconsistent with the forecast if
+:math:`\kappa` is less than the significance level.
+
+pyCSEP implementation
+
+.. code:: ipython3
+
+    mag_test_result = poisson.magnitude_test(
+        helmstetter,
+        catalog,
+        seed=seed,
+        num_simulations=nsim
+    )
+    ax = plots.plot_poisson_consistency_test(
+        mag_test_result,
+        one_sided_lower=True,
+        plot_args={'xlabel':'Normalized likelihood'}
+    )
+
+
+
+.. image:: static/output_16_0.png
+
+
+In this example, the forecast passes the M-test, demonstrating that the
+magnitude distribution in the forecast is consistent with the observed
+events. This is shown by the green square marking the joint
+log-likelihood for the observed events.
+
+S-test
+^^^^^^
+
+Aim: The spatial or S-test aims to establish consistency (or lack
+thereof) of observed event locations with a forecast. It is originally
+defined in Zechar et al. (2010a).
+
+Method: Similar to the M-test, but in this case we sum over all
+magnitude bins.
+
+.. math:: \boldsymbol{\Omega}^s = \{\omega^s(s_j)| s_j \in \boldsymbol{S}\},
+
+where
+
+.. math:: \omega^s(s_j) = \sum_{m_i \in \boldsymbol{M}} \omega(m_i, s_j),
+
+and
+
+.. math:: \boldsymbol{\Lambda}^s = \{ \lambda^s(s_j)| s_j \in \boldsymbol{S} \},
+
+where
+
+.. math:: \lambda^s(s_j) = \frac{N_{obs}}{N_{fore}}\sum_{m_i \in M} \lambda(m_i, s_j).
+
+Then we compute the joint log-likelihood as we did for the L-test or
+the M-test:
+
+.. math:: S = L(\boldsymbol{\Omega}^s | \boldsymbol{\Lambda}^s)
+
+We then wish to compare this with the distribution of simulated
+log-likelihoods, this time keeping the number of events fixed to
+:math:`N_{obs}`. Then for each simulated catalogue,
+:math:`\hat{S}_x = L(\hat{\boldsymbol{\Omega}}^s_x | \boldsymbol{\Lambda}^s)`.
+
+The final test statistic is again the fraction of simulated
+log-likelihoods less than or equal to the observed log-likelihood:
+
+.. math:: \zeta = \frac{ |\{ \hat{S}_x | \hat{S}_x \le S\} |}{|\{ \hat{S} \}|}
+
+and again whether a forecast passes or fails the test depends on our
+chosen significance level.
+
+pyCSEP implementation
+
+The S-test is again a one-sided test, so we specify this when plotting
+the result.
+
+.. code:: ipython3
+
+    spatial_test_result = poisson.spatial_test(
+        helmstetter,
+        catalog,
+        seed=seed,
+        num_simulations=nsim
+    )
+    ax = plots.plot_poisson_consistency_test(
+        spatial_test_result,
+        one_sided_lower=True,
+        plot_args = {'xlabel':'normalized spatial likelihood'}
+    )
+
+
+.. image:: static/output_19_0.png
+
+
+The Helmstetter model fails the S-test, as the observed spatial
+likelihood falls in the tail of the simulated likelihood distribution.
+Again, this is shown by a coloured symbol which highlights whether the
+forecast model passes or fails the test.
+
+Forecast comparison tests
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The consistency tests above check whether a forecast is consistent with
+observations, but do not provide a straightforward way to compare two
+different forecasts. Most suggested approaches focus on the information
+gain of one forecast relative to another (Harte and Vere-Jones, 2005;
+Imoto and Hurukawa, 2006; Imoto and Rhoades, 2010; Rhoades et al.,
+2011). The t-test and W-test implementations for earthquake forecast
+comparison are first described in Rhoades et al. (2011). 
+
+The information gain per earthquake (IGPE) of model A compared to model
+B is defined by :math:`I_{N}(A, B) = R/N`, where :math:`R` is the
+rate-corrected log-likelihood ratio of models A and B, given by
+
+.. math:: R = \sum_{i=1}^{N}\big(\log\lambda_A(k_i) - \log \lambda_B(k_i)\big) - \big(\hat{N}_A - \hat{N}_B\big)
+
+If we set :math:`X_i=\log\lambda_A(k_i)` and
+:math:`Y_i=\log\lambda_B(k_i)` then we can write the information gain
+per earthquake (IGPE) as
+
+.. math:: I_N(A, B) = \frac{1}{N}\sum^N_{i=1}\big(X_i - Y_i\big) - \frac{\hat{N}_A - \hat{N}_B}{N}
+
+If :math:`I_N(A, B)` differs significantly from 0, the model with the
+lower likelihood can be rejected in favour of the other.
+
+t-test
+
+If the :math:`X_i - Y_i` are independent and come from the same normal
+population with mean :math:`\mu`, then we can use the classic paired
+t-test to evaluate the null hypothesis that
+:math:`\mu = (\hat{N}_A - \hat{N}_B)/N` against the alternative
+hypothesis :math:`\mu \ne (\hat{N}_A - \hat{N}_B)/N`. To implement
+this, we let :math:`s^2` denote the sample variance of
+:math:`(X_i - Y_i)`, such that
+
+.. math:: s^2 = \frac{1}{N-1}\sum^N_{i=1}\big(X_i - Y_i\big)^2 - \frac{1}{N^2 - N}\bigg(\sum^N_{i=1}\big(X_i - Y_i\big)\bigg)^2
+
+Under the null hypothesis,
+:math:`T = I_N(A, B)\big/\big(s/\sqrt{N}\big)` has a
+t-distribution with :math:`N-1` degrees of freedom, and the null
+hypothesis can be rejected if :math:`|T|` exceeds a critical value of
+the :math:`t_{N-1}` distribution. Confidence intervals for
+:math:`\mu - (\hat{N}_A - \hat{N}_B)/N` can then be constructed with
+the form :math:`I_N(A,B) \pm ts/\sqrt{N}`, where :math:`t` is the
+appropriate quantile of the :math:`t_{N-1}` distribution.
+
+W-test
+
+An alternative to the t-test is the Wilcoxon signed-rank test or
+W-test. This is a non-parametric alternative to the t-test which can be
+used if we do not feel the assumption of normally distributed
+differences :math:`X_i - Y_i` is valid. This assumption might be
+particularly poor when we have small sample sizes. The W-test instead
+depends on the (weaker) assumption that :math:`X_i - Y_i` is symmetric,
+and tests whether the median of :math:`X_i - Y_i` is equal to
+:math:`(\hat{N}_A - \hat{N}_B)/N`. The W-test is less powerful than the
+t-test for normally distributed differences and cannot reject the null
+hypothesis (with :math:`95\%` confidence) for very small sample sizes
+(:math:`N \leq 5`).
+
+The t-test becomes more accurate as :math:`N \rightarrow \infty` due to
+the central limit theorem, and therefore the t-test is considered
+dependable for large :math:`N`. Where :math:`N` is small, a model might
+only be considered more informative if both the t- and W-test results
+agree.
+
+Implementation in pyCSEP
+
+The t-test and W-test are implemented in pyCSEP as shown below.
+
+.. code:: ipython3
+
+    helmstetter_ms = csep.load_gridded_forecast(
+        datasets.helmstetter_mainshock_fname,
+        name = "Helmstetter Mainshock"
+    )
+
+    t_test = poisson.paired_t_test(helmstetter, helmstetter_ms, catalog)
+    w_test = poisson.w_test(helmstetter, helmstetter_ms, catalog)
+    comp_args = {'title': 'Paired T-test Result',
+                 'ylabel': 'Information gain',
+                 'xlabel': '',
+                 'xticklabels_rotation': 0,
+                 'figsize': (6,4)}
+
+    ax = plots.plot_comparison_test([t_test], [w_test], plot_args=comp_args)
+
+
+
+.. image:: static/output_22_0.png
+
+
+The first argument to the ``paired_t_test`` function is taken as model
+A and the second as our baseline model, or model B. 
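+
+The interval construction itself is straightforward to check by hand.
+The sketch below uses synthetic numbers only (it is not the output of
+``paired_t_test``) and computes the IGPE and its :math:`95\%`
+t-interval from a set of per-event log-rate differences
+:math:`X_i - Y_i`, following the formulas above:
+
+.. code:: ipython3
+
+    import numpy as np
+    from scipy.stats import t
+
+    # Synthetic per-event log-rate differences X_i - Y_i
+    rng = np.random.default_rng(seed)
+    diffs = rng.normal(loc=0.1, scale=0.5, size=20)
+    n = diffs.size
+
+    # Hypothetical rate-correction term (N_A - N_B) / N
+    rate_term = (30.0 - 28.0) / n
+
+    # Information gain per earthquake and sample standard deviation
+    igpe = diffs.mean() - rate_term
+    s = diffs.std(ddof=1)
+
+    # 95% confidence interval from the t distribution with n-1 dof
+    half_width = t.ppf(0.975, df=n - 1) * s / np.sqrt(n)
+    print(igpe - half_width, igpe + half_width)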
+
+In the comparison plot above, the horizontal dashed line indicates the
+performance of model B and the vertical bar shows the confidence
+interval for the information gain :math:`I_N(A, B)` of model A relative
+to model B. In this case, the model with aftershocks performs
+statistically worse than the benchmark model. We note that this
+comparison is used for demonstration purposes only.
+
+Catalog-based forecast tests
+----------------------------
+
+Catalog-based forecast tests evaluate forecasts using simulated outputs
+in the form of synthetic earthquake catalogs, thus removing the need
+for the Poisson approximation and the simulation procedure used with
+grid-based forecasts. We know that observed seismicity is overdispersed
+with respect to a Poissonian model due to spatio-temporal clustering,
+and overdispersed models are more likely to be rejected by the original
+Poisson-based CSEP tests (Werner et al., 2011a). This modification of
+the testing framework therefore allows for a broader range of forecast
+models. The distribution of realizations is then compared with
+observations, similar to the grid-based case. These tests were
+developed by Savran et al. (2020), who applied them to test forecasts
+following the 2019 Ridgecrest earthquake in Southern California.
+
+In the following text, we show how catalog-based forecasts are defined.
+Again we begin by defining a region :math:`\boldsymbol{R}` as a
+function of some magnitude range :math:`\boldsymbol{M}`, spatial domain
+:math:`\boldsymbol{S}` and time period :math:`\boldsymbol{T}`,
+
+.. math:: \boldsymbol{R} = \boldsymbol{M} \times \boldsymbol{S} \times \boldsymbol{T}.
+
+An earthquake :math:`e` can be described by a magnitude :math:`m_i` at
+some location :math:`s_j` and time :math:`t_k`. A catalog is simply a
+collection of earthquakes, thus the observed catalog can be written as
+
+.. math:: \Omega = \big\{e_n \big| n= 1...N_{obs}; e_n \in \boldsymbol{R} \big\},
+
+and a forecast is then specified as a collection of synthetic catalogs
+containing events :math:`\hat{e}_{nj}` in domain
+:math:`\boldsymbol{R}`, as
+
+.. math:: \boldsymbol{\Lambda} \equiv \Lambda_j = \{\hat{e}_{nj} | n = 1... N_j, j= 1...J ;\hat{e}_{nj} \in \boldsymbol{R} \}.
+
+That is, a forecast consists of :math:`J` simulated catalogs, each
+containing :math:`N_j` events described in time, space and magnitude,
+such that :math:`\hat{e}_{nj}` describes the :math:`n`\ th synthetic
+event in the :math:`j`\ th synthetic catalog :math:`\Lambda_j`.
+
+When using simulated forecasts in pyCSEP, we must first explicitly
+specify the forecast region by defining the spatial domain and
+magnitude regions as below. In effect, these are filters applied to the
+forecast and observations to retain only the events in
+:math:`\boldsymbol{R}`. The examples in this section are catalog-based
+forecast simulations for the Landers earthquake and aftershock sequence
+generated using UCERF3-ETAS (Field et al., 2017).
+
+.. code:: ipython3
+
+    # Define the start and end times of the forecasts
+    start_time = time_utils.strptime_to_utc_datetime("1992-06-28 11:57:35.0")
+    end_time = time_utils.strptime_to_utc_datetime("1992-07-28 11:57:35.0")
+
+    # Magnitude bins properties
+    min_mw = 4.95
+    max_mw = 8.95
+    dmw = 0.1
+
+    # Create space and magnitude regions.
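+    # (Descriptive note: magnitude_bins builds the discrete set of
+    # magnitude bins shared by the forecast and the observed catalog,
+    # here 4.95, 5.05, ..., 8.95, and create_space_magnitude_region
+    # combines these bins with the California RELM polygon.)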
+    magnitudes = regions.magnitude_bins(min_mw, max_mw, dmw)
+    region = regions.california_relm_region()
+    space_magnitude_region = regions.create_space_magnitude_region(
+        region,
+        magnitudes
+    )
+
+    # Load forecast
+    forecast = csep.load_catalog_forecast(
+        datasets.ucerf3_ascii_format_landers_fname,
+        start_time = start_time,
+        end_time = end_time,
+        region = space_magnitude_region,
+        apply_filters = True
+    )
+
+    # Compute expected rates
+    forecast.filters = [
+        f'origin_time >= {forecast.start_epoch}',
+        f'origin_time < {forecast.end_epoch}'
+    ]
+    _ = forecast.get_expected_rates(verbose=False)
+
+    # Obtain Comcat catalog and filter to region
+    comcat_catalog = csep.query_comcat(
+        start_time,
+        end_time,
+        min_magnitude=forecast.min_magnitude
+    )
+
+    # Filter observed catalog using the same region as the forecast
+    comcat_catalog = comcat_catalog.filter_spatial(forecast.region)
+
+
+.. parsed-literal::
+
+    Fetched ComCat catalog in 0.31937098503112793 seconds.
+
+    Downloaded catalog from ComCat with following parameters
+    Start Date: 1992-06-28 12:00:45+00:00
+    End Date: 1992-07-24 18:14:36.250000+00:00
+    Min Latitude: 33.901 and Max Latitude: 36.705
+    Min Longitude: -118.067 and Max Longitude: -116.285
+    Min Magnitude: 4.95
+    Found 19 events in the ComCat catalog.
+
+
+Number Test
+~~~~~~~~~~~
+
+Aim: As above, the number test aims to evaluate if the number of
+observed events is consistent with the forecast.
+
+Method: The observed statistic in this case is given by
+:math:`N_{obs} = |\Omega|`, which is simply the number of events in the
+observed catalog. To build the test distribution from the forecast, we
+simply count the number of events in each simulated catalog:
+
+.. math:: N_{j} = |\Lambda_j|; j = 1...J
+
+As in the gridded test above, we can then evaluate the probabilities of
+at least and at most :math:`N_{obs}` events, in this case using the
+empirical cumulative distribution function :math:`F_N`:
+
+.. math:: \delta_1 = P(N_j \geq N_{obs}) = 1 - F_N(N_{obs}-1)
+
+and
+
+.. math:: \delta_2 = P(N_j \leq N_{obs}) = F_N(N_{obs})
+
+Implementation in pyCSEP
+
+.. code:: ipython3
+
+    number_test_result = catalog_evaluations.number_test(
+        forecast,
+        comcat_catalog,
+        verbose=False
+    )
+    ax = number_test_result.plot()
+
+
+
+.. image:: static/output_27_0.png
+
+
+Plotting the number test result of a simulated catalog forecast
+displays a histogram of the number of events :math:`\hat{N}_j` in each
+simulated catalog :math:`j`, which makes up the test distribution. The
+test statistic is shown by the dashed line; in this case it is the
+number of observed events in the catalog, :math:`N_{obs}`.
+
+Magnitude Test
+~~~~~~~~~~~~~~
+
+Aim: The magnitude test aims to test the consistency of the observed
+frequency-magnitude distribution with that of the simulated catalogs
+that make up the forecast.
+
+Method: The catalog-based magnitude test is implemented quite
+differently from the grid-based equivalent. We first define the union
+catalog :math:`\Lambda_U` as the union of all simulated catalogs in the
+forecast. Formally:
+
+.. math:: \Lambda_U = \lambda_1 \cup \lambda_2 \cup ... \cup \lambda_J
+
+so that the union catalog contains all events across all simulated
+catalogs, for a total of
+:math:`N_U = \sum_{j=1}^{J} \big|\lambda_j\big|` events.
+We then compute the following histograms, discretised to the magnitude
+range and magnitude step size (specified earlier for pyCSEP):
+
+1. the histogram of the union catalog magnitudes
+   :math:`\Lambda_U^{(m)}`;
+2. the histograms of magnitudes in each of the individual simulated
+   catalogs, :math:`\lambda_j^{(m)}`;
+3. the histogram of the observed catalog magnitudes,
+   :math:`\Omega^{(m)}`.
+
+The histograms are normalized so that the total number of events across
+all bins is equal to the observed number. The observed statistic is
+then calculated as the sum of squared logarithmic residuals between the
+normalised observed magnitude histogram and the union histogram. This
+statistic is related to the Cramér-von Mises statistic.
+
+.. math:: d_{obs}= \sum_{k}\Bigg(\log\Bigg[\frac{N_{obs}}{N_U} \Lambda_U^{(m)}(k) + 1\Bigg]- \log\Big[\Omega^{(m)}(k) + 1\Big]\Bigg)^2
+
+where :math:`\Lambda_U^{(m)}(k)` and :math:`\Omega^{(m)}(k)`
+represent the count in the :math:`k`\ th bin of the magnitude-frequency
+distribution in the union and observed catalogs, respectively. We add
+unity to each bin to avoid :math:`\log(0)`. We then build the test
+distribution from the catalogs in :math:`\boldsymbol{\Lambda}`:
+
+.. math:: D_j = \sum_{k}\Bigg(\log\Bigg[\frac{N_{obs}}{N_U} \Lambda_U^{(m)}(k) + 1\Bigg]- \log\Bigg[\frac{N_{obs}}{N_j}\Lambda_j^{(m)}(k) + 1\Bigg]\Bigg)^2; j= 1...J
+
+where :math:`\Lambda_j^{(m)}(k)` represents the count in the
+:math:`k`\ th bin of the magnitude-frequency distribution of the
+:math:`j`\ th catalog.
+
+The quantile score can then be calculated using the empirical CDF, such
+that
+
+.. math:: \gamma_m = F_D(d_{obs})= P(D_j \leq d_{obs})
+
+Implementation in pyCSEP
+
+Hopefully you now see why it was necessary to specify our magnitude
+range explicitly when we set up the catalog-type testing - we need to
+make sure the magnitudes are properly discretised for the model we want
+to test.
+
+.. code:: ipython3
+
+    magnitude_test_result = catalog_evaluations.magnitude_test(
+        forecast,
+        comcat_catalog, verbose=False
+    )
+    ax = magnitude_test_result.plot(plot_args={'xy': (0.6,0.7)})
+
+
+
+.. image:: static/output_30_0.png
+
+
+The histogram shows the resulting test distribution, with :math:`D_j`
+calculated for each simulated catalog as described in the method above.
+The test statistic :math:`\omega = d_{obs}` is shown with the dashed
+vertical line. The quantile score for this forecast is
+:math:`\gamma = 0.66`.
+
+Pseudo-likelihood test
+~~~~~~~~~~~~~~~~~~~~~~
+
+Aim: The pseudo-likelihood test aims to evaluate the likelihood of a
+forecast given an observed catalog.
+
+Method: The pseudo-likelihood test has similar aims to the grid-based
+likelihood test above, but its implementation differs in a few
+significant ways. Firstly, it does not compute an actual likelihood
+(hence the name pseudo-likelihood). Secondly, instead of aggregating
+over cells as in the grid-based case, it aggregates likelihood scores
+over target events (so a likelihood score per target event, rather
+than per grid cell). The most important difference, however, is that
+the pseudo-likelihood tests do not use a Poisson likelihood.
+
+The pseudo-likelihood approach is based on the continuous point-process
+likelihood function. A continuous marked space-time point process can
+be specified by a conditional intensity function
+:math:`\lambda(\boldsymbol{e}|H_t)`, in which :math:`H_t` describes the
+history of the process in time. The log-likelihood function for any
+point process in :math:`\boldsymbol{R}` is given by
+
+.. math:: L = \sum_{i=1}^{N} \log \lambda(e_i|H_t) - \int_{\boldsymbol{R}}\lambda(\boldsymbol{e}|H_t)d\boldsymbol{R}
+
+Not all models will have an explicit likelihood function, so instead we
+approximate the expectation of :math:`\lambda(\boldsymbol{e}|H_t)`
+using the forecast catalogs. The approximate rate density is defined as
+the conditional expectation, given a discretised region :math:`R_d`, of
+the continuous rate
+
+.. math:: \hat{\lambda}(\boldsymbol{e}|H_t) = E\big[\lambda(\boldsymbol{e}|H_t)|R_d\big]
+
+We still regard the model as continuous, but the rate density is
+approximated within a single cell. This is analogous to the gridded
+approach, where we count the number of events in discrete cells. The
+pseudo-log-likelihood is then
+
+.. math:: \hat{L} = \sum_{i=1}^N \log \hat{\lambda}(e_i|H_t) - \int_R \hat{\lambda}(\boldsymbol{e}|H_t) dR
+
+and we can write the approximate spatial rate density as
+
+.. math:: \hat{\lambda}_s(\boldsymbol{e}|H_t) = \sum_M \hat{\lambda}(\boldsymbol{e}|H_t),
+
+where we take the sum over all magnitude bins :math:`M`. We can
+calculate the observed pseudo-likelihood as
+
+.. math:: \hat{L}_{obs} = \sum_{i=1}^{N_{obs}} \log \hat{\lambda}_s(k_i) - \bar{N},
+
+where :math:`\hat{\lambda}_s(k_i)` is the approximate rate density in
+the :math:`k`\ th spatial cell and :math:`k_i` denotes the spatial cell
+in which the :math:`i`\ th event occurs. :math:`\bar{N}` is the
+expected number of events in :math:`R_d`. Similarly, we calculate the
+test distribution as
+
+.. math:: \hat{L}_{j} = \Bigg[\sum_{i=1}^{N_{j}} \log\hat{\lambda}_s(k_{ij}) - \bar{N}\Bigg]; j = 1...J,
+
+where :math:`\hat{\lambda}_s(k_{ij})` describes the approximate rate
+density of the :math:`i`\ th event in the :math:`j`\ th catalog. We can
+then calculate the quantile score as
+
+.. math:: \gamma_L = F_L(\hat{L}_{obs})= P(\hat{L}_j \leq \hat{L}_{obs}).
+
+Implementation in pyCSEP
+
+.. code:: ipython3
+
+    pseudolikelihood_test_result = catalog_evaluations.pseudolikelihood_test(
+        forecast,
+        comcat_catalog,
+        verbose=False
+    )
+    ax = pseudolikelihood_test_result.plot()
+
+
+
+.. image:: static/output_33_0.png
+
+
+The histogram shows the test distribution of pseudo-likelihoods
+calculated as above for each catalog :math:`j`. The dashed vertical
+line shows the observed statistic :math:`\hat{L}_{obs} = \omega`. It is
+clear that the observed statistic falls within the critical region of
+the test distribution, as reflected in the quantile score of
+:math:`\gamma_L = 0.02`.
+
+Spatial test
+~~~~~~~~~~~~
+
+Aim: The spatial test again aims to isolate the spatial component of
+the forecast and test the consistency of spatial rates with observed
+events.
+
+Method: We perform the spatial test in the catalog-based approach in a
+similar way to the grid-based spatial test: by normalising the
+approximate rate density. In this case, we use the normalisation
+:math:`\hat{\lambda}_s^* = \hat{\lambda}_s \big/ \sum_{R} \hat{\lambda}_s`.
+The observed spatial test statistic is then calculated as
+
+.. math:: S_{obs} = \Bigg[\sum_{i=1}^{N_{obs}} \log \hat{\lambda}_s^*(k_i)\Bigg]N_{obs}^{-1}
+
+in which :math:`\hat{\lambda}_s^*(k_i)` is the normalised approximate
+rate density in the :math:`k`\ th cell, corresponding to the
+:math:`i`\ th event in the observed catalog :math:`\Omega`. Similarly,
+we define the test distribution using
+
+.. math:: S_{j} = \bigg[\sum_{i=1}^{N_{j}} \log \hat{\lambda}_s^*(k_{ij})\bigg]N_{j}^{-1}; j= 1...J
+
+for each catalog :math:`j`. 
Finally, the quantile score for the
+spatial test is determined by once again comparing the observed
+statistic with the test distribution:
+
+.. math:: \gamma_s = F_s(\hat{S}_{obs}) = P (\hat{S}_j \leq \hat{S}_{obs})
+
+Implementation in pyCSEP
+
+.. code:: ipython3
+
+    spatial_test_result = catalog_evaluations.spatial_test(
+        forecast,
+        comcat_catalog,
+        verbose=False
+    )
+    ax = spatial_test_result.plot()
+
+
+
+.. image:: static/output_36_0.png
+
+
+The histogram shows the test distribution of normalised
+pseudo-likelihoods computed for each simulated catalog :math:`j`. The
+dashed vertical line shows the observed test statistic
+:math:`s_{obs} = \omega = -5.88`, which is clearly within the test
+distribution. The quantile score :math:`\gamma_s = 0.36` is also
+printed on the figure by default.
+
+References
+----------
+
+Field, E. H., K. R. Milner, J. L. Hardebeck, M. T. Page, N. J. van der
+Elst, T. H. Jordan, A. J. Michael, B. E. Shaw, and M. J. Werner (2017).
+A spatiotemporal clustering model for the third Uniform California
+Earthquake Rupture Forecast (UCERF3-ETAS): Toward an operational
+earthquake forecast, Bull. Seismol. Soc. Am. 107, 1049–1081.
+
+Harte, D., and D. Vere-Jones (2005). The entropy score and its uses in
+earthquake forecasting, Pure Appl. Geophys. 162(6-7), 1229–1253,
+doi:10.1007/s00024-004-2667-2.
+
+Helmstetter, A., Y. Y. Kagan, and D. D. Jackson (2006). Comparison of
+short-term and time-independent earthquake forecast models for southern
+California, Bulletin of the Seismological Society of America 96,
+90–106.
+
+Imoto, M., and N. Hurukawa (2006). Assessing potential seismic activity
+in Vrancea, Romania, using a stress-release model, Earth Planets Space
+58, 1511–1514.
+
+Imoto, M., and D. A. Rhoades (2010). Seismicity models of moderate
+earthquakes in Kanto, Japan, utilizing multiple predictive parameters,
+Pure Appl. Geophys. 167(6-7), 831–843, doi:10.1007/s00024-010-0066-4.
+
+Rhoades, D. A., D. Schorlemmer, M. C. Gerstenberger, A. Christophersen,
+J. D. Zechar, and M. Imoto (2011). Efficient testing of earthquake
+forecasting models, Acta Geophysica 59.
+
+Savran, W., M. J. Werner, W. Marzocchi, D. Rhoades, D. D. Jackson,
+K. R. Milner, E. H. Field, and A. J. Michael (2020). Pseudoprospective
+evaluation of UCERF3-ETAS forecasts during the 2019 Ridgecrest
+Sequence, Bulletin of the Seismological Society of America.
+
+Schorlemmer, D., and M. C. Gerstenberger (2007). RELM testing center,
+Seismol. Res. Lett. 78, 30–36.
+
+Schorlemmer, D., M. C. Gerstenberger, S. Wiemer, D. D. Jackson, and
+D. A. Rhoades (2007). Earthquake likelihood model testing, Seismol.
+Res. Lett. 78, 17–29.
+
+Schorlemmer, D., A. Christophersen, A. Rovida, F. Mele, M. Stucchi, and
+W. Marzocchi (2010a). Setting up an earthquake forecast experiment in
+Italy, Annals of Geophysics 53(3).
+
+Schorlemmer, D., J. D. Zechar, M. J. Werner, E. H. Field, D. D.
+Jackson, and T. H. Jordan (2010b). First results of the Regional
+Earthquake Likelihood Models experiment, Pure Appl. Geophys. 167(8/9),
+doi:10.1007/s00024-010-0081-5.
+
+Taroni, M., W. Marzocchi, D. Schorlemmer, M. J. Werner, S. Wiemer,
+J. D. Zechar, L. Heiniger, and F. Euchner (2018). Prospective CSEP
+evaluation of 1-day, 3-month, and 5-yr earthquake forecasts for Italy,
+Seismological Research Letters 89(4), 1251–1261,
+doi:10.1785/0220180031.
+
+Werner, M. J., A. Helmstetter, D. D. Jackson, and Y. Y. Kagan (2011a).
+High-Resolution Long-Term and Short-Term Earthquake Forecasts for
+California, Bulletin of the Seismological Society of America 101,
+1630–1648.
+
+Werner, M. J., J. D. Zechar, W. Marzocchi, and S. Wiemer (2011b).
+Retrospective evaluation of the five-year and ten-year CSEP-Italy
+earthquake forecasts, Annals of Geophysics 53(3), 11–30,
+doi:10.4401/ag-4840.
+
+Zechar, J. D. (2011). Evaluating earthquake predictions and earthquake
+forecasts: a guide for students and new researchers, CORSSA
+(http://www.corssa.org/en/articles/theme_6/).
+
+Zechar, J. D., M. C. Gerstenberger, and D. A. Rhoades (2010a).
+Likelihood-based tests for evaluating space-rate-magnitude forecasts,
+Bull. Seis. Soc. Am. 100(3), 1184–1195, doi:10.1785/0120090192.
+
+Zechar, J. D., D. Schorlemmer, M. Liukis, J. Yu, F. Euchner, P. J.
+Maechling, and T. H. Jordan (2010b). The Collaboratory for the Study of
+Earthquake Predictability perspective on computational earthquake
+science, Concurr. Comp-Pract. E., doi:10.1002/cpe.1519.
diff --git a/_sources/index.rst.txt b/_sources/index.rst.txt
new file mode 100644
index 00000000..ea21cb80
--- /dev/null
+++ b/_sources/index.rst.txt
@@ -0,0 +1,99 @@
+pyCSEP: Tools for Earthquake Forecast Developers
+================================================
+
+.. toctree::
+   :maxdepth: 2
+   :hidden:
+   :caption: Getting Started
+
+   getting_started/installing
+   getting_started/core_concepts
+   getting_started/theory
+
+.. toctree::
+   :maxdepth: 2
+   :hidden:
+   :caption: Tutorials and Examples
+
+   tutorials/catalog_filtering.rst
+   tutorials/plot_gridded_forecast.rst
+   tutorials/gridded_forecast_evaluation.rst
+   tutorials/quadtree_gridded_forecast_evaluation.rst
+   tutorials/working_with_catalog_forecasts.rst
+   tutorials/catalog_forecast_evaluation.rst
+   tutorials/plot_customizations.rst
+
+.. toctree::
+   :maxdepth: 2
+   :hidden:
+   :caption: User Guide
+
+   concepts/catalogs
+   concepts/forecasts
+   concepts/evaluations
+   concepts/regions
+
+.. toctree::
+   :maxdepth: 2
+   :hidden:
+   :caption: Help & Reference
+
+   reference/glossary
+   reference/publications
+   reference/roadmap
+   reference/developer_notes
+   reference/api_reference
+
+
+*PyCSEP tools help earthquake forecast model developers evaluate their forecasts and provide the machinery to implement
+experiments within CSEP testing centers.*
+
+About
+-----
+The Collaboratory for the Study of Earthquake Predictability (CSEP) supports an international effort to conduct earthquake
+forecasting experiments. CSEP supports these activities by developing the cyberinfrastructure necessary to run earthquake
+forecasting experiments, including the statistical framework required to evaluate probabilistic earthquake forecasts.
+
+PyCSEP is a Python library that provides tools for (1) evaluating probabilistic earthquake forecasts, (2) working
+with earthquake catalogs in this context, and (3) creating visualizations. Official experiments that run in CSEP testing centers
+will be implemented using the code provided by this package.
+
+Project Goals
+-------------
+1. Help modelers become familiar with formats, procedures, and evaluations used in CSEP Testing Centers.
+2. Provide vetted software for model developers to use in their research.
+3. Provide quantitative and visual tools to assess earthquake forecast quality.
+4. Promote open-science ideas by ensuring transparency and availability of scientific code and results.
+5. Curate benchmark models and data sets for modelers to conduct retrospective experiments of their forecasts. 
+
+Contributing
+------------
+We highly encourage users of this package to get involved in the development process. Any contribution is helpful, even
+suggestions on how to improve the package, or additions to the documentation (those are particularly welcome!). Check out
+the `Contribution guidelines `_ for a step-by-step guide on how to contribute to the project. If there are
+any questions, please contact us!
+
+Contacting Us
+-------------
+* For general discussion and bug reports please post issues on the `pyCSEP GitHub `_.
+* This project adheres to a `Code of Conduct `_. By participating you agree to follow its terms.
+
+List of Contributors
+--------------------
+* Fabio Silva, Southern California Earthquake Center
+* Philip Maechling, Southern California Earthquake Center
+* William Savran, University of Nevada, Reno
+* Pablo Iturrieta, GFZ Potsdam
+* Khawaja Asim, GFZ Potsdam
+* Han Bao, University of California, Los Angeles
+* Kirsty Bayliss, University of Edinburgh
+* Jose Bayona, University of Bristol
+* Thomas Beutin, GFZ Potsdam
+* Marcus Hermann, University of Naples 'Federico II'
+* Edric Pauk, Southern California Earthquake Center
+* Max Werner, University of Bristol
+* Danijel Schorlemmer, GFZ Potsdam
+
+
+
+
diff --git a/_sources/reference/api_reference.rst.txt b/_sources/reference/api_reference.rst.txt
new file mode 100644
index 00000000..6e3ce176
--- /dev/null
+++ b/_sources/reference/api_reference.rst.txt
@@ -0,0 +1,343 @@
+API Reference
+=============
+
+This contains a reference document for the PyCSEP API.
+
+.. automodule:: csep
+
+.. :currentmodule:: csep
+
+Loading catalogs and forecasts
+------------------------------
+
+.. autosummary::
+   :toctree: generated
+
+   load_stochastic_event_sets
+   load_catalog
+   query_comcat
+   query_bsi
+   load_gridded_forecast
+   load_catalog_forecast
+
+Catalogs
+--------
+
+.. :currentmodule:: csep.core.catalogs
+
+.. automodule:: csep.core.catalogs
+
+
+Catalog operations are defined using the :class:`AbstractBaseCatalog` class.
+
+.. autosummary::
+   :toctree: generated
+
+   AbstractBaseCatalog
+   CSEPCatalog
+   UCERF3Catalog
+
+Catalog operations
+------------------
+
+Input and output operations for catalogs:
+
+.. autosummary::
+   :toctree: generated
+
+   CSEPCatalog.to_dict
+   CSEPCatalog.from_dict
+   CSEPCatalog.to_dataframe
+   CSEPCatalog.from_dataframe
+   CSEPCatalog.write_json
+   CSEPCatalog.load_json
+   CSEPCatalog.load_catalog
+   CSEPCatalog.write_ascii
+   CSEPCatalog.load_ascii_catalogs
+   CSEPCatalog.get_csep_format
+   CSEPCatalog.plot
+
+Accessing event information:
+
+.. autosummary::
+   :toctree: generated
+
+   CSEPCatalog.event_count
+   CSEPCatalog.get_magnitudes
+   CSEPCatalog.get_longitudes
+   CSEPCatalog.get_latitudes
+   CSEPCatalog.get_depths
+   CSEPCatalog.get_epoch_times
+   CSEPCatalog.get_datetimes
+   CSEPCatalog.get_cumulative_number_of_events
+
+Filtering and binning:
+
+.. autosummary::
+   :toctree: generated
+
+   CSEPCatalog.filter
+   CSEPCatalog.filter_spatial
+   CSEPCatalog.apply_mct
+   CSEPCatalog.spatial_counts
+   CSEPCatalog.magnitude_counts
+   CSEPCatalog.spatial_magnitude_counts
+
+Other utilities:
+
+.. autosummary::
+   :toctree: generated
+
+   CSEPCatalog.update_catalog_stats
+   CSEPCatalog.length_in_seconds
+   CSEPCatalog.get_bvalue
+
+.. currentmodule:: csep.core.forecasts
+.. automodule:: csep.core.forecasts
+
+Forecasts
+---------
+
+PyCSEP provides classes to interact with catalog- and grid-based forecasts.
+
+.. autosummary::
+   :toctree: generated
+
+   GriddedForecast
+   CatalogForecast
+
+Gridded forecast methods:
+
+.. 
autosummary:: + :toctree: generated + + GriddedForecast.data + GriddedForecast.event_count + GriddedForecast.sum + GriddedForecast.magnitudes + GriddedForecast.min_magnitude + GriddedForecast.magnitude_counts + GriddedForecast.spatial_counts + GriddedForecast.get_latitudes + GriddedForecast.get_longitudes + GriddedForecast.get_magnitudes + GriddedForecast.get_index_of + GriddedForecast.get_magnitude_index + GriddedForecast.load_ascii + GriddedForecast.from_custom + GriddedForecast.get_rates + GriddedForecast.target_event_rates + GriddedForecast.scale_to_test_date + GriddedForecast.plot + +Catalog forecast methods: + +.. autosummary:: + :toctree: generated + + CatalogForecast.magnitudes + CatalogForecast.min_magnitude + CatalogForecast.spatial_counts + CatalogForecast.magnitude_counts + CatalogForecast.get_expected_rates + CatalogForecast.get_dataframe + CatalogForecast.write_ascii + CatalogForecast.load_ascii + +.. automodule:: csep.core.catalog_evaluations + +Evaluations +----------- + +PyCSEP provides implementations of evaluations for both catalog-based forecasts and grid-based forecasts. + +Catalog-based forecast evaluations: + +.. autosummary:: + :toctree: generated + + number_test + spatial_test + magnitude_test + pseudolikelihood_test + calibration_test + +.. automodule:: csep.core.poisson_evaluations + +Grid-based forecast evaluations: + +.. autosummary:: + :toctree: generated + + number_test + magnitude_test + spatial_test + likelihood_test + conditional_likelihood_test + paired_t_test + w_test + +.. automodule:: csep.core.regions + +Regions +------- + +PyCSEP includes commonly used CSEP testing regions and classes that facilitate working with gridded data sets. This +module is early in development and help is welcome here! + +Region class(es): + +.. autosummary:: + :toctree: generated + + CartesianGrid2D + +Testing regions: + +.. autosummary:: + :toctree: generated + + california_relm_region + italy_csep_region + global_region + +Region utilities: + +.. autosummary:: + :toctree: generated + + magnitude_bins + create_space_magnitude_region + parse_csep_template + increase_grid_resolution + masked_region + generate_aftershock_region + california_relm_region + + +Plotting +-------- + +.. automodule:: csep.utils.plots + +General plotting: + +.. autosummary:: + :toctree: generated + + plot_histogram + plot_ecdf + plot_basemap + plot_spatial_dataset + add_labels_for_publication + +Plotting from catalogs: + +.. autosummary:: + :toctree: generated + + plot_magnitude_versus_time + plot_catalog + +Plotting stochastic event sets and evaluations: + +.. autosummary:: + :toctree: generated + + plot_cumulative_events_versus_time + plot_magnitude_histogram + plot_number_test + plot_magnitude_test + plot_distribution_test + plot_likelihood_test + plot_spatial_test + plot_calibration_test + +Plotting gridded forecasts and evaluations: + +.. autosummary:: + :toctree: generated + + plot_spatial_dataset + plot_comparison_test + plot_poisson_consistency_test + +.. automodule:: csep.utils.time_utils + +Time Utilities +-------------- + +.. autosummary:: + :toctree: generated + + epoch_time_to_utc_datetime + datetime_to_utc_epoch + millis_to_days + days_to_millis + strptime_to_utc_epoch + timedelta_from_years + strptime_to_utc_datetime + utc_now_datetime + utc_now_epoch + create_utc_datetime + decimal_year + +.. automodule:: csep.utils.comcat + +Comcat Access +------------- + +We integrated the code developed by Mike Hearne and others at the USGS to reduce the dependencies of this package. 
We plan
+to move this to an external and optional dependency in the future.
+
+.. autosummary::
+   :toctree: generated
+
+   search
+   get_event_by_id
+
+.. automodule:: csep.utils.calc
+
+Calculation Utilities
+---------------------
+
+.. autosummary::
+   :toctree: generated
+
+   nearest_index
+   find_nearest
+   func_inverse
+   discretize
+   bin1d_vec
+
+.. automodule:: csep.utils.stats
+
+Statistics Utilities
+--------------------
+
+.. autosummary::
+   :toctree: generated
+
+   sup_dist
+   sup_dist_na
+   cumulative_square_diff
+   binned_ecdf
+   ecdf
+   greater_equal_ecdf
+   less_equal_ecdf
+   min_or_none
+   max_or_none
+   get_quantiles
+   poisson_log_likelihood
+   poisson_joint_log_likelihood_ndarray
+   poisson_inverse_cdf
+
+.. automodule:: csep.utils.basic_types
+
+Basic types
+-----------
+
+.. autosummary::
+   :toctree: generated
+
+   AdaptiveHistogram
\ No newline at end of file
diff --git a/_sources/reference/developer_notes.rst.txt b/_sources/reference/developer_notes.rst.txt
new file mode 100644
index 00000000..23089486
--- /dev/null
+++ b/_sources/reference/developer_notes.rst.txt
@@ -0,0 +1,47 @@
+Developer Notes
+===============
+
+Last updated: 25 January 2022
+
+Creating a new release of pyCSEP
+--------------------------------
+
+These are the steps required to create a new release of pyCSEP. This requires a combination of updates to the repository
+and GitHub. You will need to build the wheels for distribution on PyPI and upload them to GitHub to issue a release.
+The final step involves uploading the tar-ball of the release to PyPI. CI tools provided by `conda-forge` will automatically
+bump the version on `conda-forge`. Note: permissions are required to push new versions to PyPI.
+
+1. Code changes
+***************
+1. Bump the version number in `_version.py `_
+2. Update `codemeta.json `_
+3. Update `CHANGELOG.md `_. Include links to GitHub pull requests if possible.
+4. Update `CREDITS.md `_ if required.
+5. Update the version in `conf.py `_.
+6. Issue a pull request that contains these changes.
+7. Merge the pull request once it has been reviewed and the versions are correct.
+
+2. Creating source distribution
+*******************************
+
+Issue these commands from the top-level directory of the project::
+
+    python setup.py check
+
+If that executes with no warnings or failures, build the source distribution using the command::
+
+    python setup.py sdist
+
+This creates a folder called `dist` that contains a file called `pycsep-X.Y.Z.tar.gz`. This is the distribution
+that will be uploaded to `PyPI`, `conda-forge`, and GitHub.
+
+Upload to PyPI using `twine`. This requires permissions to push to the PyPI account::
+
+    twine upload dist/pycsep-X.Y.Z.tar.gz
+
+3. Create release on GitHub
+***************************
+1. Create a new `release `_ on GitHub. This can be saved as a draft until it's ready.
+2. Copy the new release information from `CHANGELOG.md `_.
+3. Upload the tar-ball created from `setup.py`.
+4. Publish the release.
\ No newline at end of file
diff --git a/_sources/reference/generated/csep.core.catalog_evaluations.calibration_test.rst.txt b/_sources/reference/generated/csep.core.catalog_evaluations.calibration_test.rst.txt
new file mode 100644
index 00000000..a06bc11c
--- /dev/null
+++ b/_sources/reference/generated/csep.core.catalog_evaluations.calibration_test.rst.txt
@@ -0,0 +1,6 @@
+csep.core.catalog\_evaluations.calibration\_test
+================================================
+
+.. currentmodule:: csep.core.catalog_evaluations
+
+.. 
autofunction:: calibration_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalog_evaluations.magnitude_test.rst.txt b/_sources/reference/generated/csep.core.catalog_evaluations.magnitude_test.rst.txt new file mode 100644 index 00000000..512c6b78 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalog_evaluations.magnitude_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalog\_evaluations.magnitude\_test +============================================== + +.. currentmodule:: csep.core.catalog_evaluations + +.. autofunction:: magnitude_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalog_evaluations.number_test.rst.txt b/_sources/reference/generated/csep.core.catalog_evaluations.number_test.rst.txt new file mode 100644 index 00000000..d26f7ecf --- /dev/null +++ b/_sources/reference/generated/csep.core.catalog_evaluations.number_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalog\_evaluations.number\_test +=========================================== + +.. currentmodule:: csep.core.catalog_evaluations + +.. autofunction:: number_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalog_evaluations.pseudolikelihood_test.rst.txt b/_sources/reference/generated/csep.core.catalog_evaluations.pseudolikelihood_test.rst.txt new file mode 100644 index 00000000..ea0a4e47 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalog_evaluations.pseudolikelihood_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalog\_evaluations.pseudolikelihood\_test +===================================================== + +.. currentmodule:: csep.core.catalog_evaluations + +.. autofunction:: pseudolikelihood_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalog_evaluations.spatial_test.rst.txt b/_sources/reference/generated/csep.core.catalog_evaluations.spatial_test.rst.txt new file mode 100644 index 00000000..3a2e466f --- /dev/null +++ b/_sources/reference/generated/csep.core.catalog_evaluations.spatial_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalog\_evaluations.spatial\_test +============================================ + +.. currentmodule:: csep.core.catalog_evaluations + +.. autofunction:: spatial_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.AbstractBaseCatalog.rst.txt b/_sources/reference/generated/csep.core.catalogs.AbstractBaseCatalog.rst.txt new file mode 100644 index 00000000..077fa0a5 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.AbstractBaseCatalog.rst.txt @@ -0,0 +1,65 @@ +csep.core.catalogs.AbstractBaseCatalog +====================================== + +.. currentmodule:: csep.core.catalogs + +.. autoclass:: AbstractBaseCatalog + + + .. automethod:: __init__ + + + .. rubric:: Methods + + .. 
autosummary:: + + ~AbstractBaseCatalog.__init__ + ~AbstractBaseCatalog.apply_mct + ~AbstractBaseCatalog.b_positive + ~AbstractBaseCatalog.filter + ~AbstractBaseCatalog.filter_spatial + ~AbstractBaseCatalog.from_dataframe + ~AbstractBaseCatalog.from_dict + ~AbstractBaseCatalog.get_bbox + ~AbstractBaseCatalog.get_bvalue + ~AbstractBaseCatalog.get_csep_format + ~AbstractBaseCatalog.get_cumulative_number_of_events + ~AbstractBaseCatalog.get_datetimes + ~AbstractBaseCatalog.get_depths + ~AbstractBaseCatalog.get_epoch_times + ~AbstractBaseCatalog.get_event_ids + ~AbstractBaseCatalog.get_latitudes + ~AbstractBaseCatalog.get_longitudes + ~AbstractBaseCatalog.get_mag_idx + ~AbstractBaseCatalog.get_magnitudes + ~AbstractBaseCatalog.get_number_of_events + ~AbstractBaseCatalog.get_spatial_idx + ~AbstractBaseCatalog.length_in_seconds + ~AbstractBaseCatalog.load_catalog + ~AbstractBaseCatalog.load_json + ~AbstractBaseCatalog.magnitude_counts + ~AbstractBaseCatalog.plot + ~AbstractBaseCatalog.spatial_counts + ~AbstractBaseCatalog.spatial_event_probability + ~AbstractBaseCatalog.spatial_magnitude_counts + ~AbstractBaseCatalog.to_dataframe + ~AbstractBaseCatalog.to_dict + ~AbstractBaseCatalog.update_catalog_stats + ~AbstractBaseCatalog.write_ascii + ~AbstractBaseCatalog.write_json + + + + + + .. rubric:: Attributes + + .. autosummary:: + + ~AbstractBaseCatalog.catalog + ~AbstractBaseCatalog.data + ~AbstractBaseCatalog.dtype + ~AbstractBaseCatalog.event_count + ~AbstractBaseCatalog.log + + \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.apply_mct.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.apply_mct.rst.txt new file mode 100644 index 00000000..7ad2f110 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.apply_mct.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.apply\_mct +========================================= + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.apply_mct \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.event_count.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.event_count.rst.txt new file mode 100644 index 00000000..cb62b46c --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.event_count.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.event\_count +=========================================== + +.. currentmodule:: csep.core.catalogs + +.. autoproperty:: CSEPCatalog.event_count \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.filter.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.filter.rst.txt new file mode 100644 index 00000000..47bc71b4 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.filter.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.filter +===================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.filter \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.filter_spatial.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.filter_spatial.rst.txt new file mode 100644 index 00000000..9c2591c0 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.filter_spatial.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.filter\_spatial +============================================== + +.. 
currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.filter_spatial \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.from_dataframe.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.from_dataframe.rst.txt new file mode 100644 index 00000000..18267f02 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.from_dataframe.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.from\_dataframe +============================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.from_dataframe \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.from_dict.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.from_dict.rst.txt new file mode 100644 index 00000000..aeb23639 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.from_dict.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.from\_dict +========================================= + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.from_dict \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_bvalue.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_bvalue.rst.txt new file mode 100644 index 00000000..968636de --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_bvalue.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.get\_bvalue +========================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.get_bvalue \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_csep_format.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_csep_format.rst.txt new file mode 100644 index 00000000..5d3d2bec --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_csep_format.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.get\_csep\_format +================================================ + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.get_csep_format \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events.rst.txt new file mode 100644 index 00000000..5144eba1 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.get\_cumulative\_number\_of\_events +================================================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.get_cumulative_number_of_events \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_datetimes.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_datetimes.rst.txt new file mode 100644 index 00000000..44dd8e6f --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_datetimes.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.get\_datetimes +============================================= + +.. currentmodule:: csep.core.catalogs + +.. 
automethod:: CSEPCatalog.get_datetimes \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_depths.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_depths.rst.txt new file mode 100644 index 00000000..dd545848 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_depths.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.get\_depths +========================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.get_depths \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_epoch_times.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_epoch_times.rst.txt new file mode 100644 index 00000000..baf9494b --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_epoch_times.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.get\_epoch\_times +================================================ + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.get_epoch_times \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_latitudes.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_latitudes.rst.txt new file mode 100644 index 00000000..6e19e174 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_latitudes.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.get\_latitudes +============================================= + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.get_latitudes \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_longitudes.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_longitudes.rst.txt new file mode 100644 index 00000000..7ab6da60 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_longitudes.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.get\_longitudes +============================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.get_longitudes \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_magnitudes.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_magnitudes.rst.txt new file mode 100644 index 00000000..3a49fbe9 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.get_magnitudes.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.get\_magnitudes +============================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.get_magnitudes \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.length_in_seconds.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.length_in_seconds.rst.txt new file mode 100644 index 00000000..c2212b1f --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.length_in_seconds.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.length\_in\_seconds +================================================== + +.. currentmodule:: csep.core.catalogs + +.. 
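Each ``get_*`` accessor above returns a per-event numpy array, which makes ad-hoc analysis straightforward. A small sketch, assuming ``catalog`` is any loaded catalog object:

.. code-block:: python

    import numpy

    magnitudes = catalog.get_magnitudes()
    latitudes = catalog.get_latitudes()
    longitudes = catalog.get_longitudes()
    depths = catalog.get_depths()
    epoch_times = catalog.get_epoch_times()   # integer epoch times (milliseconds in pycsep's convention)
    print(numpy.max(magnitudes), numpy.min(depths))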
automethod:: CSEPCatalog.length_in_seconds \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.load_ascii_catalogs.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.load_ascii_catalogs.rst.txt new file mode 100644 index 00000000..a668ebe3 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.load_ascii_catalogs.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.load\_ascii\_catalogs +==================================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.load_ascii_catalogs \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.load_catalog.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.load_catalog.rst.txt new file mode 100644 index 00000000..607fb36a --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.load_catalog.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.load\_catalog +============================================ + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.load_catalog \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.load_json.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.load_json.rst.txt new file mode 100644 index 00000000..5fcf24dc --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.load_json.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.load\_json +========================================= + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.load_json \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.magnitude_counts.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.magnitude_counts.rst.txt new file mode 100644 index 00000000..276956d8 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.magnitude_counts.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.magnitude\_counts +================================================ + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.magnitude_counts \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.plot.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.plot.rst.txt new file mode 100644 index 00000000..22d3bec2 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.plot.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.plot +=================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.plot \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.rst.txt new file mode 100644 index 00000000..1a7e3feb --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.rst.txt @@ -0,0 +1,66 @@ +csep.core.catalogs.CSEPCatalog +============================== + +.. currentmodule:: csep.core.catalogs + +.. autoclass:: CSEPCatalog + + + .. automethod:: __init__ + + + .. rubric:: Methods + + .. 
autosummary:: + + ~CSEPCatalog.__init__ + ~CSEPCatalog.apply_mct + ~CSEPCatalog.b_positive + ~CSEPCatalog.filter + ~CSEPCatalog.filter_spatial + ~CSEPCatalog.from_dataframe + ~CSEPCatalog.from_dict + ~CSEPCatalog.get_bbox + ~CSEPCatalog.get_bvalue + ~CSEPCatalog.get_csep_format + ~CSEPCatalog.get_cumulative_number_of_events + ~CSEPCatalog.get_datetimes + ~CSEPCatalog.get_depths + ~CSEPCatalog.get_epoch_times + ~CSEPCatalog.get_event_ids + ~CSEPCatalog.get_latitudes + ~CSEPCatalog.get_longitudes + ~CSEPCatalog.get_mag_idx + ~CSEPCatalog.get_magnitudes + ~CSEPCatalog.get_number_of_events + ~CSEPCatalog.get_spatial_idx + ~CSEPCatalog.length_in_seconds + ~CSEPCatalog.load_ascii_catalogs + ~CSEPCatalog.load_catalog + ~CSEPCatalog.load_json + ~CSEPCatalog.magnitude_counts + ~CSEPCatalog.plot + ~CSEPCatalog.spatial_counts + ~CSEPCatalog.spatial_event_probability + ~CSEPCatalog.spatial_magnitude_counts + ~CSEPCatalog.to_dataframe + ~CSEPCatalog.to_dict + ~CSEPCatalog.update_catalog_stats + ~CSEPCatalog.write_ascii + ~CSEPCatalog.write_json + + + + + + .. rubric:: Attributes + + .. autosummary:: + + ~CSEPCatalog.catalog + ~CSEPCatalog.data + ~CSEPCatalog.dtype + ~CSEPCatalog.event_count + ~CSEPCatalog.log + + \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.spatial_counts.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.spatial_counts.rst.txt new file mode 100644 index 00000000..5cbf5dc8 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.spatial_counts.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.spatial\_counts +============================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.spatial_counts \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.spatial_magnitude_counts.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.spatial_magnitude_counts.rst.txt new file mode 100644 index 00000000..bd9ab904 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.spatial_magnitude_counts.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.spatial\_magnitude\_counts +========================================================= + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.spatial_magnitude_counts \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.to_dataframe.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.to_dataframe.rst.txt new file mode 100644 index 00000000..4d202a14 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.to_dataframe.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.to\_dataframe +============================================ + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.to_dataframe \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.to_dict.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.to_dict.rst.txt new file mode 100644 index 00000000..930c3e00 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.to_dict.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.to\_dict +======================================= + +.. currentmodule:: csep.core.catalogs + +.. 
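``to_dataframe`` and ``from_dataframe`` round-trip a catalog through pandas, which is handy for filters the string syntax does not cover. A sketch (the ``'magnitude'`` column name is assumed from the CSEP event format):

.. code-block:: python

    from csep.core.catalogs import CSEPCatalog

    df = catalog.to_dataframe()                 # one row per event
    large = df[df['magnitude'] >= 5.0]          # arbitrary pandas filtering
    large_catalog = CSEPCatalog.from_dataframe(large)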
automethod:: CSEPCatalog.to_dict \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.update_catalog_stats.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.update_catalog_stats.rst.txt new file mode 100644 index 00000000..cc1f618a --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.update_catalog_stats.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.update\_catalog\_stats +===================================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.update_catalog_stats \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.write_ascii.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.write_ascii.rst.txt new file mode 100644 index 00000000..56158bba --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.write_ascii.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.write\_ascii +=========================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.write_ascii \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.write_json.rst.txt b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.write_json.rst.txt new file mode 100644 index 00000000..ba56c059 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.CSEPCatalog.write_json.rst.txt @@ -0,0 +1,6 @@ +csep.core.catalogs.CSEPCatalog.write\_json +========================================== + +.. currentmodule:: csep.core.catalogs + +.. automethod:: CSEPCatalog.write_json \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.catalogs.UCERF3Catalog.rst.txt b/_sources/reference/generated/csep.core.catalogs.UCERF3Catalog.rst.txt new file mode 100644 index 00000000..58b162a1 --- /dev/null +++ b/_sources/reference/generated/csep.core.catalogs.UCERF3Catalog.rst.txt @@ -0,0 +1,67 @@ +csep.core.catalogs.UCERF3Catalog +================================ + +.. currentmodule:: csep.core.catalogs + +.. autoclass:: UCERF3Catalog + + + .. automethod:: __init__ + + + .. rubric:: Methods + + .. autosummary:: + + ~UCERF3Catalog.__init__ + ~UCERF3Catalog.apply_mct + ~UCERF3Catalog.b_positive + ~UCERF3Catalog.filter + ~UCERF3Catalog.filter_spatial + ~UCERF3Catalog.from_dataframe + ~UCERF3Catalog.from_dict + ~UCERF3Catalog.get_bbox + ~UCERF3Catalog.get_bvalue + ~UCERF3Catalog.get_csep_format + ~UCERF3Catalog.get_cumulative_number_of_events + ~UCERF3Catalog.get_datetimes + ~UCERF3Catalog.get_depths + ~UCERF3Catalog.get_epoch_times + ~UCERF3Catalog.get_event_ids + ~UCERF3Catalog.get_latitudes + ~UCERF3Catalog.get_longitudes + ~UCERF3Catalog.get_mag_idx + ~UCERF3Catalog.get_magnitudes + ~UCERF3Catalog.get_number_of_events + ~UCERF3Catalog.get_spatial_idx + ~UCERF3Catalog.length_in_seconds + ~UCERF3Catalog.load_catalog + ~UCERF3Catalog.load_catalogs + ~UCERF3Catalog.load_json + ~UCERF3Catalog.magnitude_counts + ~UCERF3Catalog.plot + ~UCERF3Catalog.spatial_counts + ~UCERF3Catalog.spatial_event_probability + ~UCERF3Catalog.spatial_magnitude_counts + ~UCERF3Catalog.to_dataframe + ~UCERF3Catalog.to_dict + ~UCERF3Catalog.update_catalog_stats + ~UCERF3Catalog.write_ascii + ~UCERF3Catalog.write_json + + + + + + .. rubric:: Attributes + + .. 
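``write_json`` and ``load_json`` serialize a catalog to disk and read it back (the file name below is a placeholder):

.. code-block:: python

    from csep.core.catalogs import CSEPCatalog

    catalog.write_json('evaluation_catalog.json')
    restored = CSEPCatalog.load_json('evaluation_catalog.json')
    assert restored.event_count == catalog.event_count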
autosummary:: + + ~UCERF3Catalog.catalog + ~UCERF3Catalog.data + ~UCERF3Catalog.dtype + ~UCERF3Catalog.event_count + ~UCERF3Catalog.header_dtype + ~UCERF3Catalog.log + + \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.CatalogForecast.get_dataframe.rst.txt b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.get_dataframe.rst.txt new file mode 100644 index 00000000..cfe7258a --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.get_dataframe.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.CatalogForecast.get\_dataframe +================================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: CatalogForecast.get_dataframe \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.CatalogForecast.get_expected_rates.rst.txt b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.get_expected_rates.rst.txt new file mode 100644 index 00000000..fab1ff80 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.get_expected_rates.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.CatalogForecast.get\_expected\_rates +======================================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: CatalogForecast.get_expected_rates \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.CatalogForecast.load_ascii.rst.txt b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.load_ascii.rst.txt new file mode 100644 index 00000000..5dabb8ab --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.load_ascii.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.CatalogForecast.load\_ascii +=============================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: CatalogForecast.load_ascii \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.CatalogForecast.magnitude_counts.rst.txt b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.magnitude_counts.rst.txt new file mode 100644 index 00000000..cfe24586 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.magnitude_counts.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.CatalogForecast.magnitude\_counts +===================================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: CatalogForecast.magnitude_counts \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.CatalogForecast.magnitudes.rst.txt b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.magnitudes.rst.txt new file mode 100644 index 00000000..d4392544 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.magnitudes.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.CatalogForecast.magnitudes +============================================== + +.. currentmodule:: csep.core.forecasts + +.. 
autoproperty:: CatalogForecast.magnitudes \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.CatalogForecast.min_magnitude.rst.txt b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.min_magnitude.rst.txt new file mode 100644 index 00000000..ecc11263 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.min_magnitude.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.CatalogForecast.min\_magnitude +================================================== + +.. currentmodule:: csep.core.forecasts + +.. autoproperty:: CatalogForecast.min_magnitude \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.CatalogForecast.rst.txt b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.rst.txt new file mode 100644 index 00000000..8b301049 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.rst.txt @@ -0,0 +1,40 @@ +csep.core.forecasts.CatalogForecast +=================================== + +.. currentmodule:: csep.core.forecasts + +.. autoclass:: CatalogForecast + + + .. automethod:: __init__ + + + .. rubric:: Methods + + .. autosummary:: + + ~CatalogForecast.__init__ + ~CatalogForecast.get_dataframe + ~CatalogForecast.get_event_counts + ~CatalogForecast.get_expected_rates + ~CatalogForecast.load_ascii + ~CatalogForecast.magnitude_counts + ~CatalogForecast.plot + ~CatalogForecast.spatial_counts + ~CatalogForecast.write_ascii + + + + + + .. rubric:: Attributes + + .. autosummary:: + + ~CatalogForecast.end_epoch + ~CatalogForecast.log + ~CatalogForecast.magnitudes + ~CatalogForecast.min_magnitude + ~CatalogForecast.start_epoch + + \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.CatalogForecast.spatial_counts.rst.txt b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.spatial_counts.rst.txt new file mode 100644 index 00000000..296811cf --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.spatial_counts.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.CatalogForecast.spatial\_counts +=================================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: CatalogForecast.spatial_counts \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.CatalogForecast.write_ascii.rst.txt b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.write_ascii.rst.txt new file mode 100644 index 00000000..071fd863 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.CatalogForecast.write_ascii.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.CatalogForecast.write\_ascii +================================================ + +.. currentmodule:: csep.core.forecasts + +.. automethod:: CatalogForecast.write_ascii \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.data.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.data.rst.txt new file mode 100644 index 00000000..1778a66f --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.data.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.data +======================================== + +.. currentmodule:: csep.core.forecasts + +.. 
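A ``CatalogForecast`` iterates over its stochastic event sets one synthetic catalog at a time, so per-catalog statistics can be computed without holding every catalog in memory. A sketch (the bundled UCERF3-ETAS file name from :mod:`csep.utils.datasets` is an assumption):

.. code-block:: python

    import csep
    from csep.utils import datasets

    forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname)
    event_counts = [catalog.event_count for catalog in forecast]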
autoproperty:: GriddedForecast.data \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.event_count.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.event_count.rst.txt new file mode 100644 index 00000000..e3da6d21 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.event_count.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.event\_count +================================================ + +.. currentmodule:: csep.core.forecasts + +.. autoproperty:: GriddedForecast.event_count \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.from_custom.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.from_custom.rst.txt new file mode 100644 index 00000000..7638ed59 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.from_custom.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.from\_custom +================================================ + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.from_custom \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_index_of.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_index_of.rst.txt new file mode 100644 index 00000000..f70d1fca --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_index_of.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.get\_index\_of +================================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.get_index_of \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_latitudes.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_latitudes.rst.txt new file mode 100644 index 00000000..faa16f05 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_latitudes.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.get\_latitudes +================================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.get_latitudes \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_longitudes.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_longitudes.rst.txt new file mode 100644 index 00000000..ef162326 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_longitudes.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.get\_longitudes +=================================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.get_longitudes \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_magnitude_index.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_magnitude_index.rst.txt new file mode 100644 index 00000000..672cb213 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_magnitude_index.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.get\_magnitude\_index +========================================================= + +.. currentmodule:: csep.core.forecasts + +.. 
automethod:: GriddedForecast.get_magnitude_index \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_magnitudes.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_magnitudes.rst.txt new file mode 100644 index 00000000..6ea50395 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_magnitudes.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.get\_magnitudes +=================================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.get_magnitudes \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_rates.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_rates.rst.txt new file mode 100644 index 00000000..23aefbc4 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.get_rates.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.get\_rates +============================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.get_rates \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.load_ascii.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.load_ascii.rst.txt new file mode 100644 index 00000000..f09e41fc --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.load_ascii.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.load\_ascii +=============================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.load_ascii \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.magnitude_counts.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.magnitude_counts.rst.txt new file mode 100644 index 00000000..16a6b97e --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.magnitude_counts.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.magnitude\_counts +===================================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.magnitude_counts \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.magnitudes.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.magnitudes.rst.txt new file mode 100644 index 00000000..8adef0b6 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.magnitudes.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.magnitudes +============================================== + +.. currentmodule:: csep.core.forecasts + +.. autoproperty:: GriddedForecast.magnitudes \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.min_magnitude.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.min_magnitude.rst.txt new file mode 100644 index 00000000..11da2057 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.min_magnitude.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.min\_magnitude +================================================== + +.. currentmodule:: csep.core.forecasts + +.. 
autoproperty:: GriddedForecast.min_magnitude \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.plot.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.plot.rst.txt new file mode 100644 index 00000000..be416b2b --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.plot.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.plot +======================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.plot \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.rst.txt new file mode 100644 index 00000000..408b70bb --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.rst.txt @@ -0,0 +1,53 @@ +csep.core.forecasts.GriddedForecast +=================================== + +.. currentmodule:: csep.core.forecasts + +.. autoclass:: GriddedForecast + + + .. automethod:: __init__ + + + .. rubric:: Methods + + .. autosummary:: + + ~GriddedForecast.__init__ + ~GriddedForecast.from_custom + ~GriddedForecast.from_dict + ~GriddedForecast.get_index_of + ~GriddedForecast.get_latitudes + ~GriddedForecast.get_longitudes + ~GriddedForecast.get_magnitude_index + ~GriddedForecast.get_magnitudes + ~GriddedForecast.get_rates + ~GriddedForecast.get_valid_midpoints + ~GriddedForecast.load_ascii + ~GriddedForecast.magnitude_counts + ~GriddedForecast.plot + ~GriddedForecast.scale + ~GriddedForecast.scale_to_test_date + ~GriddedForecast.spatial_counts + ~GriddedForecast.sum + ~GriddedForecast.target_event_rates + ~GriddedForecast.to_dict + + + + + + .. rubric:: Attributes + + .. autosummary:: + + ~GriddedForecast.data + ~GriddedForecast.event_count + ~GriddedForecast.log + ~GriddedForecast.magnitudes + ~GriddedForecast.min_magnitude + ~GriddedForecast.num_mag_bins + ~GriddedForecast.num_nodes + ~GriddedForecast.polygons + + \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.scale_to_test_date.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.scale_to_test_date.rst.txt new file mode 100644 index 00000000..89b95dde --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.scale_to_test_date.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.scale\_to\_test\_date +========================================================= + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.scale_to_test_date \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.spatial_counts.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.spatial_counts.rst.txt new file mode 100644 index 00000000..4532f0b0 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.spatial_counts.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.spatial\_counts +=================================================== + +.. currentmodule:: csep.core.forecasts + +.. 
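Because gridded time-independent forecasts can be rescaled to an arbitrary horizon, ``scale_to_test_date`` adjusts the expected rates to a new evaluation end date. A sketch (the dates and the ``helmstetter_mainshock_fname`` data set are illustrative):

.. code-block:: python

    import csep
    from csep.utils import datasets, time_utils

    start = time_utils.strptime_to_utc_datetime('2010-01-01 00:00:00.0')
    end = time_utils.strptime_to_utc_datetime('2015-01-01 00:00:00.0')
    forecast = csep.load_gridded_forecast(datasets.helmstetter_mainshock_fname,
                                          start_date=start, end_date=end)
    # Rescale the expected rates to a shorter testing period
    forecast.scale_to_test_date(time_utils.strptime_to_utc_datetime('2012-01-01 00:00:00.0'))
    print(forecast.event_count)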
automethod:: GriddedForecast.spatial_counts \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.sum.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.sum.rst.txt new file mode 100644 index 00000000..5b45f573 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.sum.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.sum +======================================= + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.sum \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.forecasts.GriddedForecast.target_event_rates.rst.txt b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.target_event_rates.rst.txt new file mode 100644 index 00000000..0896cbd7 --- /dev/null +++ b/_sources/reference/generated/csep.core.forecasts.GriddedForecast.target_event_rates.rst.txt @@ -0,0 +1,6 @@ +csep.core.forecasts.GriddedForecast.target\_event\_rates +======================================================== + +.. currentmodule:: csep.core.forecasts + +.. automethod:: GriddedForecast.target_event_rates \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.poisson_evaluations.conditional_likelihood_test.rst.txt b/_sources/reference/generated/csep.core.poisson_evaluations.conditional_likelihood_test.rst.txt new file mode 100644 index 00000000..553bed88 --- /dev/null +++ b/_sources/reference/generated/csep.core.poisson_evaluations.conditional_likelihood_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.poisson\_evaluations.conditional\_likelihood\_test +============================================================ + +.. currentmodule:: csep.core.poisson_evaluations + +.. autofunction:: conditional_likelihood_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.poisson_evaluations.likelihood_test.rst.txt b/_sources/reference/generated/csep.core.poisson_evaluations.likelihood_test.rst.txt new file mode 100644 index 00000000..53b892ca --- /dev/null +++ b/_sources/reference/generated/csep.core.poisson_evaluations.likelihood_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.poisson\_evaluations.likelihood\_test +=============================================== + +.. currentmodule:: csep.core.poisson_evaluations + +.. autofunction:: likelihood_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.poisson_evaluations.magnitude_test.rst.txt b/_sources/reference/generated/csep.core.poisson_evaluations.magnitude_test.rst.txt new file mode 100644 index 00000000..abf04590 --- /dev/null +++ b/_sources/reference/generated/csep.core.poisson_evaluations.magnitude_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.poisson\_evaluations.magnitude\_test +============================================== + +.. currentmodule:: csep.core.poisson_evaluations + +.. autofunction:: magnitude_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.poisson_evaluations.number_test.rst.txt b/_sources/reference/generated/csep.core.poisson_evaluations.number_test.rst.txt new file mode 100644 index 00000000..2399cf61 --- /dev/null +++ b/_sources/reference/generated/csep.core.poisson_evaluations.number_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.poisson\_evaluations.number\_test +=========================================== + +.. currentmodule:: csep.core.poisson_evaluations + +.. 
autofunction:: number_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.poisson_evaluations.paired_t_test.rst.txt b/_sources/reference/generated/csep.core.poisson_evaluations.paired_t_test.rst.txt new file mode 100644 index 00000000..c2b26075 --- /dev/null +++ b/_sources/reference/generated/csep.core.poisson_evaluations.paired_t_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.poisson\_evaluations.paired\_t\_test +============================================== + +.. currentmodule:: csep.core.poisson_evaluations + +.. autofunction:: paired_t_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.poisson_evaluations.spatial_test.rst.txt b/_sources/reference/generated/csep.core.poisson_evaluations.spatial_test.rst.txt new file mode 100644 index 00000000..bfd403ee --- /dev/null +++ b/_sources/reference/generated/csep.core.poisson_evaluations.spatial_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.poisson\_evaluations.spatial\_test +============================================ + +.. currentmodule:: csep.core.poisson_evaluations + +.. autofunction:: spatial_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.poisson_evaluations.w_test.rst.txt b/_sources/reference/generated/csep.core.poisson_evaluations.w_test.rst.txt new file mode 100644 index 00000000..26d296f4 --- /dev/null +++ b/_sources/reference/generated/csep.core.poisson_evaluations.w_test.rst.txt @@ -0,0 +1,6 @@ +csep.core.poisson\_evaluations.w\_test +====================================== + +.. currentmodule:: csep.core.poisson_evaluations + +.. autofunction:: w_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.regions.CartesianGrid2D.rst.txt b/_sources/reference/generated/csep.core.regions.CartesianGrid2D.rst.txt new file mode 100644 index 00000000..d5e47743 --- /dev/null +++ b/_sources/reference/generated/csep.core.regions.CartesianGrid2D.rst.txt @@ -0,0 +1,40 @@ +csep.core.regions.CartesianGrid2D +================================= + +.. currentmodule:: csep.core.regions + +.. autoclass:: CartesianGrid2D + + + .. automethod:: __init__ + + + .. rubric:: Methods + + .. autosummary:: + + ~CartesianGrid2D.__init__ + ~CartesianGrid2D.from_dict + ~CartesianGrid2D.from_origins + ~CartesianGrid2D.get_bbox + ~CartesianGrid2D.get_cartesian + ~CartesianGrid2D.get_cell_area + ~CartesianGrid2D.get_index_of + ~CartesianGrid2D.get_location_of + ~CartesianGrid2D.get_masked + ~CartesianGrid2D.midpoints + ~CartesianGrid2D.origins + ~CartesianGrid2D.tight_bbox + ~CartesianGrid2D.to_dict + + + + + + .. rubric:: Attributes + + .. autosummary:: + + ~CartesianGrid2D.num_nodes + + \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.regions.california_relm_region.rst.txt b/_sources/reference/generated/csep.core.regions.california_relm_region.rst.txt new file mode 100644 index 00000000..3da731b5 --- /dev/null +++ b/_sources/reference/generated/csep.core.regions.california_relm_region.rst.txt @@ -0,0 +1,6 @@ +csep.core.regions.california\_relm\_region +========================================== + +.. currentmodule:: csep.core.regions + +.. 
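Beyond the consistency tests, ``paired_t_test`` and ``w_test`` compare two gridded forecasts against the same observed catalog. A sketch, assuming ``forecast_a``, ``forecast_b``, and ``catalog`` are prepared as elsewhere in these docs (the argument order for ``w_test`` is assumed to mirror ``paired_t_test``):

.. code-block:: python

    from csep.core import poisson_evaluations as poisson

    n_result = poisson.number_test(forecast_a, catalog)
    t_result = poisson.paired_t_test(forecast_a, forecast_b, catalog)
    w_result = poisson.w_test(forecast_a, forecast_b, catalog)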
autofunction:: california_relm_region \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.regions.create_space_magnitude_region.rst.txt b/_sources/reference/generated/csep.core.regions.create_space_magnitude_region.rst.txt new file mode 100644 index 00000000..2d434d05 --- /dev/null +++ b/_sources/reference/generated/csep.core.regions.create_space_magnitude_region.rst.txt @@ -0,0 +1,6 @@ +csep.core.regions.create\_space\_magnitude\_region +================================================== + +.. currentmodule:: csep.core.regions + +.. autofunction:: create_space_magnitude_region \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.regions.generate_aftershock_region.rst.txt b/_sources/reference/generated/csep.core.regions.generate_aftershock_region.rst.txt new file mode 100644 index 00000000..cb98ab92 --- /dev/null +++ b/_sources/reference/generated/csep.core.regions.generate_aftershock_region.rst.txt @@ -0,0 +1,6 @@ +csep.core.regions.generate\_aftershock\_region +============================================== + +.. currentmodule:: csep.core.regions + +.. autofunction:: generate_aftershock_region \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.regions.global_region.rst.txt b/_sources/reference/generated/csep.core.regions.global_region.rst.txt new file mode 100644 index 00000000..de0ee943 --- /dev/null +++ b/_sources/reference/generated/csep.core.regions.global_region.rst.txt @@ -0,0 +1,6 @@ +csep.core.regions.global\_region +================================ + +.. currentmodule:: csep.core.regions + +.. autofunction:: global_region \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.regions.increase_grid_resolution.rst.txt b/_sources/reference/generated/csep.core.regions.increase_grid_resolution.rst.txt new file mode 100644 index 00000000..e81f8aba --- /dev/null +++ b/_sources/reference/generated/csep.core.regions.increase_grid_resolution.rst.txt @@ -0,0 +1,6 @@ +csep.core.regions.increase\_grid\_resolution +============================================ + +.. currentmodule:: csep.core.regions + +.. autofunction:: increase_grid_resolution \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.regions.italy_csep_region.rst.txt b/_sources/reference/generated/csep.core.regions.italy_csep_region.rst.txt new file mode 100644 index 00000000..6db1458b --- /dev/null +++ b/_sources/reference/generated/csep.core.regions.italy_csep_region.rst.txt @@ -0,0 +1,6 @@ +csep.core.regions.italy\_csep\_region +===================================== + +.. currentmodule:: csep.core.regions + +.. autofunction:: italy_csep_region \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.regions.magnitude_bins.rst.txt b/_sources/reference/generated/csep.core.regions.magnitude_bins.rst.txt new file mode 100644 index 00000000..ef9d45a8 --- /dev/null +++ b/_sources/reference/generated/csep.core.regions.magnitude_bins.rst.txt @@ -0,0 +1,6 @@ +csep.core.regions.magnitude\_bins +================================= + +.. currentmodule:: csep.core.regions + +.. 
autofunction:: magnitude_bins \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.regions.masked_region.rst.txt b/_sources/reference/generated/csep.core.regions.masked_region.rst.txt new file mode 100644 index 00000000..95cf8fbb --- /dev/null +++ b/_sources/reference/generated/csep.core.regions.masked_region.rst.txt @@ -0,0 +1,6 @@ +csep.core.regions.masked\_region +================================ + +.. currentmodule:: csep.core.regions + +.. autofunction:: masked_region \ No newline at end of file diff --git a/_sources/reference/generated/csep.core.regions.parse_csep_template.rst.txt b/_sources/reference/generated/csep.core.regions.parse_csep_template.rst.txt new file mode 100644 index 00000000..c9de2656 --- /dev/null +++ b/_sources/reference/generated/csep.core.regions.parse_csep_template.rst.txt @@ -0,0 +1,6 @@ +csep.core.regions.parse\_csep\_template +======================================= + +.. currentmodule:: csep.core.regions + +.. autofunction:: parse_csep_template \ No newline at end of file diff --git a/_sources/reference/generated/csep.load_catalog.rst.txt b/_sources/reference/generated/csep.load_catalog.rst.txt new file mode 100644 index 00000000..b10b3e52 --- /dev/null +++ b/_sources/reference/generated/csep.load_catalog.rst.txt @@ -0,0 +1,6 @@ +csep.load\_catalog +================== + +.. currentmodule:: csep + +.. autofunction:: load_catalog \ No newline at end of file diff --git a/_sources/reference/generated/csep.load_catalog_forecast.rst.txt b/_sources/reference/generated/csep.load_catalog_forecast.rst.txt new file mode 100644 index 00000000..e24b661f --- /dev/null +++ b/_sources/reference/generated/csep.load_catalog_forecast.rst.txt @@ -0,0 +1,6 @@ +csep.load\_catalog\_forecast +============================ + +.. currentmodule:: csep + +.. autofunction:: load_catalog_forecast \ No newline at end of file diff --git a/_sources/reference/generated/csep.load_gridded_forecast.rst.txt b/_sources/reference/generated/csep.load_gridded_forecast.rst.txt new file mode 100644 index 00000000..e4f0931e --- /dev/null +++ b/_sources/reference/generated/csep.load_gridded_forecast.rst.txt @@ -0,0 +1,6 @@ +csep.load\_gridded\_forecast +============================ + +.. currentmodule:: csep + +.. autofunction:: load_gridded_forecast \ No newline at end of file diff --git a/_sources/reference/generated/csep.load_stochastic_event_sets.rst.txt b/_sources/reference/generated/csep.load_stochastic_event_sets.rst.txt new file mode 100644 index 00000000..03afc4dc --- /dev/null +++ b/_sources/reference/generated/csep.load_stochastic_event_sets.rst.txt @@ -0,0 +1,6 @@ +csep.load\_stochastic\_event\_sets +================================== + +.. currentmodule:: csep + +.. autofunction:: load_stochastic_event_sets \ No newline at end of file diff --git a/_sources/reference/generated/csep.query_bsi.rst.txt b/_sources/reference/generated/csep.query_bsi.rst.txt new file mode 100644 index 00000000..f212f51f --- /dev/null +++ b/_sources/reference/generated/csep.query_bsi.rst.txt @@ -0,0 +1,6 @@ +csep.query\_bsi +=============== + +.. currentmodule:: csep + +.. autofunction:: query_bsi \ No newline at end of file diff --git a/_sources/reference/generated/csep.query_comcat.rst.txt b/_sources/reference/generated/csep.query_comcat.rst.txt new file mode 100644 index 00000000..759b748b --- /dev/null +++ b/_sources/reference/generated/csep.query_comcat.rst.txt @@ -0,0 +1,6 @@ +csep.query\_comcat +================== + +.. currentmodule:: csep + +.. 
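The region helpers compose into the space-magnitude regions that gridded forecasts and catalogs share. A minimal sketch (the magnitude range and bin width are illustrative, and the ``magnitude_bins`` argument order is assumed to be start, end, width):

.. code-block:: python

    from csep.core import regions

    spatial_region = regions.california_relm_region()
    magnitudes = regions.magnitude_bins(4.95, 8.95, 0.1)
    space_magnitude_region = regions.create_space_magnitude_region(spatial_region, magnitudes)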
autofunction:: query_comcat \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.basic_types.AdaptiveHistogram.rst.txt b/_sources/reference/generated/csep.utils.basic_types.AdaptiveHistogram.rst.txt new file mode 100644 index 00000000..9fa00d06 --- /dev/null +++ b/_sources/reference/generated/csep.utils.basic_types.AdaptiveHistogram.rst.txt @@ -0,0 +1,29 @@ +csep.utils.basic\_types.AdaptiveHistogram +========================================= + +.. currentmodule:: csep.utils.basic_types + +.. autoclass:: AdaptiveHistogram + + + .. automethod:: __init__ + + + .. rubric:: Methods + + .. autosummary:: + + ~AdaptiveHistogram.__init__ + ~AdaptiveHistogram.add + + + + + + .. rubric:: Attributes + + .. autosummary:: + + ~AdaptiveHistogram.rec_dh + + \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.calc.bin1d_vec.rst.txt b/_sources/reference/generated/csep.utils.calc.bin1d_vec.rst.txt new file mode 100644 index 00000000..a2566cc2 --- /dev/null +++ b/_sources/reference/generated/csep.utils.calc.bin1d_vec.rst.txt @@ -0,0 +1,6 @@ +csep.utils.calc.bin1d\_vec +========================== + +.. currentmodule:: csep.utils.calc + +.. autofunction:: bin1d_vec \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.calc.discretize.rst.txt b/_sources/reference/generated/csep.utils.calc.discretize.rst.txt new file mode 100644 index 00000000..e5cfc2c5 --- /dev/null +++ b/_sources/reference/generated/csep.utils.calc.discretize.rst.txt @@ -0,0 +1,6 @@ +csep.utils.calc.discretize +========================== + +.. currentmodule:: csep.utils.calc + +.. autofunction:: discretize \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.calc.find_nearest.rst.txt b/_sources/reference/generated/csep.utils.calc.find_nearest.rst.txt new file mode 100644 index 00000000..51111a8a --- /dev/null +++ b/_sources/reference/generated/csep.utils.calc.find_nearest.rst.txt @@ -0,0 +1,6 @@ +csep.utils.calc.find\_nearest +============================= + +.. currentmodule:: csep.utils.calc + +.. autofunction:: find_nearest \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.calc.func_inverse.rst.txt b/_sources/reference/generated/csep.utils.calc.func_inverse.rst.txt new file mode 100644 index 00000000..7c133cd1 --- /dev/null +++ b/_sources/reference/generated/csep.utils.calc.func_inverse.rst.txt @@ -0,0 +1,6 @@ +csep.utils.calc.func\_inverse +============================= + +.. currentmodule:: csep.utils.calc + +.. autofunction:: func_inverse \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.calc.nearest_index.rst.txt b/_sources/reference/generated/csep.utils.calc.nearest_index.rst.txt new file mode 100644 index 00000000..1b440905 --- /dev/null +++ b/_sources/reference/generated/csep.utils.calc.nearest_index.rst.txt @@ -0,0 +1,6 @@ +csep.utils.calc.nearest\_index +============================== + +.. currentmodule:: csep.utils.calc + +.. autofunction:: nearest_index \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.comcat.get_event_by_id.rst.txt b/_sources/reference/generated/csep.utils.comcat.get_event_by_id.rst.txt new file mode 100644 index 00000000..de7b1454 --- /dev/null +++ b/_sources/reference/generated/csep.utils.comcat.get_event_by_id.rst.txt @@ -0,0 +1,6 @@ +csep.utils.comcat.get\_event\_by\_id +==================================== + +.. currentmodule:: csep.utils.comcat + +.. 
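``bin1d_vec`` is the workhorse binning routine used throughout the package; it maps each value onto the index of its containing bin. A sketch (argument names and edge conventions are assumptions to be checked against the docstring):

.. code-block:: python

    import numpy
    from csep.utils.calc import bin1d_vec

    bins = numpy.arange(4.0, 9.0, 0.1)        # left edges of magnitude bins
    magnitudes = numpy.array([4.05, 5.32, 8.71])
    idx = bin1d_vec(magnitudes, bins)          # index of the bin containing each value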
autofunction:: get_event_by_id \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.comcat.search.rst.txt b/_sources/reference/generated/csep.utils.comcat.search.rst.txt new file mode 100644 index 00000000..71766e92 --- /dev/null +++ b/_sources/reference/generated/csep.utils.comcat.search.rst.txt @@ -0,0 +1,6 @@ +csep.utils.comcat.search +======================== + +.. currentmodule:: csep.utils.comcat + +.. autofunction:: search \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.add_labels_for_publication.rst.txt b/_sources/reference/generated/csep.utils.plots.add_labels_for_publication.rst.txt new file mode 100644 index 00000000..e53e04fa --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.add_labels_for_publication.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.add\_labels\_for\_publication +============================================== + +.. currentmodule:: csep.utils.plots + +.. autofunction:: add_labels_for_publication \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_basemap.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_basemap.rst.txt new file mode 100644 index 00000000..871c31f4 --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_basemap.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_basemap +============================== + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_basemap \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_calibration_test.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_calibration_test.rst.txt new file mode 100644 index 00000000..e0481ba1 --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_calibration_test.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_calibration\_test +======================================== + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_calibration_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_catalog.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_catalog.rst.txt new file mode 100644 index 00000000..347e902e --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_catalog.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_catalog +============================== + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_catalog \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_comparison_test.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_comparison_test.rst.txt new file mode 100644 index 00000000..f27d9799 --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_comparison_test.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_comparison\_test +======================================= + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_comparison_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_cumulative_events_versus_time.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_cumulative_events_versus_time.rst.txt new file mode 100644 index 00000000..1d2ecd50 --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_cumulative_events_versus_time.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_cumulative\_events\_versus\_time +======================================================= + +.. currentmodule:: csep.utils.plots + +.. 
autofunction:: plot_cumulative_events_versus_time \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_distribution_test.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_distribution_test.rst.txt new file mode 100644 index 00000000..dd8bd73f --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_distribution_test.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_distribution\_test +========================================= + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_distribution_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_ecdf.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_ecdf.rst.txt new file mode 100644 index 00000000..09918868 --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_ecdf.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_ecdf +=========================== + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_ecdf \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_histogram.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_histogram.rst.txt new file mode 100644 index 00000000..e1f1fffe --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_histogram.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_histogram +================================ + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_histogram \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_likelihood_test.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_likelihood_test.rst.txt new file mode 100644 index 00000000..d78154d3 --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_likelihood_test.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_likelihood\_test +======================================= + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_likelihood_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_magnitude_histogram.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_magnitude_histogram.rst.txt new file mode 100644 index 00000000..d820c7ec --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_magnitude_histogram.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_magnitude\_histogram +=========================================== + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_magnitude_histogram \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_magnitude_test.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_magnitude_test.rst.txt new file mode 100644 index 00000000..0f1c1a52 --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_magnitude_test.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_magnitude\_test +====================================== + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_magnitude_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_magnitude_versus_time.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_magnitude_versus_time.rst.txt new file mode 100644 index 00000000..6b70dc8c --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_magnitude_versus_time.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_magnitude\_versus\_time +============================================== + +.. 
currentmodule:: csep.utils.plots + +.. autofunction:: plot_magnitude_versus_time \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_number_test.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_number_test.rst.txt new file mode 100644 index 00000000..bd28c121 --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_number_test.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_number\_test +=================================== + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_number_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_poisson_consistency_test.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_poisson_consistency_test.rst.txt new file mode 100644 index 00000000..7fe7f543 --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_poisson_consistency_test.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_poisson\_consistency\_test +================================================= + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_poisson_consistency_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_spatial_dataset.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_spatial_dataset.rst.txt new file mode 100644 index 00000000..4f31eaf9 --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_spatial_dataset.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_spatial\_dataset +======================================= + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_spatial_dataset \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.plots.plot_spatial_test.rst.txt b/_sources/reference/generated/csep.utils.plots.plot_spatial_test.rst.txt new file mode 100644 index 00000000..e46bd1de --- /dev/null +++ b/_sources/reference/generated/csep.utils.plots.plot_spatial_test.rst.txt @@ -0,0 +1,6 @@ +csep.utils.plots.plot\_spatial\_test +==================================== + +.. currentmodule:: csep.utils.plots + +.. autofunction:: plot_spatial_test \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.binned_ecdf.rst.txt b/_sources/reference/generated/csep.utils.stats.binned_ecdf.rst.txt new file mode 100644 index 00000000..25798617 --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.binned_ecdf.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.binned\_ecdf +============================= + +.. currentmodule:: csep.utils.stats + +.. autofunction:: binned_ecdf \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.cumulative_square_diff.rst.txt b/_sources/reference/generated/csep.utils.stats.cumulative_square_diff.rst.txt new file mode 100644 index 00000000..c8471e2c --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.cumulative_square_diff.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.cumulative\_square\_diff +========================================= + +.. currentmodule:: csep.utils.stats + +.. autofunction:: cumulative_square_diff \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.ecdf.rst.txt b/_sources/reference/generated/csep.utils.stats.ecdf.rst.txt new file mode 100644 index 00000000..f73cdc82 --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.ecdf.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.ecdf +===================== + +.. currentmodule:: csep.utils.stats + +.. 
autofunction:: ecdf \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.get_quantiles.rst.txt b/_sources/reference/generated/csep.utils.stats.get_quantiles.rst.txt new file mode 100644 index 00000000..fdbcd392 --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.get_quantiles.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.get\_quantiles +=============================== + +.. currentmodule:: csep.utils.stats + +.. autofunction:: get_quantiles \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.greater_equal_ecdf.rst.txt b/_sources/reference/generated/csep.utils.stats.greater_equal_ecdf.rst.txt new file mode 100644 index 00000000..3d8968e1 --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.greater_equal_ecdf.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.greater\_equal\_ecdf +===================================== + +.. currentmodule:: csep.utils.stats + +.. autofunction:: greater_equal_ecdf \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.less_equal_ecdf.rst.txt b/_sources/reference/generated/csep.utils.stats.less_equal_ecdf.rst.txt new file mode 100644 index 00000000..5cdae978 --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.less_equal_ecdf.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.less\_equal\_ecdf +================================== + +.. currentmodule:: csep.utils.stats + +.. autofunction:: less_equal_ecdf \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.max_or_none.rst.txt b/_sources/reference/generated/csep.utils.stats.max_or_none.rst.txt new file mode 100644 index 00000000..2db04869 --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.max_or_none.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.max\_or\_none +============================== + +.. currentmodule:: csep.utils.stats + +.. autofunction:: max_or_none \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.min_or_none.rst.txt b/_sources/reference/generated/csep.utils.stats.min_or_none.rst.txt new file mode 100644 index 00000000..91c15f5b --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.min_or_none.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.min\_or\_none +============================== + +.. currentmodule:: csep.utils.stats + +.. autofunction:: min_or_none \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.poisson_inverse_cdf.rst.txt b/_sources/reference/generated/csep.utils.stats.poisson_inverse_cdf.rst.txt new file mode 100644 index 00000000..b3df19cb --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.poisson_inverse_cdf.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.poisson\_inverse\_cdf +====================================== + +.. currentmodule:: csep.utils.stats + +.. autofunction:: poisson_inverse_cdf \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.poisson_joint_log_likelihood_ndarray.rst.txt b/_sources/reference/generated/csep.utils.stats.poisson_joint_log_likelihood_ndarray.rst.txt new file mode 100644 index 00000000..3e1eff31 --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.poisson_joint_log_likelihood_ndarray.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.poisson\_joint\_log\_likelihood\_ndarray +========================================================= + +.. currentmodule:: csep.utils.stats + +.. 
autofunction:: poisson_joint_log_likelihood_ndarray \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.poisson_log_likelihood.rst.txt b/_sources/reference/generated/csep.utils.stats.poisson_log_likelihood.rst.txt new file mode 100644 index 00000000..e241eee0 --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.poisson_log_likelihood.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.poisson\_log\_likelihood +========================================= + +.. currentmodule:: csep.utils.stats + +.. autofunction:: poisson_log_likelihood \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.sup_dist.rst.txt b/_sources/reference/generated/csep.utils.stats.sup_dist.rst.txt new file mode 100644 index 00000000..12d1c8bf --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.sup_dist.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.sup\_dist +========================== + +.. currentmodule:: csep.utils.stats + +.. autofunction:: sup_dist \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.stats.sup_dist_na.rst.txt b/_sources/reference/generated/csep.utils.stats.sup_dist_na.rst.txt new file mode 100644 index 00000000..3e84ba89 --- /dev/null +++ b/_sources/reference/generated/csep.utils.stats.sup_dist_na.rst.txt @@ -0,0 +1,6 @@ +csep.utils.stats.sup\_dist\_na +============================== + +.. currentmodule:: csep.utils.stats + +.. autofunction:: sup_dist_na \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.time_utils.create_utc_datetime.rst.txt b/_sources/reference/generated/csep.utils.time_utils.create_utc_datetime.rst.txt new file mode 100644 index 00000000..6c58f8d1 --- /dev/null +++ b/_sources/reference/generated/csep.utils.time_utils.create_utc_datetime.rst.txt @@ -0,0 +1,6 @@ +csep.utils.time\_utils.create\_utc\_datetime +============================================ + +.. currentmodule:: csep.utils.time_utils + +.. autofunction:: create_utc_datetime \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.time_utils.datetime_to_utc_epoch.rst.txt b/_sources/reference/generated/csep.utils.time_utils.datetime_to_utc_epoch.rst.txt new file mode 100644 index 00000000..bdb20e94 --- /dev/null +++ b/_sources/reference/generated/csep.utils.time_utils.datetime_to_utc_epoch.rst.txt @@ -0,0 +1,6 @@ +csep.utils.time\_utils.datetime\_to\_utc\_epoch +=============================================== + +.. currentmodule:: csep.utils.time_utils + +.. autofunction:: datetime_to_utc_epoch \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.time_utils.days_to_millis.rst.txt b/_sources/reference/generated/csep.utils.time_utils.days_to_millis.rst.txt new file mode 100644 index 00000000..da495525 --- /dev/null +++ b/_sources/reference/generated/csep.utils.time_utils.days_to_millis.rst.txt @@ -0,0 +1,6 @@ +csep.utils.time\_utils.days\_to\_millis +======================================= + +.. currentmodule:: csep.utils.time_utils + +.. autofunction:: days_to_millis \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.time_utils.decimal_year.rst.txt b/_sources/reference/generated/csep.utils.time_utils.decimal_year.rst.txt new file mode 100644 index 00000000..1daf2e02 --- /dev/null +++ b/_sources/reference/generated/csep.utils.time_utils.decimal_year.rst.txt @@ -0,0 +1,6 @@ +csep.utils.time\_utils.decimal\_year +==================================== + +.. currentmodule:: csep.utils.time_utils + +.. 
autofunction:: decimal_year \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.time_utils.epoch_time_to_utc_datetime.rst.txt b/_sources/reference/generated/csep.utils.time_utils.epoch_time_to_utc_datetime.rst.txt new file mode 100644 index 00000000..df14de48 --- /dev/null +++ b/_sources/reference/generated/csep.utils.time_utils.epoch_time_to_utc_datetime.rst.txt @@ -0,0 +1,6 @@ +csep.utils.time\_utils.epoch\_time\_to\_utc\_datetime +===================================================== + +.. currentmodule:: csep.utils.time_utils + +.. autofunction:: epoch_time_to_utc_datetime \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.time_utils.millis_to_days.rst.txt b/_sources/reference/generated/csep.utils.time_utils.millis_to_days.rst.txt new file mode 100644 index 00000000..88184498 --- /dev/null +++ b/_sources/reference/generated/csep.utils.time_utils.millis_to_days.rst.txt @@ -0,0 +1,6 @@ +csep.utils.time\_utils.millis\_to\_days +======================================= + +.. currentmodule:: csep.utils.time_utils + +.. autofunction:: millis_to_days \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.time_utils.strptime_to_utc_datetime.rst.txt b/_sources/reference/generated/csep.utils.time_utils.strptime_to_utc_datetime.rst.txt new file mode 100644 index 00000000..a6a1975b --- /dev/null +++ b/_sources/reference/generated/csep.utils.time_utils.strptime_to_utc_datetime.rst.txt @@ -0,0 +1,6 @@ +csep.utils.time\_utils.strptime\_to\_utc\_datetime +================================================== + +.. currentmodule:: csep.utils.time_utils + +.. autofunction:: strptime_to_utc_datetime \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.time_utils.strptime_to_utc_epoch.rst.txt b/_sources/reference/generated/csep.utils.time_utils.strptime_to_utc_epoch.rst.txt new file mode 100644 index 00000000..abd9f3b2 --- /dev/null +++ b/_sources/reference/generated/csep.utils.time_utils.strptime_to_utc_epoch.rst.txt @@ -0,0 +1,6 @@ +csep.utils.time\_utils.strptime\_to\_utc\_epoch +=============================================== + +.. currentmodule:: csep.utils.time_utils + +.. autofunction:: strptime_to_utc_epoch \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.time_utils.timedelta_from_years.rst.txt b/_sources/reference/generated/csep.utils.time_utils.timedelta_from_years.rst.txt new file mode 100644 index 00000000..9257706b --- /dev/null +++ b/_sources/reference/generated/csep.utils.time_utils.timedelta_from_years.rst.txt @@ -0,0 +1,6 @@ +csep.utils.time\_utils.timedelta\_from\_years +============================================= + +.. currentmodule:: csep.utils.time_utils + +.. autofunction:: timedelta_from_years \ No newline at end of file diff --git a/_sources/reference/generated/csep.utils.time_utils.utc_now_datetime.rst.txt b/_sources/reference/generated/csep.utils.time_utils.utc_now_datetime.rst.txt new file mode 100644 index 00000000..847857cf --- /dev/null +++ b/_sources/reference/generated/csep.utils.time_utils.utc_now_datetime.rst.txt @@ -0,0 +1,6 @@ +csep.utils.time\_utils.utc\_now\_datetime +========================================= + +.. currentmodule:: csep.utils.time_utils + +.. 
autofunction:: utc_now_datetime
\ No newline at end of file
diff --git a/_sources/reference/generated/csep.utils.time_utils.utc_now_epoch.rst.txt b/_sources/reference/generated/csep.utils.time_utils.utc_now_epoch.rst.txt
new file mode 100644
index 00000000..3c555e6b
--- /dev/null
+++ b/_sources/reference/generated/csep.utils.time_utils.utc_now_epoch.rst.txt
@@ -0,0 +1,6 @@
+csep.utils.time\_utils.utc\_now\_epoch
+======================================
+
+.. currentmodule:: csep.utils.time_utils
+
+.. autofunction:: utc_now_epoch
\ No newline at end of file
diff --git a/_sources/reference/glossary.rst.txt b/_sources/reference/glossary.rst.txt
new file mode 100644
index 00000000..93adccf3
--- /dev/null
+++ b/_sources/reference/glossary.rst.txt
@@ -0,0 +1,50 @@
+=====================
+Terms and Definitions
+=====================
+
+This page contains terms and their definitions (and, where applicable, mathematical definitions) that are commonly used throughout the documentation
+and CSEP literature.
+
+.. contents:: Table of Contents
+    :local:
+    :depth: 2
+
+
+.. _earthquake-catalog:
+
+Earthquake catalog
+------------------
+List of earthquakes (either tectonic or non-tectonic) defined through their location in space, origin time of the event, and
+their magnitude.
+
+
+.. _earthquake_forecast:
+
+Earthquake Forecast
+-------------------
+A probabilistic statement about the occurrence of seismicity that can include information about the magnitude and spatial
+location. CSEP supports earthquake forecasts expressed as the expected rate of seismicity in disjoint space and magnitude bins
+and as families of synthetic earthquake catalogs.
+
+.. _stochastic-event-set:
+
+Stochastic Event Set
+--------------------
+Collection of synthetic earthquakes (events) that are produced by an earthquake forecasting model.
+A *stochastic event set* consists of *N* events that provide a continuous representation of seismicity and can sample
+the uncertainty present within the forecasting model.
+
+.. _time-dependent-forecast:
+
+Time-dependent Forecast
+-----------------------
+The forecast changes over time using new information not available at the time the forecast was issued. For example,
+epidemic-type aftershock sequence (ETAS) models can be updated using newly observed seismicity to produce
+new forecasts consistent with the model.
+
+.. _time-independent-forecast:
+
+Time-independent Forecast
+-------------------------
+The forecast does not change with time. Time-independent forecasts are generally used for the long-term forecasts
+needed for probabilistic seismic hazard analysis.
diff --git a/_sources/reference/publications.rst.txt b/_sources/reference/publications.rst.txt
new file mode 100644
index 00000000..e14fd2bd
--- /dev/null
+++ b/_sources/reference/publications.rst.txt
@@ -0,0 +1,42 @@
+#######################
+Referenced Publications
+#######################
+
+
+
+.. _helmstetter-2006:
+
+Helmstetter, A., Y. Y. Kagan, and D. D. Jackson (2006). Comparison of short-term and time-independent earthquake
+forecast models for southern California, *Bulletin of the Seismological Society of America* **96** 90-106.
+
+.. _rhoades-2011:
+
+Rhoades, D. A., D. Schorlemmer, M. C. Gerstenberger, A. Christophersen, J. D. Zechar, and M. Imoto (2011). Efficient
+testing of earthquake forecasting models, *Acta Geophys* **59** 728-747.
+
+.. _savran-2020:
+
+Savran, W., M. J. Werner, W. Marzocchi, D. Rhoades, D. D. Jackson, K. R. Milner, E. H. Field, and A. J. Michael (2020).
+Pseudoprospective evaluation of UCERF3-ETAS forecasts during the 2019 Ridgecrest Sequence, +*Bulletin of the Seismological Society of America*. + +.. _schorlemmer-2007: + +Schorlemmer, D., M. Gerstenberger, S. Wiemer, D. D. Jackson, and D. A. Rhoades (2007). Earthquake likelihood model +testing, *Seismological Research Letters* **78** 17-29. + +.. _werner-2011: + +Werner, M. J., A. Helmstetter, D. D. Jackson, and Y. Y. Kagan (2011). High-Resolution Long-Term and Short-Term +Earthquake Forecasts for California, *Bulletin of the Seismological Society of America* **101** 1630-1648. + +.. _zechar-2010: + +Zechar, J. D., M. C. Gerstenberger, and D. A. Rhoades (2010). Likelihood-Based Tests for Evaluating Space-Rate-Magnitude +Earthquake Forecasts, *Bulletin of the Seismological Society of America* **100** 1184-1195. + + + + + + diff --git a/_sources/reference/roadmap.rst.txt b/_sources/reference/roadmap.rst.txt new file mode 100644 index 00000000..99228cfe --- /dev/null +++ b/_sources/reference/roadmap.rst.txt @@ -0,0 +1,19 @@ +.. _roadmap: + +################### +Development Roadmap +################### + +This page contains expected changes for new releases of `pyCSEP`. +Last updated 3 November 2021. + +v0.6.0 +====== + +1. Include receiver operating characteristic (ROC) curve +2. Kagan I1 score +3. Add function to plot spatial log-likelihood scores +4. Add documentation section to explain maths of CSEP tests + + + diff --git a/_sources/tutorials/catalog_filtering.rst.txt b/_sources/tutorials/catalog_filtering.rst.txt new file mode 100644 index 00000000..2c7ec344 --- /dev/null +++ b/_sources/tutorials/catalog_filtering.rst.txt @@ -0,0 +1,311 @@ + +.. DO NOT EDIT. +.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. +.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: +.. "tutorials/catalog_filtering.py" +.. LINE NUMBERS ARE GIVEN BELOW. + +.. only:: html + + .. note:: + :class: sphx-glr-download-link-note + + :ref:`Go to the end ` + to download the full example code. + +.. rst-class:: sphx-glr-example-title + +.. _sphx_glr_tutorials_catalog_filtering.py: + + +.. tutorial-catalog-filtering + +Catalogs operations +=================== + +This example demonstrates how to perform standard operations on a catalog. This example requires an internet +connection to access ComCat. + +Overview: + 1. Load catalog from ComCat + 2. Create filtering parameters in space, magnitude, and time + 3. Filter catalog using desired filters + 4. Write catalog to standard CSEP format + +.. GENERATED FROM PYTHON SOURCE LINES 18-23 + +Load required libraries +----------------------- + +Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the +:mod:`csep.utils` subpackage. + +.. GENERATED FROM PYTHON SOURCE LINES 23-29 + +.. code-block:: Python + + + import csep + from csep.core import regions + from csep.utils import time_utils, comcat + # sphinx_gallery_thumbnail_path = '_static/CSEP2_Logo_CMYK.png' + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 30-36 + +Load catalog +------------ + +PyCSEP provides access to the ComCat web API using :func:`csep.query_comcat` and to the Bollettino Sismico Italiano +API using :func:`csep.query_bsi`. These functions require a :class:`datetime.datetime` to specify the start and end +dates. + +.. GENERATED FROM PYTHON SOURCE LINES 36-42 + +.. 
code-block:: Python
+
+
+    start_time = csep.utils.time_utils.strptime_to_utc_datetime('2019-01-01 00:00:00.0')
+    end_time = csep.utils.time_utils.utc_now_datetime()
+    catalog = csep.query_comcat(start_time, end_time)
+    print(catalog)
+
+
+
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+    Fetched ComCat catalog in 15.215977907180786 seconds.
+
+    Downloaded catalog from ComCat with following parameters
+    Start Date: 2019-01-01 12:01:46.950000+00:00
+    End Date: 2024-10-21 13:23:13.650000+00:00
+    Min Latitude: 31.5008 and Max Latitude: 42.8543333333333
+    Min Longitude: -125.3975 and Max Longitude: -113.1001667
+    Min Magnitude: 2.5
+    Found 11363 events in the ComCat catalog.
+
+        Name: None
+
+        Start Date: 2019-01-01 12:01:46.950000+00:00
+        End Date: 2024-10-21 13:23:13.650000+00:00
+
+        Latitude: (31.5008, 42.8543333333333)
+        Longitude: (-125.3975, -113.1001667)
+
+        Min Mw: 2.5
+        Max Mw: 7.1
+
+        Event Count: 11363
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 43-49
+
+Filter to magnitude range
+-------------------------
+
+Use the :meth:`csep.core.catalogs.AbstractBaseCatalog.filter` to filter the catalog. The filter function uses the field
+names stored in the numpy structured array. Standard fieldnames include 'magnitude', 'origin_time', 'latitude', 'longitude',
+and 'depth'.
+
+.. GENERATED FROM PYTHON SOURCE LINES 49-53
+
+.. code-block:: Python
+
+
+    catalog = catalog.filter('magnitude >= 3.5')
+    print(catalog)
+
+
+
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+
+        Name: None
+
+        Start Date: 2019-01-13 09:35:49.870000+00:00
+        End Date: 2024-10-21 00:32:11.130000+00:00
+
+        Latitude: (31.5018, 42.7775)
+        Longitude: (-125.3868333, -113.1191667)
+
+        Min Mw: 3.5
+        Max Mw: 7.1
+
+        Event Count: 1371
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 54-62
+
+Filter to desired time interval
+-------------------------------
+
+We need to define desired start and end times for the catalog using a time-string format. PyCSEP uses integer times for doing
+time manipulations. Time strings can be converted into integer times using
+:func:`csep.utils.time_utils.strptime_to_utc_epoch`. The :meth:`csep.core.catalogs.AbstractBaseCatalog.filter` also
+accepts a list of strings to apply multiple filters. Note: The number of events may differ if this script is run
+at a later date than shown in this example.
+
+.. GENERATED FROM PYTHON SOURCE LINES 62-72
+
+.. code-block:: Python
+
+
+    # create epoch times from time-string formats
+    start_epoch = csep.utils.time_utils.strptime_to_utc_epoch('2019-07-06 03:19:54.040000')
+    end_epoch = csep.utils.time_utils.strptime_to_utc_epoch('2019-09-21 03:19:54.040000')
+
+    # filter catalog to magnitude ranges and times
+    filters = [f'origin_time >= {start_epoch}', f'origin_time < {end_epoch}']
+    catalog = catalog.filter(filters)
+    print(catalog)
+
+
+
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+
+        Name: None
+
+        Start Date: 2019-07-06 03:20:36.080000+00:00
+        End Date: 2019-09-19 09:59:46.580000+00:00
+
+        Latitude: (32.2998352, 41.1244)
+        Longitude: (-125.0241667, -115.3243332)
+
+        Min Mw: 3.5
+        Max Mw: 5.5
+
+        Event Count: 356
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 73-82
+
+Filter to desired spatial region
+--------------------------------
+
+We use a circular spatial region with a radius of 3 average fault lengths as defined by the Wells and Coppersmith scaling
+relationship.
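+
+For intuition about the size of this region, here is a rough, standalone sketch. It assumes the Wells and Coppersmith
+(1994) surface-rupture-length relation, log10(L) = -3.22 + 0.69*M, purely for illustration; the exact coefficients
+used by PyCSEP live inside the region-generation function introduced below.
+
+.. code-block:: Python
+
+    # Rough sketch of the region radius for an M7.1 mainshock (illustrative
+    # coefficients; not necessarily the ones implemented in PyCSEP).
+    m = 7.1
+    fault_length_km = 10 ** (-3.22 + 0.69 * m)   # ~48 km average fault length
+    radius_km = 3 * fault_length_km              # ~143 km circular region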
+PyCSEP provides :func:`csep.core.regions.generate_aftershock_region` to create an aftershock region
+based on the magnitude and epicenter of an event.
+
+We use :func:`csep.utils.comcat.get_event_by_id`, which queries the ComCat API provided by the USGS, to obtain the
+event information for the M7.1 Ridgecrest mainshock.
+
+.. GENERATED FROM PYTHON SOURCE LINES 82-95
+
+.. code-block:: Python
+
+
+    m71_event_id = 'ci38457511'
+    event = comcat.get_event_by_id(m71_event_id)
+    m71_epoch = time_utils.datetime_to_utc_epoch(event.time)
+
+    # build aftershock region
+    aftershock_region = regions.generate_aftershock_region(event.magnitude, event.longitude, event.latitude)
+
+    # apply new aftershock region and magnitude of completeness
+    catalog = catalog.filter_spatial(aftershock_region).apply_mct(event.magnitude, m71_epoch)
+    print(catalog)
+
+
+
+
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+
+        Name: None
+
+        Start Date: 2019-07-06 03:22:35.630000+00:00
+        End Date: 2019-09-08 14:07:23.350000+00:00
+
+        Latitude: (35.448, 36.1823333)
+        Longitude: (-117.8875, -117.2788333)
+
+        Min Mw: 3.5
+        Max Mw: 5.5
+
+        Event Count: 234
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 96-100
+
+Write catalog
+-------------
+
+Use :meth:`csep.core.catalogs.AbstractBaseCatalog.write_ascii` to write the catalog into the comma separated value format.
+
+.. GENERATED FROM PYTHON SOURCE LINES 100-101
+
+.. code-block:: Python
+
+    catalog.write_ascii('2019-11-11-comcat.csv')
+
+
+
+
+
+
+
+
+.. rst-class:: sphx-glr-timing
+
+   **Total running time of the script:** (0 minutes 18.551 seconds)
+
+
+.. _sphx_glr_download_tutorials_catalog_filtering.py:
+
+.. only:: html
+
+  .. container:: sphx-glr-footer sphx-glr-footer-example
+
+    .. container:: sphx-glr-download sphx-glr-download-jupyter
+
+      :download:`Download Jupyter notebook: catalog_filtering.ipynb `
+
+    .. container:: sphx-glr-download sphx-glr-download-python
+
+      :download:`Download Python source code: catalog_filtering.py `
+
+    .. container:: sphx-glr-download sphx-glr-download-zip
+
+      :download:`Download zipped: catalog_filtering.zip `
+
+
+.. only:: html
+
+ .. rst-class:: sphx-glr-signature
+
+    `Gallery generated by Sphinx-Gallery `_
diff --git a/_sources/tutorials/catalog_forecast_evaluation.rst.txt b/_sources/tutorials/catalog_forecast_evaluation.rst.txt
new file mode 100644
index 00000000..816f7e5b
--- /dev/null
+++ b/_sources/tutorials/catalog_forecast_evaluation.rst.txt
@@ -0,0 +1,335 @@
+
+.. DO NOT EDIT.
+.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
+.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
+.. "tutorials/catalog_forecast_evaluation.py"
+.. LINE NUMBERS ARE GIVEN BELOW.
+
+.. only:: html
+
+    .. note::
+        :class: sphx-glr-download-link-note
+
+        :ref:`Go to the end `
+        to download the full example code.
+
+.. rst-class:: sphx-glr-example-title
+
+.. _sphx_glr_tutorials_catalog_forecast_evaluation.py:
+
+
+.. _catalog-forecast-evaluation:
+
+Catalog-based Forecast Evaluation
+=================================
+
+This example shows how to evaluate a catalog-based forecast using the Number test. This test is the simplest of the
+evaluations.
+
+Overview:
+    1. Define forecast properties (time horizon, spatial region, etc).
+    2. Access catalog from ComCat
+    3. Filter catalog to be consistent with the forecast properties
+    4. Apply catalog-based number test to catalog
+    5. Visualize results for catalog-based forecast
+.. GENERATED FROM PYTHON SOURCE LINES 19-24
+
+Load required libraries
+-----------------------
+
+Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the
+:mod:`csep.utils` subpackage.
+
+.. GENERATED FROM PYTHON SOURCE LINES 24-29
+
+.. code-block:: Python
+
+
+    import csep
+    from csep.core import regions, catalog_evaluations
+    from csep.utils import datasets, time_utils
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 30-35
+
+Define start and end times of forecast
+--------------------------------------
+
+Forecasts should define a time horizon in which they are valid. The choice is flexible for catalog-based forecasts, because
+the catalogs can be filtered to accommodate multiple end-times. Conceptually, these should be separate forecasts.
+
+.. GENERATED FROM PYTHON SOURCE LINES 35-39
+
+.. code-block:: Python
+
+
+    start_time = time_utils.strptime_to_utc_datetime("1992-06-28 11:57:35.0")
+    end_time = time_utils.strptime_to_utc_datetime("1992-07-28 11:57:35.0")
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 40-47
+
+Define spatial and magnitude regions
+------------------------------------
+
+Before we can compute the bin-wise rates we need to define a spatial region and a set of magnitude bin edges. The magnitude
+bin edges are the lower bound (inclusive) except for the last bin, which is treated as extending to infinity. We can
+bind these to the forecast object. This can also be done by passing them as keyword arguments
+into :func:`csep.load_catalog_forecast`.
+
+.. GENERATED FROM PYTHON SOURCE LINES 47-60
+
+.. code-block:: Python
+
+
+    # Magnitude bins properties
+    min_mw = 4.95
+    max_mw = 8.95
+    dmw = 0.1
+
+    # Create space and magnitude regions. The forecast is already filtered in space and magnitude
+    magnitudes = regions.magnitude_bins(min_mw, max_mw, dmw)
+    region = regions.california_relm_region()
+
+    # Bind region information to the forecast (this will be used for binning of the catalogs)
+    space_magnitude_region = regions.create_space_magnitude_region(region, magnitudes)
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 61-72
+
+Load catalog forecast
+---------------------
+
+To reduce the file size of this example, we've already filtered the catalogs to the appropriate magnitudes and
+spatial locations. The original forecast was computed for 1 year following the start date, so we still need to filter the
+catalog in time. We can do this by passing a list of filtering arguments to the forecast or updating the class.
+
+By default, the forecast loads catalogs on-demand, so the filters are applied as the catalog loads. On-demand means that
+until we loop over the forecast in some capacity, none of the catalogs are actually loaded.
+
+More fine-grained control and optimizations can be achieved by creating a :class:`csep.core.forecasts.CatalogForecast` directly.
+
+.. GENERATED FROM PYTHON SOURCE LINES 72-81
+
+.. code-block:: Python
+
+
+    forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname,
+                                          start_time = start_time, end_time = end_time,
+                                          region = space_magnitude_region,
+                                          apply_filters = True)
+
+    # Assign filters to forecast
+    forecast.filters = [f'origin_time >= {forecast.start_epoch}', f'origin_time < {forecast.end_epoch}']
+
+
+
+
+
+
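+
+To make the on-demand loading concrete: nothing is read from disk until the forecast is iterated. A minimal sketch
+(continuing with the ``forecast`` defined above; we assume the yielded catalogs expose the ``event_count`` attribute
+used elsewhere in these tutorials):
+
+.. code-block:: Python
+
+    # Iterating the forecast triggers loading and filtering of each synthetic
+    # catalog; e.g., collect the number of events per catalog.
+    event_counts = [cat.event_count for cat in forecast]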
+.. GENERATED FROM PYTHON SOURCE LINES 82-91
+
+Obtain evaluation catalog from ComCat
+-------------------------------------
+
+The :class:`csep.core.forecasts.CatalogForecast` provides a method to compute the expected number of events in spatial cells. This
+requires a region with magnitude information.
+
+We need to filter the ComCat catalog to be consistent with the forecast. This can be done either through the ComCat API
+or using catalog filtering strings. Here we'll use the ComCat API to make the data access quicker for this example. We
+still need to filter the observed catalog in space though.
+
+.. GENERATED FROM PYTHON SOURCE LINES 91-102
+
+.. code-block:: Python
+
+
+    # Obtain ComCat catalog and filter to region.
+    comcat_catalog = csep.query_comcat(start_time, end_time, min_magnitude=forecast.min_magnitude)
+
+    # Filter observed catalog using the same region as the forecast
+    comcat_catalog = comcat_catalog.filter_spatial(forecast.region)
+    print(comcat_catalog)
+
+    # Plot the catalog
+    comcat_catalog.plot()
+
+
+
+
+.. image-sg:: /tutorials/images/sphx_glr_catalog_forecast_evaluation_001.png
+   :alt: catalog forecast evaluation
+   :srcset: /tutorials/images/sphx_glr_catalog_forecast_evaluation_001.png
+   :class: sphx-glr-single-img
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+    Fetched ComCat catalog in 0.31020069122314453 seconds.
+
+    Downloaded catalog from ComCat with following parameters
+    Start Date: 1992-06-28 12:00:45+00:00
+    End Date: 1992-07-24 18:14:36.250000+00:00
+    Min Latitude: 33.901 and Max Latitude: 36.705
+    Min Longitude: -118.067 and Max Longitude: -116.285
+    Min Magnitude: 4.95
+    Found 19 events in the ComCat catalog.
+
+        Name: None
+
+        Start Date: 1992-06-28 12:00:45+00:00
+        End Date: 1992-07-24 18:14:36.250000+00:00
+
+        Latitude: (33.901, 36.705)
+        Longitude: (-118.067, -116.285)
+
+        Min Mw: 4.95
+        Max Mw: 6.3
+
+        Event Count: 19
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 103-107
+
+Perform number test
+-------------------
+
+We can perform the Number test on the catalog-based forecast using the observed catalog we obtained from ComCat.
+
+.. GENERATED FROM PYTHON SOURCE LINES 107-110
+
+.. code-block:: Python
+
+
+    number_test_result = catalog_evaluations.number_test(forecast, comcat_catalog)
+
+
+
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. 
code-block:: none + + Processed 1 catalogs in 0.0012161731719970703 seconds + Processed 2 catalogs in 0.0016498565673828125 seconds + Processed 3 catalogs in 0.0020482540130615234 seconds + Processed 4 catalogs in 0.0023813247680664062 seconds + Processed 5 catalogs in 0.002633333206176758 seconds + Processed 6 catalogs in 0.0029969215393066406 seconds + Processed 7 catalogs in 0.003269672393798828 seconds + Processed 8 catalogs in 0.003643035888671875 seconds + Processed 9 catalogs in 0.004563808441162109 seconds + Processed 10 catalogs in 0.004910945892333984 seconds + Processed 20 catalogs in 0.008572578430175781 seconds + Processed 30 catalogs in 0.01243734359741211 seconds + Processed 40 catalogs in 0.017767667770385742 seconds + Processed 50 catalogs in 0.022856950759887695 seconds + Processed 60 catalogs in 0.026311874389648438 seconds + Processed 70 catalogs in 0.029581785202026367 seconds + Processed 80 catalogs in 0.03316640853881836 seconds + Processed 90 catalogs in 0.036905527114868164 seconds + Processed 100 catalogs in 0.04017376899719238 seconds + Processed 200 catalogs in 0.07233262062072754 seconds + Processed 300 catalogs in 0.10814189910888672 seconds + Processed 400 catalogs in 0.14277887344360352 seconds + Processed 500 catalogs in 0.20580387115478516 seconds + Processed 600 catalogs in 0.23948097229003906 seconds + Processed 700 catalogs in 0.27393269538879395 seconds + Processed 800 catalogs in 0.34024786949157715 seconds + Processed 900 catalogs in 0.3736095428466797 seconds + Processed 1000 catalogs in 0.40767502784729004 seconds + Processed 2000 catalogs in 0.8836994171142578 seconds + Processed 3000 catalogs in 1.3175408840179443 seconds + Processed 4000 catalogs in 1.7736868858337402 seconds + Processed 5000 catalogs in 2.218679904937744 seconds + Processed 6000 catalogs in 2.6744518280029297 seconds + Processed 7000 catalogs in 3.1464571952819824 seconds + Processed 8000 catalogs in 3.5643150806427 seconds + Processed 9000 catalogs in 4.060769557952881 seconds + Processed 10000 catalogs in 4.497692346572876 seconds + + + + +.. GENERATED FROM PYTHON SOURCE LINES 111-115 + +Plot number test result +----------------------- + +We can create a simple visualization of the number test from the evaluation result class. + +.. GENERATED FROM PYTHON SOURCE LINES 115-116 + +.. code-block:: Python + + + ax = number_test_result.plot(show=True) + + +.. image-sg:: /tutorials/images/sphx_glr_catalog_forecast_evaluation_002.png + :alt: Number Test + :srcset: /tutorials/images/sphx_glr_catalog_forecast_evaluation_002.png + :class: sphx-glr-single-img + + + + + + +.. rst-class:: sphx-glr-timing + + **Total running time of the script:** (0 minutes 10.215 seconds) + + +.. _sphx_glr_download_tutorials_catalog_forecast_evaluation.py: + +.. only:: html + + .. container:: sphx-glr-footer sphx-glr-footer-example + + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: catalog_forecast_evaluation.ipynb ` + + .. container:: sphx-glr-download sphx-glr-download-python + + :download:`Download Python source code: catalog_forecast_evaluation.py ` + + .. container:: sphx-glr-download sphx-glr-download-zip + + :download:`Download zipped: catalog_forecast_evaluation.zip ` + + +.. only:: html + + .. 
rst-class:: sphx-glr-signature
+
+    `Gallery generated by Sphinx-Gallery `_
diff --git a/_sources/tutorials/gridded_forecast_evaluation.rst.txt b/_sources/tutorials/gridded_forecast_evaluation.rst.txt
new file mode 100644
index 00000000..1aa76302
--- /dev/null
+++ b/_sources/tutorials/gridded_forecast_evaluation.rst.txt
@@ -0,0 +1,446 @@
+
+.. DO NOT EDIT.
+.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
+.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
+.. "tutorials/gridded_forecast_evaluation.py"
+.. LINE NUMBERS ARE GIVEN BELOW.
+
+.. only:: html
+
+    .. note::
+        :class: sphx-glr-download-link-note
+
+        :ref:`Go to the end `
+        to download the full example code.
+
+.. rst-class:: sphx-glr-example-title
+
+.. _sphx_glr_tutorials_gridded_forecast_evaluation.py:
+
+
+.. _grid-forecast-evaluation:
+
+Grid-based Forecast Evaluation
+==============================
+
+This example demonstrates how to evaluate a grid-based and time-independent forecast. Grid-based
+forecasts assume the variability of the forecasts is Poissonian. Therefore, Poisson-based evaluations
+should be used to evaluate grid-based forecasts.
+
+Overview:
+    1. Define forecast properties (time horizon, spatial region, etc).
+    2. Obtain evaluation catalog
+    3. Apply Poissonian evaluations for grid-based forecasts
+    4. Store evaluation results using JSON format
+    5. Visualize evaluation results
+
+.. GENERATED FROM PYTHON SOURCE LINES 21-26
+
+Load required libraries
+-----------------------
+
+Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the
+:mod:`csep.utils` subpackage.
+
+.. GENERATED FROM PYTHON SOURCE LINES 26-34
+
+.. code-block:: Python
+
+
+    import csep
+    from csep.core import poisson_evaluations as poisson
+    from csep.utils import datasets, time_utils, plots
+
+    # Needed to show plots from the terminal
+    import matplotlib.pyplot as plt
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 35-41
+
+Define forecast properties
+--------------------------
+
+We choose a :ref:`time-independent-forecast` to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note,
+the start and end date should be chosen based on the creation of the forecast. This is important for time-independent forecasts
+because they can be rescaled to any arbitrary time period.
+
+.. GENERATED FROM PYTHON SOURCE LINES 41-46
+
+.. code-block:: Python
+
+    from csep.utils.stats import get_Kagan_I1_score
+
+    start_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0')
+    end_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0')
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 47-52
+
+Load forecast
+-------------
+
+For this example, we provide the example forecast data set along with the main repository. The filepath is relative
+to the root directory of the package. You can specify any file location for your forecasts.
+
+.. GENERATED FROM PYTHON SOURCE LINES 52-58
+
+.. code-block:: Python
+
+
+    forecast = csep.load_gridded_forecast(datasets.helmstetter_aftershock_fname,
+                                          start_date=start_date,
+                                          end_date=end_date,
+                                          name='helmstetter_aftershock')
+
+
+
+
+
+
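+
+As noted above, a time-independent forecast can be rescaled because its expected rates grow linearly with the forecast
+horizon. A minimal sketch that operates directly on the forecast's rate array (metadata such as the forecast dates is
+deliberately not handled here):
+
+.. code-block:: Python
+
+    # The loaded forecast spans 5 years; a 1-year version of the same
+    # time-independent forecast is obtained by scaling the rates by 1/5.
+    rates_one_year = forecast.data * (1.0 / 5.0)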
+.. GENERATED FROM PYTHON SOURCE LINES 59-65
+
+Load evaluation catalog
+-----------------------
+
+We will download the evaluation catalog from ComCat (this step requires an internet connection). We can use the ComCat API
+to filter the catalog in both time and magnitude. See the catalog filtering example for more information on how to
+filter the catalog in space and time manually.
+
+.. GENERATED FROM PYTHON SOURCE LINES 65-70
+
+.. code-block:: Python
+
+
+    print("Querying comcat catalog")
+    catalog = csep.query_comcat(forecast.start_time, forecast.end_time, min_magnitude=forecast.min_magnitude)
+    print(catalog)
+
+
+
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+    Querying comcat catalog
+    Fetched ComCat catalog in 6.506331920623779 seconds.
+
+    Downloaded catalog from ComCat with following parameters
+    Start Date: 2007-02-26 12:19:54.530000+00:00
+    End Date: 2011-02-18 17:47:35.770000+00:00
+    Min Latitude: 31.9788333 and Max Latitude: 41.1444
+    Min Longitude: -125.0161667 and Max Longitude: -114.8398
+    Min Magnitude: 4.96
+    Found 34 events in the ComCat catalog.
+
+        Name: None
+
+        Start Date: 2007-02-26 12:19:54.530000+00:00
+        End Date: 2011-02-18 17:47:35.770000+00:00
+
+        Latitude: (31.9788333, 41.1444)
+        Longitude: (-125.0161667, -114.8398)
+
+        Min Mw: 4.96
+        Max Mw: 7.2
+
+        Event Count: 34
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 71-75
+
+Filter evaluation catalog in space
+----------------------------------
+
+We need to remove events in the evaluation catalog outside the valid region specified by the forecast.
+
+.. GENERATED FROM PYTHON SOURCE LINES 75-79
+
+.. code-block:: Python
+
+
+    catalog = catalog.filter_spatial(forecast.region)
+    print(catalog)
+
+
+
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+
+        Name: None
+
+        Start Date: 2007-02-26 12:19:54.530000+00:00
+        End Date: 2011-02-18 17:47:35.770000+00:00
+
+        Latitude: (31.9788333, 41.1155)
+        Longitude: (-125.0161667, -115.0481667)
+
+        Min Mw: 4.96
+        Max Mw: 7.2
+
+        Event Count: 32
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 80-86
+
+Compute Poisson spatial test
+----------------------------
+
+Simply call the :func:`csep.core.poisson_evaluations.spatial_test` function to evaluate the forecast using the specified
+evaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose
+option prints the status of the simulations to the standard output.
+
+.. GENERATED FROM PYTHON SOURCE LINES 86-89
+
+.. code-block:: Python
+
+
+    spatial_test_result = poisson.spatial_test(forecast, catalog)
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 90-95
+
+Store evaluation results
+------------------------
+
+PyCSEP provides easy ways of storing objects to a JSON format using :func:`csep.write_json`. The evaluations can be read
+back into the program for plotting using :func:`csep.load_evaluation_result`.
+
+.. GENERATED FROM PYTHON SOURCE LINES 95-98
+
+.. code-block:: Python
+
+
+    csep.write_json(spatial_test_result, 'example_spatial_test.json')
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 99-104
+
+Plot spatial test results
+-------------------------
+
+We provide the function :func:`csep.utils.plots.plot_poisson_consistency_test` to visualize the evaluation results from
+consistency tests.
+
+.. GENERATED FROM PYTHON SOURCE LINES 104-109
+
+.. code-block:: Python
+
+
+    ax = plots.plot_poisson_consistency_test(spatial_test_result,
+                                             plot_args={'xlabel': 'Spatial likelihood'})
+    plt.show()
+
+
+
+
+.. image-sg:: /tutorials/images/sphx_glr_gridded_forecast_evaluation_001.png
+   :alt: Poisson S-Test
+   :srcset: /tutorials/images/sphx_glr_gridded_forecast_evaluation_001.png
+   :class: sphx-glr-single-img
+
+
+
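+
+The stored result can be read back later and plotted without re-running the test; a minimal sketch using the reader
+mentioned above:
+
+.. code-block:: Python
+
+    # Reload the stored evaluation result and plot it again.
+    loaded_result = csep.load_evaluation_result('example_spatial_test.json')
+    ax = plots.plot_poisson_consistency_test(loaded_result,
+                                             plot_args={'xlabel': 'Spatial likelihood'})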
+.. GENERATED FROM PYTHON SOURCE LINES 110-123
+
+Plot ROC Curves
+---------------
+
+We can also plot receiver operating characteristic (ROC) curves based on the forecast and the testing catalog.
+In the figure below, the "True Positive Rate" is the normalized cumulative forecast rate, after sorting cells in decreasing order of rate.
+The "False Positive Rate" is the normalized cumulative area.
+The dashed line is the ROC curve for a uniform forecast, meaning the likelihood for an earthquake to occur at any position is the same.
+The further the ROC curve of a forecast is from that of the uniform forecast, the more specific the forecast is.
+When comparing the forecast ROC curve against a catalog, one can evaluate if the forecast is more or less specific (or smooth) at different levels of seismic rate.
+
+Note: This figure just shows an example of plotting an ROC curve for a gridded forecast and an observed catalog.
+      If "linear=True" the diagram is represented using a linear x-axis.
+      If "linear=False" the diagram is represented using a logarithmic x-axis.
+
+.. GENERATED FROM PYTHON SOURCE LINES 123-131
+
+.. code-block:: Python
+
+
+
+    print("Plotting concentration ROC curve")
+    _= plots.plot_concentration_ROC_diagram(forecast, catalog, linear=True)
+
+
+
+
+
+
+.. image-sg:: /tutorials/images/sphx_glr_gridded_forecast_evaluation_002.png
+   :alt: Concentration ROC Curve
+   :srcset: /tutorials/images/sphx_glr_gridded_forecast_evaluation_002.png
+   :class: sphx-glr-single-img
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+    Plotting concentration ROC curve
+
+
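+
+The concentration ROC curve above can also be computed by hand from the gridded rates; a schematic, standalone sketch
+(not the PyCSEP implementation; it assumes equal-area cells and that the forecast exposes per-cell spatial rates via
+``spatial_counts``):
+
+.. code-block:: Python
+
+    import numpy
+
+    rates = forecast.spatial_counts()            # expected rate per spatial cell
+    order = numpy.argsort(rates)[::-1]           # cells in decreasing rate order
+    true_positive_rate = numpy.cumsum(rates[order]) / rates.sum()
+    false_positive_rate = numpy.arange(1, rates.size + 1) / rates.size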
+
+.. GENERATED FROM PYTHON SOURCE LINES 132-139
+
+Plot ROC and Molchan curves using the alarm-based approach
+----------------------------------------------------------
+
+In this script, we generate ROC diagrams and Molchan diagrams using the alarm-based approach to evaluate the predictive
+performance of models. This method exploits contingency table analysis to evaluate the predictive capabilities of
+forecasting models. By analysing the contingency table data, we determine the ROC curve and Molchan trajectory and
+estimate the Area Skill Score to assess the accuracy and reliability of the prediction models. The generated graphs
+visually represent the prediction performance.
+
+.. GENERATED FROM PYTHON SOURCE LINES 139-154
+
+.. code-block:: Python
+
+
+    # Note: If "linear=True" the diagram is represented using a linear x-axis.
+    # If "linear=False" the diagram is represented using a logarithmic x-axis.
+
+    print("Plotting ROC curve from the contingency table")
+    # Set linear=True to obtain a linear x-axis, False to obtain a logarithmic x-axis.
+    _ = plots.plot_ROC_diagram(forecast, catalog, linear=True)
+
+    print("Plotting Molchan curve from the contingency table and the Area Skill Score")
+    # Set linear=True to obtain a linear x-axis, False to obtain a logarithmic x-axis.
+    _ = plots.plot_Molchan_diagram(forecast, catalog, linear=True)
+
+
+
+
+
+
+.. rst-class:: sphx-glr-horizontal
+
+
+    *
+
+      .. image-sg:: /tutorials/images/sphx_glr_gridded_forecast_evaluation_003.png
+         :alt: ROC Curve from contingency table
+         :srcset: /tutorials/images/sphx_glr_gridded_forecast_evaluation_003.png
+         :class: sphx-glr-multi-img
+
+    *
+
+      .. image-sg:: /tutorials/images/sphx_glr_gridded_forecast_evaluation_004.png
+         :alt: gridded forecast evaluation
+         :srcset: /tutorials/images/sphx_glr_gridded_forecast_evaluation_004.png
+         :class: sphx-glr-multi-img
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+    Plotting ROC curve from the contingency table
+    Plotting Molchan curve from the contingency table and the Area Skill Score
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 155-160
+
+Calculate Kagan's I_1 score
+---------------------------
+
+We can also get Kagan's I_1 score for a gridded forecast
+(see Kagan, Y. Y. (2009). Testing long-term earthquake forecasts: likelihood methods and error diagrams, *Geophys. J. Int.* **177** 532-542).
+
+.. GENERATED FROM PYTHON SOURCE LINES 160-163
+
+.. code-block:: Python
+
+
+    I_1 = get_Kagan_I1_score(forecast, catalog)
+    print("I_1score is: ", I_1)
+
+
+
+
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+    I_1score is:  [2.31435371]
+
+
+
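+
+For orientation, the I_1 score measures the average information gain per observed event of the forecast relative to a
+spatially uniform rate model (our paraphrase; see Kagan, 2009 for the precise definition). Schematically,
+
+.. math::
+
+    I_1 = \frac{1}{N} \sum_{i=1}^{N} \log_2 \frac{\lambda_i}{\bar{\lambda}},
+
+where :math:`\lambda_i` is the forecast rate density in the cell containing the :math:`i`-th observed event,
+:math:`\bar{\lambda}` is the mean rate density over the region, and :math:`N` is the number of observed events.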
+
+.. rst-class:: sphx-glr-timing
+
+   **Total running time of the script:** (0 minutes 20.346 seconds)
+
+
+.. _sphx_glr_download_tutorials_gridded_forecast_evaluation.py:
+
+.. only:: html
+
+  .. container:: sphx-glr-footer sphx-glr-footer-example
+
+    .. container:: sphx-glr-download sphx-glr-download-jupyter
+
+      :download:`Download Jupyter notebook: gridded_forecast_evaluation.ipynb `
+
+    .. container:: sphx-glr-download sphx-glr-download-python
+
+      :download:`Download Python source code: gridded_forecast_evaluation.py `
+
+    .. container:: sphx-glr-download sphx-glr-download-zip
+
+      :download:`Download zipped: gridded_forecast_evaluation.zip `
+
+
+.. only:: html
+
+ .. rst-class:: sphx-glr-signature
+
+    `Gallery generated by Sphinx-Gallery `_
diff --git a/_sources/tutorials/index.rst.txt b/_sources/tutorials/index.rst.txt
new file mode 100644
index 00000000..383897ae
--- /dev/null
+++ b/_sources/tutorials/index.rst.txt
@@ -0,0 +1,158 @@
+:orphan:
+
+Tutorials
+=========
+
+
+
+.. raw:: html
+
+    <div class="sphx-glr-thumbnails">
+
+.. thumbnail-parent-div-open
+
+.. raw:: html
+
+    <div class="sphx-glr-thumbcontainer">
+
+.. only:: html
+
+  .. image:: /tutorials/images/thumb/sphx_glr_catalog_filtering_thumb.png
+    :alt:
+
+  :ref:`sphx_glr_tutorials_catalog_filtering.py`
+
+.. raw:: html
+
+      <div class="sphx-glr-thumbnail-title">Catalogs operations</div>
+    </div>
+
+
+.. raw:: html
+
+    <div class="sphx-glr-thumbcontainer">
+
+.. only:: html
+
+  .. image:: /tutorials/images/thumb/sphx_glr_catalog_forecast_evaluation_thumb.png
+    :alt:
+
+  :ref:`sphx_glr_tutorials_catalog_forecast_evaluation.py`
+
+.. raw:: html
+
+      <div class="sphx-glr-thumbnail-title">Catalog-based Forecast Evaluation</div>
+    </div>
+
+
+.. raw:: html
+
+    <div class="sphx-glr-thumbcontainer">
+
+.. only:: html
+
+  .. image:: /tutorials/images/thumb/sphx_glr_gridded_forecast_evaluation_thumb.png
+    :alt:
+
+  :ref:`sphx_glr_tutorials_gridded_forecast_evaluation.py`
+
+.. raw:: html
+
+      <div class="sphx-glr-thumbnail-title">Grid-based Forecast Evaluation</div>
+    </div>
+
+
+.. raw:: html
+
+    <div class="sphx-glr-thumbcontainer">
+
+.. only:: html
+
+  .. image:: /tutorials/images/thumb/sphx_glr_plot_customizations_thumb.png
+    :alt:
+
+  :ref:`sphx_glr_tutorials_plot_customizations.py`
+
+.. raw:: html
+
+      <div class="sphx-glr-thumbnail-title">Plot customizations</div>
+    </div>
+
+
+.. raw:: html
+
+    <div class="sphx-glr-thumbcontainer">
+
+.. only:: html
+
+  .. image:: /tutorials/images/thumb/sphx_glr_plot_gridded_forecast_thumb.png
+    :alt:
+
+  :ref:`sphx_glr_tutorials_plot_gridded_forecast.py`
+
+.. raw:: html
+
+      <div class="sphx-glr-thumbnail-title">Plotting gridded forecast</div>
+    </div>
+
+
+.. raw:: html
+
+    <div class="sphx-glr-thumbcontainer">
+
+.. only:: html
+
+  .. image:: /tutorials/images/thumb/sphx_glr_quadtree_gridded_forecast_evaluation_thumb.png
+    :alt:
+
+  :ref:`sphx_glr_tutorials_quadtree_gridded_forecast_evaluation.py`
+
+.. raw:: html
+
+      <div class="sphx-glr-thumbnail-title">Quadtree Grid-based Forecast Evaluation</div>
+    </div>
+
+
+.. raw:: html
+
+    <div class="sphx-glr-thumbcontainer">
+
+.. only:: html
+
+  .. image:: /tutorials/images/thumb/sphx_glr_working_with_catalog_forecasts_thumb.png
+    :alt:
+
+  :ref:`sphx_glr_tutorials_working_with_catalog_forecasts.py`
+
+.. raw:: html
+
+      <div class="sphx-glr-thumbnail-title">Working with catalog-based forecasts</div>
+    </div>
+
+
+.. thumbnail-parent-div-close
+
+.. raw:: html
+
+    </div>
+
+
+.. toctree::
+   :hidden:
+
+   /tutorials/catalog_filtering
+   /tutorials/catalog_forecast_evaluation
+   /tutorials/gridded_forecast_evaluation
+   /tutorials/plot_customizations
+   /tutorials/plot_gridded_forecast
+   /tutorials/quadtree_gridded_forecast_evaluation
+   /tutorials/working_with_catalog_forecasts
+
+
+
+.. only:: html
+
+ .. rst-class:: sphx-glr-signature
+
+    `Gallery generated by Sphinx-Gallery `_
diff --git a/_sources/tutorials/plot_customizations.rst.txt b/_sources/tutorials/plot_customizations.rst.txt
new file mode 100644
index 00000000..f1df41ee
--- /dev/null
+++ b/_sources/tutorials/plot_customizations.rst.txt
@@ -0,0 +1,453 @@
+
+.. DO NOT EDIT.
+.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
+.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
+.. "tutorials/plot_customizations.py"
+.. LINE NUMBERS ARE GIVEN BELOW.
+
+.. only:: html
+
+    .. note::
+        :class: sphx-glr-download-link-note
+
+        :ref:`Go to the end `
+        to download the full example code.
+
+.. rst-class:: sphx-glr-example-title
+
+.. _sphx_glr_tutorials_plot_customizations.py:
+
+
+Plot customizations
+===================
+
+This example shows how to include some advanced options in the spatial visualization
+of Gridded Forecasts and Evaluation Results.
+
+Overview:
+    1. Define optional plotting arguments
+    2. Set extent of maps
+    3. Visualizing selected magnitude bins
+    4. Plot global maps
+    5. Plot multiple Evaluation Results
+
+.. GENERATED FROM PYTHON SOURCE LINES 18-20
+
+Example 1: Spatial dataset plot arguments
+-----------------------------------------
+
+.. GENERATED FROM PYTHON SOURCE LINES 22-23
+
+**Load required libraries**
+
+.. GENERATED FROM PYTHON SOURCE LINES 23-31
+
+.. code-block:: Python
+
+
+    import csep
+    import cartopy
+    import numpy
+    from csep.utils import datasets, plots
+
+    import matplotlib.pyplot as plt
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 32-34
+
+**Load a Grid Forecast from the datasets**
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 34-36
+
+.. code-block:: Python
+
+    forecast = csep.load_gridded_forecast(datasets.hires_ssm_italy_fname,
+                                          name='Werner, et al (2010) Italy')
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 37-40
+
+**Selecting plotting arguments**
+
+Create a dictionary containing the plot arguments
+
+.. GENERATED FROM PYTHON SOURCE LINES 40-48
+
+.. code-block:: Python
+
+    args_dict = {'title': 'Italy 10 year forecast',
+                 'grid_labels': True,
+                 'borders': True,
+                 'feature_lw': 0.5,
+                 'basemap': 'ESRI_imagery',
+                 'cmap': 'rainbow',
+                 'alpha_exp': 0.8,
+                 'projection': cartopy.crs.Mercator()}
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 49-61
+
+These arguments are, in order:
+
+* Assign a title
+* Set labels to the geographic axes
+* Draw country borders
+* Set a linewidth of 0.5 to country borders
+* Select ESRI Imagery as a basemap.
+* Assign ``'rainbow'`` as colormap. Possible values come from the ``matplotlib.cm`` library
+* Set 0.8 for the exponential transparency function (the default is 0 for constant alpha, whereas 1 is linear).
+* A cartopy.crs.Projection object is passed as the projection of the map
+
+The complete description of plot arguments can be found in :func:`csep.utils.plots.plot_spatial_dataset`
+
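+
+Individual entries can be overridden without rebuilding the whole dictionary; a small sketch (``'viridis'`` is just an
+example value, any colormap name from ``matplotlib.cm`` works):
+
+.. code-block:: Python
+
+    # Copy the arguments above and swap only the colormap.
+    custom_args = dict(args_dict, cmap='viridis')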
+.. GENERATED FROM PYTHON SOURCE LINES 63-66
+
+**Plotting the dataset**
+
+The map `extent` can be defined. Otherwise, the extent of the data is used. The dictionary defined above must be passed as an argument.
+
+.. GENERATED FROM PYTHON SOURCE LINES 66-71
+
+.. code-block:: Python
+
+
+    ax = forecast.plot(extent=[3, 22, 35, 48],
+                       show=True,
+                       plot_args=args_dict)
+
+
+
+
+.. image-sg:: /tutorials/images/sphx_glr_plot_customizations_001.png
+   :alt: Italy 10 year forecast
+   :srcset: /tutorials/images/sphx_glr_plot_customizations_001.png
+   :class: sphx-glr-single-img
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 72-79
+
+Example 2: Plot a global forecast and a selected magnitude bin range
+--------------------------------------------------------------------
+
+
+**Load a Global Forecast from the datasets**
+
+A downsampled version of the `GEAR1 `_ forecast can be found in datasets.
+
+.. GENERATED FROM PYTHON SOURCE LINES 79-83
+
+.. code-block:: Python
+
+
+    forecast = csep.load_gridded_forecast(datasets.gear1_downsampled_fname,
+                                          name='GEAR1 Forecast (downsampled)')
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 84-87
+
+**Filter by magnitudes**
+
+We get the rate of events with 6.15<=M_w<=7.55
+
+.. GENERATED FROM PYTHON SOURCE LINES 87-94
+
+.. code-block:: Python
+
+
+    low_bound = 6.15
+    upper_bound = 7.55
+    mw_bins = forecast.get_magnitudes()
+    mw_ind = numpy.where(numpy.logical_and( mw_bins >= low_bound, mw_bins <= upper_bound))[0]
+    rates_mw = forecast.data[:, mw_ind]
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 95-96
+
+We get the total rate between these magnitudes
+
+.. GENERATED FROM PYTHON SOURCE LINES 96-99
+
+.. code-block:: Python
+
+
+    rate_sum = rates_mw.sum(axis=1)
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 100-101
+
+The data is stored in a 1D array, so it should be projected onto the `region` 2D Cartesian grid.
+
+.. GENERATED FROM PYTHON SOURCE LINES 101-104
+
+.. code-block:: Python
+
+
+    rate_sum = forecast.region.get_cartesian(rate_sum)
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 105-108
+
+**Define plot arguments**
+
+We define the arguments and a global projection, centered at $lon=180$
+
+.. GENERATED FROM PYTHON SOURCE LINES 108-117
+
+.. code-block:: Python
+
+
+    plot_args = {'figsize': (10,6), 'coastline':True, 'feature_color':'black',
+                 'projection': cartopy.crs.Robinson(central_longitude=180.0),
+                 'title': forecast.name, 'grid_labels': False,
+                 'cmap': 'magma',
+                 'clabel': r'$\log_{10}\lambda\left(M_w \in [{%.2f},\,{%.2f}]\right)$ per '
+                           r'${%.1f}^\circ\times {%.1f}^\circ $ per forecast period' %
+                           (low_bound, upper_bound, forecast.region.dh, forecast.region.dh)}
+
+
+
+
+
+
+
+.. GENERATED FROM PYTHON SOURCE LINES 118-121
+
+**Plotting the dataset**
+To plot a global forecast, we must assign the option ``set_global=True``, which is required by cartopy to handle
+the extent of the plot internally.
+
+.. GENERATED FROM PYTHON SOURCE LINES 121-126
+
+.. code-block:: Python
+
+
+    ax = plots.plot_spatial_dataset(numpy.log10(rate_sum), forecast.region,
+                                    show=True, set_global=True,
+                                    plot_args=plot_args)
+
+
+
+
+.. image-sg:: /tutorials/images/sphx_glr_plot_customizations_002.png
+   :alt: GEAR1 Forecast (downsampled)
+   :srcset: /tutorials/images/sphx_glr_plot_customizations_002.png
+   :class: sphx-glr-single-img
+
+
+
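+
+As a quick sanity check on the magnitude slice used above, the total expected number of events in the selected range
+is simply the sum of the selected rates (continuing with the arrays defined earlier):
+
+.. code-block:: Python
+
+    # Expected number of 6.15 <= Mw <= 7.55 events over the forecast period.
+    n_expected = rates_mw.sum()
+    print(f"Expected events in range: {n_expected:.2f}")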
+ +.. code-block:: Python + + + start_time = csep.utils.time_utils.strptime_to_utc_datetime('1995-01-01 00:00:00.0') + end_time = csep.utils.time_utils.strptime_to_utc_datetime('2015-01-01 00:00:00.0') + min_mag = 3.95 + catalog = csep.query_comcat(start_time, end_time, min_magnitude=min_mag, verbose=False) + + # **Define plotting arguments** + plot_args = {'basemap': 'ESRI_terrain', + 'markersize': 2, + 'markercolor': 'red', + 'alpha': 0.3, + 'mag_scale': 7, + 'legend': True, + 'legend_loc': 3, + 'mag_ticks': [4.0, 5.0, 6.0, 7.0]} + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + Fetched ComCat catalog in 20.1857488155365 seconds. + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 153-163 + +These arguments are, in order: + +* Assign the ESRI_terrain web service as the basemap +* Set a minimum marker size of 2 with red color +* Set the transparency to 0.3 +* Use ``mag_scale`` to exponentially scale the marker size with respect to magnitude (recommended values: 1-8) +* Enable the legend and place it at location 3 (lower-left corner) +* Set a list of magnitude ticks to display in the legend + +The complete description of plot arguments can be found in :func:`csep.utils.plots.plot_catalog` + +.. GENERATED FROM PYTHON SOURCE LINES 165-170 + +.. code-block:: Python + + + # **Plot the catalog** + ax = catalog.plot(show=False, plot_args=plot_args) + + + + + +.. image-sg:: /tutorials/images/sphx_glr_plot_customizations_003.png + :alt: plot customizations + :srcset: /tutorials/images/sphx_glr_plot_customizations_003.png + :class: sphx-glr-single-img + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 171-173 + +Example 4: Plot multiple evaluation results +------------------------------------------- + +.. GENERATED FROM PYTHON SOURCE LINES 175-177 + +Load L-test results from example .json files (see +:doc:`gridded_forecast_evaluation` for information on calculating and storing evaluation results) + +.. GENERATED FROM PYTHON SOURCE LINES 177-190 + +.. code-block:: Python + + + L_results = [csep.load_evaluation_result(i) for i in datasets.l_test_examples] + args = {'figsize': (6,5), + 'title': r'$\mathcal{L}-\mathrm{test}$', + 'title_fontsize': 18, + 'xlabel': 'Log-likelihood', + 'xticks_fontsize': 9, + 'ylabel_fontsize': 9, + 'linewidth': 0.8, + 'capsize': 3, + 'hbars':True, + 'tight_layout': True} + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 191-194 + +A description of the plot arguments can be found in :func:`csep.utils.plots.plot_poisson_consistency_test`. +We set ``one_sided_lower=True``, as is usual for an L-test, where the model is rejected if the observed statistic +falls within the lower tail of the simulated distribution. + +.. GENERATED FROM PYTHON SOURCE LINES 194-200 + +.. code-block:: Python + + ax = plots.plot_poisson_consistency_test(L_results, one_sided_lower=True, plot_args=args) + + # Needed to show plots if running as script + plt.show() + + + + + +.. image-sg:: /tutorials/images/sphx_glr_plot_customizations_004.png + :alt: $\mathcal{L}-\mathrm{test}$ + :srcset: /tutorials/images/sphx_glr_plot_customizations_004.png + :class: sphx-glr-single-img + + + + + + +.. rst-class:: sphx-glr-timing + + **Total running time of the script:** (0 minutes 46.068 seconds) + + +.. _sphx_glr_download_tutorials_plot_customizations.py: + +.. only:: html + + .. container:: sphx-glr-footer sphx-glr-footer-example + + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_customizations.ipynb ` + + .. 
container:: sphx-glr-download sphx-glr-download-python + + :download:`Download Python source code: plot_customizations.py ` + + .. container:: sphx-glr-download sphx-glr-download-zip + + :download:`Download zipped: plot_customizations.zip ` + + +.. only:: html + + .. rst-class:: sphx-glr-signature + + `Gallery generated by Sphinx-Gallery `_ diff --git a/_sources/tutorials/plot_gridded_forecast.rst.txt b/_sources/tutorials/plot_gridded_forecast.rst.txt new file mode 100644 index 00000000..1c68749f --- /dev/null +++ b/_sources/tutorials/plot_gridded_forecast.rst.txt @@ -0,0 +1,154 @@ + +.. DO NOT EDIT. +.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. +.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: +.. "tutorials/plot_gridded_forecast.py" +.. LINE NUMBERS ARE GIVEN BELOW. + +.. only:: html + + .. note:: + :class: sphx-glr-download-link-note + + :ref:`Go to the end ` + to download the full example code. + +.. rst-class:: sphx-glr-example-title + +.. _sphx_glr_tutorials_plot_gridded_forecast.py: + + +Plotting gridded forecast +========================= + +This example shows how to load a gridded forecast stored in the default ASCII format. + +.. GENERATED FROM PYTHON SOURCE LINES 9-14 + +Load required libraries +----------------------- + +Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the +:mod:`csep.utils` subpackage. + +.. GENERATED FROM PYTHON SOURCE LINES 14-18 + +.. code-block:: Python + + + import csep + from csep.utils import datasets, time_utils + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 19-25 + +Define forecast properties +-------------------------- + +We choose a :ref:`time-independent-forecast` to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note that +the start and end dates should be chosen based on the creation of the forecast. This is important for time-independent forecasts +because they can be rescaled to any arbitrary time period. + +.. GENERATED FROM PYTHON SOURCE LINES 25-29 + +.. code-block:: Python + + + start_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0') + end_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0') + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 30-35 + +Load forecast +------------- + +For this example, we provide the example forecast data set along with the main repository. The filepath is relative +to the root directory of the package. You can specify any file location for your forecasts. + +.. GENERATED FROM PYTHON SOURCE LINES 35-41 + +.. code-block:: Python + + + forecast = csep.load_gridded_forecast(datasets.helmstetter_mainshock_fname, + start_date=start_date, + end_date=end_date, + name='helmstetter_mainshock') + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 42-47 + +Plot forecast +------------- + +The forecast object provides :meth:`csep.core.forecasts.GriddedForecast.plot` to plot a gridded forecast. This function +returns a matplotlib axes object, so more specific attributes can be set on the figure. + +.. GENERATED FROM PYTHON SOURCE LINES 47-50 + +.. code-block:: Python + + + ax = forecast.plot(show=True) + + + + +.. image-sg:: /tutorials/images/sphx_glr_plot_gridded_forecast_001.png + :alt: helmstetter_mainshock + :srcset: /tutorials/images/sphx_glr_plot_gridded_forecast_001.png + :class: sphx-glr-single-img + + + + + + +.. rst-class:: sphx-glr-timing + + **Total running time of the script:** (0 minutes 0.938 seconds) + + +.. _sphx_glr_download_tutorials_plot_gridded_forecast.py: + +.. 
only:: html + + .. container:: sphx-glr-footer sphx-glr-footer-example + + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_gridded_forecast.ipynb ` + + .. container:: sphx-glr-download sphx-glr-download-python + + :download:`Download Python source code: plot_gridded_forecast.py ` + + .. container:: sphx-glr-download sphx-glr-download-zip + + :download:`Download zipped: plot_gridded_forecast.zip ` + + +.. only:: html + + .. rst-class:: sphx-glr-signature + + `Gallery generated by Sphinx-Gallery `_ diff --git a/_sources/tutorials/quadtree_gridded_forecast_evaluation.rst.txt b/_sources/tutorials/quadtree_gridded_forecast_evaluation.rst.txt new file mode 100644 index 00000000..48ff125d --- /dev/null +++ b/_sources/tutorials/quadtree_gridded_forecast_evaluation.rst.txt @@ -0,0 +1,445 @@ + +.. DO NOT EDIT. +.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. +.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: +.. "tutorials/quadtree_gridded_forecast_evaluation.py" +.. LINE NUMBERS ARE GIVEN BELOW. + +.. only:: html + + .. note:: + :class: sphx-glr-download-link-note + + :ref:`Go to the end ` + to download the full example code. + +.. rst-class:: sphx-glr-example-title + +.. _sphx_glr_tutorials_quadtree_gridded_forecast_evaluation.py: + + +.. _quadtree_gridded-forecast-evaluation: + +Quadtree Grid-based Forecast Evaluation +======================================= + +This example demonstrates how to create quadtree-based single-resolution and multi-resolution grids. +A multi-resolution grid is created from an earthquake catalog, in which seismic density determines the size of each grid cell. +To create a multi-resolution grid, we select a threshold (:math:`N_{max}`) as the maximum number of earthquakes allowed in each cell. +For a single-resolution grid, we simply select a zoom level (L). +The number of cells in a single-resolution grid is :math:`4^L`; the zoom level L=11 leads to about 4.2 million cells, the closest to a 0.1 x 0.1 grid. + +We use these grids to create and evaluate a time-independent forecast. Grid-based +forecasts assume the variability of the forecasts is Poissonian. Therefore, Poisson-based evaluations +should be used to evaluate grid-based forecasts defined using quadtree regions. + +Overview: + 1. Define spatial grids + - Multi-resolution grid + - Single-resolution grid + 2. Load forecasts + - Multi-resolution forecast + - Single-resolution forecast + 3. Load evaluation catalog + 4. Apply Poissonian evaluations for both grid-based forecasts + 5. Visualize evaluation results + +.. GENERATED FROM PYTHON SOURCE LINES 31-36 + +Load required libraries +----------------------- + +Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the +:mod:`csep.utils` subpackage. + +.. GENERATED FROM PYTHON SOURCE LINES 36-45 + +.. code-block:: Python + + import numpy + import pandas + from csep.core import poisson_evaluations as poisson + from csep.utils import time_utils, plots + from csep.core.regions import QuadtreeGrid2D + from csep.core.forecasts import GriddedForecast + from csep.utils.time_utils import decimal_year_to_utc_epoch + from csep.core.catalogs import CSEPCatalog + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 46-52 + +Load Training Catalog for Multi-resolution grid +---------------------------------------------- + +We define a multi-resolution quadtree grid using an earthquake catalog. 
We load a training catalog into PyCSEP and use that catalog to create a multi-resolution grid. +Sometimes the catalog is not in the exact format required by PyCSEP, so we can read it using pandas and convert it +into a format PyCSEP accepts. Then we instantiate an object of the class CSEPCatalog by calling the function :func:`csep.core.catalogs.CSEPCatalog.from_dataframe` + +.. GENERATED FROM PYTHON SOURCE LINES 52-71 + +.. code-block:: Python + + + dfcat = pandas.read_csv('cat_train_2013.csv') + column_name_mapper = { + 'lon': 'longitude', + 'lat': 'latitude', + 'mag': 'magnitude', + 'index': 'id' + } + + # maps the column names to the dtype expected by the catalog class + dfcat = dfcat.reset_index().rename(columns=column_name_mapper) + + # create the origin_times from decimal years + dfcat['origin_time'] = dfcat.apply(lambda row: decimal_year_to_utc_epoch(row.year), axis=1) + + # create catalog from dataframe + catalog_train = CSEPCatalog.from_dataframe(dfcat) + print(catalog_train) + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + + Name: None + + Start Date: 1976-01-01 00:00:00+00:00 + End Date: 2013-01-01 00:00:00+00:00 + + Latitude: (-77.16000366, 87.01999664) + Longitude: (-180.0, 180.0) + + Min Mw: 5.150024414 + Max Mw: 9.08350563 + + Event Count: 28465 + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 72-77 + +Define Multi-resolution Gridded Region +------------------------------------------------ + +Now we define a threshold for the maximum number of earthquakes allowed per cell, i.e. :math:`N_{max}`, +and call :func:`csep.core.regions.QuadtreeGrid2D.from_catalog` to create a multi-resolution grid. +For simplicity we assume a single magnitude bin, i.e. all earthquakes with magnitude greater than or equal to 5.95 + +.. GENERATED FROM PYTHON SOURCE LINES 77-84 + +.. code-block:: Python + + + mbins = numpy.array([5.95]) + Nmax = 25 + r_multi = QuadtreeGrid2D.from_catalog(catalog_train, Nmax, magnitudes=mbins) + print('Number of cells in Multi-resolution grid :', r_multi.num_nodes) + + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + Number of cells in Multi-resolution grid : 3502 + + + + +.. GENERATED FROM PYTHON SOURCE LINES 85-90 + +Define Single-resolution Gridded Region +---------------------------------------- + +Here, as an example, we define a single-resolution grid at zoom level L=6. For this purpose +we call :func:`csep.core.regions.QuadtreeGrid2D.from_single_resolution` to create a single-resolution grid. + +.. GENERATED FROM PYTHON SOURCE LINES 90-99 + +.. code-block:: Python + + + # For simplicity of the example, we assume only a single magnitude bin, + # i.e. all earthquakes with magnitude greater than or equal to 5.95 + + mbins = numpy.array([5.95]) + r_single = QuadtreeGrid2D.from_single_resolution(6, magnitudes=mbins) + print('Number of cells in Single-Resolution grid :', r_single.num_nodes) + + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + Number of cells in Single-Resolution grid : 4096 + + + + +.. GENERATED FROM PYTHON SOURCE LINES 100-105 + +Load forecast of multi-resolution grid +-------------------------------------- + +An example time-independent forecast was created for this grid and is provided in the example forecast data set along with the main repository. +We load this time-independent global forecast, which has a time horizon of 1 year. +The filepath is relative to the root directory of the package. You can specify any file location for your forecasts. + +.. GENERATED FROM PYTHON SOURCE LINES 105-119 +
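+ +Because the forecast is time-independent, rescaling it to a longer horizon simply multiplies every bin rate by the ratio of the two horizons. For the 6-year test window used below, + +.. math:: + + N_{6\,\mathrm{yr}} = 6 \times N_{1\,\mathrm{yr}} = 6 \times 116.1857 \approx 697.1141 + +which matches the expected event counts printed after calling ``scale(6)`` in the next blocks.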
+ +.. code-block:: Python + + + forecast_data = numpy.loadtxt('example_rate_zoom=EQ10L11.csv') + # Reshape forecast as an Nx1 array + forecast_data = forecast_data.reshape(-1,1) + + forecast_multi_grid = GriddedForecast(data=forecast_data, region=r_multi, magnitudes=mbins, name='Example Multi-res Forecast') + + # The loaded forecast is for 1 year; the test catalog we will use to evaluate it spans 6 years, so we rescale the forecast. + print(f"expected event count before scaling: {forecast_multi_grid.event_count}") + forecast_multi_grid.scale(6) + print(f"expected event count after scaling: {forecast_multi_grid.event_count}") + + + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + expected event count before scaling: 116.18568954606255 + expected event count after scaling: 697.1141372763753 + + + + +.. GENERATED FROM PYTHON SOURCE LINES 120-124 + +Load forecast of single-resolution grid +--------------------------------------- + +We have already created a time-independent global forecast with a time horizon of 1 year and provided it with the repository. +The filepath is relative to the root directory of the package. You can specify any file location for your forecasts. + +.. GENERATED FROM PYTHON SOURCE LINES 124-139 + +.. code-block:: Python + + + forecast_data = numpy.loadtxt('example_rate_zoom=6.csv') + # Reshape forecast as an Nx1 array + forecast_data = forecast_data.reshape(-1,1) + + forecast_single_grid = GriddedForecast(data=forecast_data, region=r_single, + magnitudes=mbins, name='Example Single-res Forecast') + + # The loaded forecast is for 1 year; the test catalog we will use spans 6 years, so we rescale the forecast. + print(f"expected event count before scaling: {forecast_single_grid.event_count}") + forecast_single_grid.scale(6) + print(f"expected event count after scaling: {forecast_single_grid.event_count}") + + + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + expected event count before scaling: 116.18568954606256 + expected event count after scaling: 697.1141372763753 + + + + +.. GENERATED FROM PYTHON SOURCE LINES 140-145 + +Load evaluation catalog +----------------------- + +We have a test catalog stored here. We read the test catalog as a pandas dataframe and convert it into a format that is acceptable to PyCSEP. +Then we instantiate a catalog object. + +.. GENERATED FROM PYTHON SOURCE LINES 145-163 + +.. code-block:: Python + + + dfcat = pandas.read_csv('cat_test.csv') + + column_name_mapper = { + 'lon': 'longitude', + 'lat': 'latitude', + 'mag': 'magnitude' + } + + # maps the column names to the dtype expected by the catalog class + dfcat = dfcat.reset_index().rename(columns=column_name_mapper) + # create the origin_times from decimal years + dfcat['origin_time'] = dfcat.apply(lambda row: decimal_year_to_utc_epoch(row.year), axis=1) + + # create catalog from dataframe + catalog = CSEPCatalog.from_dataframe(dfcat) + print(catalog) + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + + Name: None + + Start Date: 2014-01-01 00:00:00+00:00 + End Date: 2019-01-01 00:00:00+00:00 + + Latitude: (-63.26, 74.39) + Longitude: (-179.23, 179.66) + + Min Mw: 5.95047692260089 + Max Mw: 8.27271203001144 + + Event Count: 651 + + + + +
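+ +Before running the tests, it is worth double-checking that the evaluation catalog respects the forecast's magnitude range. A minimal sketch using the catalog filter interface, with the threshold 5.95 mirroring the single magnitude bin defined above: + +.. code-block:: Python + + # Keep only events at or above the forecast's minimum magnitude + catalog = catalog.filter('magnitude >= 5.95') + print(catalog.event_count)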
+ +.. GENERATED FROM PYTHON SOURCE LINES 164-173 + +Compute Poisson spatial test and Number test +------------------------------------------------------ + +Simply call the :func:`csep.core.poisson_evaluations.spatial_test` and :func:`csep.core.poisson_evaluations.number_test` functions to evaluate the forecast using the specified +evaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose +option prints the status of the simulations to the standard output. + +Note: before we use the evaluation catalog, we need to link the gridded region to the observed catalog. +Since we have two different grids here, we do this separately for each grid. + +.. GENERATED FROM PYTHON SOURCE LINES 173-186 + +.. code-block:: Python + + + # For the multi-resolution grid, link the region to the catalog. + catalog.region = forecast_multi_grid.region + spatial_test_multi_res_result = poisson.spatial_test(forecast_multi_grid, catalog) + number_test_multi_res_result = poisson.number_test(forecast_multi_grid, catalog) + + + # For the single-resolution grid, link the region to the catalog. + catalog.region = forecast_single_grid.region + spatial_test_single_res_result = poisson.spatial_test(forecast_single_grid, catalog) + number_test_single_res_result = poisson.number_test(forecast_single_grid, catalog) + + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 187-192 + +Plot spatial test results +------------------------- + +We provide the function :func:`csep.utils.plots.plot_poisson_consistency_test` to visualize the evaluation results from +consistency tests. + +.. GENERATED FROM PYTHON SOURCE LINES 192-201 + +.. code-block:: Python + + + + stest_result = [spatial_test_single_res_result, spatial_test_multi_res_result] + ax_spatial = plots.plot_poisson_consistency_test(stest_result, + plot_args={'xlabel': 'Spatial likelihood'}) + + ntest_result = [number_test_single_res_result, number_test_multi_res_result] + ax_number = plots.plot_poisson_consistency_test(ntest_result, + plot_args={'xlabel': 'Number of Earthquakes'}) + + + +.. rst-class:: sphx-glr-horizontal + + + * + + .. image-sg:: /tutorials/images/sphx_glr_quadtree_gridded_forecast_evaluation_001.png + :alt: Poisson S-Test + :srcset: /tutorials/images/sphx_glr_quadtree_gridded_forecast_evaluation_001.png + :class: sphx-glr-multi-img + + * + + .. image-sg:: /tutorials/images/sphx_glr_quadtree_gridded_forecast_evaluation_002.png + :alt: Poisson N-Test + :srcset: /tutorials/images/sphx_glr_quadtree_gridded_forecast_evaluation_002.png + :class: sphx-glr-multi-img + + + + + + +.. rst-class:: sphx-glr-timing + + **Total running time of the script:** (0 minutes 1.811 seconds) + + +.. _sphx_glr_download_tutorials_quadtree_gridded_forecast_evaluation.py: + +.. only:: html + + .. container:: sphx-glr-footer sphx-glr-footer-example + + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: quadtree_gridded_forecast_evaluation.ipynb ` + + .. container:: sphx-glr-download sphx-glr-download-python + + :download:`Download Python source code: quadtree_gridded_forecast_evaluation.py ` + + .. container:: sphx-glr-download sphx-glr-download-zip + + :download:`Download zipped: quadtree_gridded_forecast_evaluation.zip ` + + +.. only:: html + + .. 
rst-class:: sphx-glr-signature + + `Gallery generated by Sphinx-Gallery `_ diff --git a/_sources/tutorials/working_with_catalog_forecasts.rst.txt b/_sources/tutorials/working_with_catalog_forecasts.rst.txt new file mode 100644 index 00000000..f4358fdc --- /dev/null +++ b/_sources/tutorials/working_with_catalog_forecasts.rst.txt @@ -0,0 +1,264 @@ + +.. DO NOT EDIT. +.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. +.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: +.. "tutorials/working_with_catalog_forecasts.py" +.. LINE NUMBERS ARE GIVEN BELOW. + +.. only:: html + + .. note:: + :class: sphx-glr-download-link-note + + :ref:`Go to the end ` + to download the full example code. + +.. rst-class:: sphx-glr-example-title + +.. _sphx_glr_tutorials_working_with_catalog_forecasts.py: + + +Working with catalog-based forecasts +==================================== + +This example shows some basic interactions with catalog-based forecasts. We will load in a forecast stored in the CSEP +data format, and compute the expected rates on a 0.1° x 0.1° grid covering the state of California. We will plot the +expected rates in the spatial cells. + +Overview: + 1. Define forecast properties (time horizon, spatial region, etc). + 2. Compute the expected rates in space and magnitude bins + 3. Plot expected rates in the spatial cells + +.. GENERATED FROM PYTHON SOURCE LINES 16-21 + +Load required libraries +----------------------- + +Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the +:mod:`csep.utils` subpackage. + +.. GENERATED FROM PYTHON SOURCE LINES 21-28 + +.. code-block:: Python + + + import numpy + + import csep + from csep.core import regions + from csep.utils import datasets + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 29-34 + +Load catalog forecast +--------------------- + +PyCSEP contains some basic forecasts that can be used to test the functionality of the package. This forecast has already +been filtered to the California RELM region. + +.. GENERATED FROM PYTHON SOURCE LINES 34-37 + +.. code-block:: Python + + + forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname) + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 38-45 + +Define spatial and magnitude regions +------------------------------------ + +Before we can compute the bin-wise rates, we need to define a spatial region and a set of magnitude bin edges. The magnitude +bin edges are the lower bound (inclusive), except for the last bin, which is treated as extending to infinity. We can +bind these to the forecast object. This can also be done by passing them as keyword arguments +into :func:`csep.load_catalog_forecast`. + +.. GENERATED FROM PYTHON SOURCE LINES 45-58 + +.. code-block:: Python + + + # Magnitude bins properties + min_mw = 4.95 + max_mw = 8.95 + dmw = 0.1 + + # Create space and magnitude regions + magnitudes = regions.magnitude_bins(min_mw, max_mw, dmw) + region = regions.california_relm_region() + + # Bind region information to the forecast (this will be used for binning of the catalogs) + forecast.region = regions.create_space_magnitude_region(region, magnitudes) + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 59-64 + +Compute spatial event counts +---------------------------- + +The :class:`csep.core.forecasts.CatalogForecast` provides a method to compute the expected number of events in spatial cells. This +requires a region with magnitude information. +
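+ +As noted above, the region binding can also happen at load time. A minimal sketch of that alternative, assuming the keyword argument is named ``region`` (check the docstring of :func:`csep.load_catalog_forecast` for your version): + +.. code-block:: Python + + # Hypothetical one-step setup; the `region` keyword name is an assumption + space_mag_region = regions.create_space_magnitude_region(region, magnitudes) + forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname, + region=space_mag_region)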
+ +.. GENERATED FROM PYTHON SOURCE LINES 64-68 + +.. code-block:: Python + + + _ = forecast.get_expected_rates(verbose=True) + + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + Processed 1 catalogs in 0.001 seconds + Processed 2 catalogs in 0.002 seconds + Processed 3 catalogs in 0.003 seconds + Processed 4 catalogs in 0.003 seconds + Processed 5 catalogs in 0.004 seconds + Processed 6 catalogs in 0.005 seconds + Processed 7 catalogs in 0.005 seconds + Processed 8 catalogs in 0.006 seconds + Processed 9 catalogs in 0.007 seconds + Processed 10 catalogs in 0.008 seconds + Processed 20 catalogs in 0.014 seconds + Processed 30 catalogs in 0.021 seconds + Processed 40 catalogs in 0.028 seconds + Processed 50 catalogs in 0.035 seconds + Processed 60 catalogs in 0.042 seconds + Processed 70 catalogs in 0.048 seconds + Processed 80 catalogs in 0.054 seconds + Processed 90 catalogs in 0.061 seconds + Processed 100 catalogs in 0.067 seconds + Processed 200 catalogs in 0.129 seconds + Processed 300 catalogs in 0.194 seconds + Processed 400 catalogs in 0.258 seconds + Processed 500 catalogs in 0.351 seconds + Processed 600 catalogs in 0.414 seconds + Processed 700 catalogs in 0.478 seconds + Processed 800 catalogs in 0.574 seconds + Processed 900 catalogs in 0.636 seconds + Processed 1000 catalogs in 0.699 seconds + Processed 2000 catalogs in 1.473 seconds + Processed 3000 catalogs in 2.205 seconds + Processed 4000 catalogs in 2.973 seconds + Processed 5000 catalogs in 3.749 seconds + Processed 6000 catalogs in 4.513 seconds + Processed 7000 catalogs in 5.284 seconds + Processed 8000 catalogs in 6.000 seconds + Processed 9000 catalogs in 6.814 seconds + Processed 10000 catalogs in 7.559 seconds + + + + +.. GENERATED FROM PYTHON SOURCE LINES 69-73 + +Plot expected event counts +-------------------------- + +We can plot the expected event counts the same way that we plot a :class:`csep.core.forecasts.GriddedForecast` + +.. GENERATED FROM PYTHON SOURCE LINES 73-76 + +.. code-block:: Python + + + ax = forecast.expected_rates.plot(plot_args={'clim': [-3.5, 0]}, show=True) + + + + +.. image-sg:: /tutorials/images/sphx_glr_working_with_catalog_forecasts_001.png + :alt: ucerf3-landers + :srcset: /tutorials/images/sphx_glr_working_with_catalog_forecasts_001.png + :class: sphx-glr-single-img + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 77-78 + +The holes in the image are due to under-sampling in the forecast. + +.. GENERATED FROM PYTHON SOURCE LINES 80-86 + +Quick sanity check +------------------ + +The forecasts were filtered to the spatial region, so all events should be binned. We loop through each catalog in the forecast, +count the number of events, and compare that with the expected rates. The expected rate is an average in each space-magnitude bin, so +we have to multiply this value by the number of catalogs in the forecast. + +.. GENERATED FROM PYTHON SOURCE LINES 86-91 + +.. code-block:: Python + + + total_events = 0 + for catalog in forecast: + total_events += catalog.event_count + numpy.testing.assert_allclose(total_events, forecast.expected_rates.sum() * forecast.n_cat) + + + + + + + + +.. rst-class:: sphx-glr-timing + + **Total running time of the script:** (0 minutes 9.227 seconds) + + +.. _sphx_glr_download_tutorials_working_with_catalog_forecasts.py: + +.. only:: html + + .. container:: sphx-glr-footer sphx-glr-footer-example + + .. 
container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: working_with_catalog_forecasts.ipynb ` + + .. container:: sphx-glr-download sphx-glr-download-python + + :download:`Download Python source code: working_with_catalog_forecasts.py ` + + .. container:: sphx-glr-download sphx-glr-download-zip + + :download:`Download zipped: working_with_catalog_forecasts.zip ` + + +.. only:: html + + .. rst-class:: sphx-glr-signature + + `Gallery generated by Sphinx-Gallery `_ diff --git a/_static/CSEP2_Logo_CMYK.png b/_static/CSEP2_Logo_CMYK.png new file mode 100644 index 00000000..01519a6e Binary files /dev/null and b/_static/CSEP2_Logo_CMYK.png differ diff --git a/_static/_sphinx_javascript_frameworks_compat.js b/_static/_sphinx_javascript_frameworks_compat.js new file mode 100644 index 00000000..81415803 --- /dev/null +++ b/_static/_sphinx_javascript_frameworks_compat.js @@ -0,0 +1,123 @@ +/* Compatability shim for jQuery and underscores.js. + * + * Copyright Sphinx contributors + * Released under the two clause BSD licence + */ + +/** + * small helper function to urldecode strings + * + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent#Decoding_query_parameters_from_a_URL + */ +jQuery.urldecode = function(x) { + if (!x) { + return x + } + return decodeURIComponent(x.replace(/\+/g, ' ')); +}; + +/** + * small helper function to urlencode strings + */ +jQuery.urlencode = encodeURIComponent; + +/** + * This function returns the parsed url parameters of the + * current request. Multiple values per key are supported, + * it will always return arrays of strings for the value parts. + */ +jQuery.getQueryParameters = function(s) { + if (typeof s === 'undefined') + s = document.location.search; + var parts = s.substr(s.indexOf('?') + 1).split('&'); + var result = {}; + for (var i = 0; i < parts.length; i++) { + var tmp = parts[i].split('=', 2); + var key = jQuery.urldecode(tmp[0]); + var value = jQuery.urldecode(tmp[1]); + if (key in result) + result[key].push(value); + else + result[key] = [value]; + } + return result; +}; + +/** + * highlight a given string on a jquery object by wrapping it in + * span elements with the given class name. 
+ */ +jQuery.fn.highlightText = function(text, className) { + function highlight(node, addItems) { + if (node.nodeType === 3) { + var val = node.nodeValue; + var pos = val.toLowerCase().indexOf(text); + if (pos >= 0 && + !jQuery(node.parentNode).hasClass(className) && + !jQuery(node.parentNode).hasClass("nohighlight")) { + var span; + var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.className = className; + } + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + node.parentNode.insertBefore(span, node.parentNode.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling)); + node.nodeValue = val.substr(0, pos); + if (isInSVG) { + var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); + var bbox = node.parentElement.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute('class', className); + addItems.push({ + "parent": node.parentNode, + "target": rect}); + } + } + } + else if (!jQuery(node).is("button, select, textarea")) { + jQuery.each(node.childNodes, function() { + highlight(this, addItems); + }); + } + } + var addItems = []; + var result = this.each(function() { + highlight(this, addItems); + }); + for (var i = 0; i < addItems.length; ++i) { + jQuery(addItems[i].parent).before(addItems[i].target); + } + return result; +}; + +/* + * backward compatibility for jQuery.browser + * This will be supported until firefox bug is fixed. + */ +if (!jQuery.browser) { + jQuery.uaMatch = function(ua) { + ua = ua.toLowerCase(); + + var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || + /(webkit)[ \/]([\w.]+)/.exec(ua) || + /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || + /(msie) ([\w.]+)/.exec(ua) || + ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || + []; + + return { + browser: match[ 1 ] || "", + version: match[ 2 ] || "0" + }; + }; + jQuery.browser = {}; + jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; +} diff --git a/_static/basic.css b/_static/basic.css new file mode 100644 index 00000000..f316efcb --- /dev/null +++ b/_static/basic.css @@ -0,0 +1,925 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +div.section::after { + display: block; + content: ''; + clear: left; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 230px; + margin-left: -100%; + font-size: 90%; + word-wrap: break-word; + overflow-wrap : break-word; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + margin-top: 10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar #searchbox form.search { + overflow: hidden; +} + +div.sphinxsidebar #searchbox input[type="text"] { + float: left; + width: 80%; + padding: 0.25em; + box-sizing: border-box; +} + +div.sphinxsidebar #searchbox input[type="submit"] { + float: left; + width: 20%; + border-left: none; + padding: 0.25em; + box-sizing: border-box; +} + + +img { + border: 0; + max-width: 100%; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li p.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; + margin-left: auto; + margin-right: auto; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable ul { + margin-top: 0; + margin-bottom: 0; + list-style-type: none; +} + +table.indextable > tbody > tr > td > ul { + padding-left: 0em; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- domain module index --------------------------------------------------- */ + +table.modindextable td { + padding: 2px; + border-collapse: collapse; +} + +/* -- general body styles --------------------------------------------------- */ + 
+div.body { + min-width: 360px; + max-width: 800px; +} + +div.body p, div.body dd, div.body li, div.body blockquote { + -moz-hyphens: auto; + -ms-hyphens: auto; + -webkit-hyphens: auto; + hyphens: auto; +} + +a.headerlink { + visibility: hidden; +} + +a:visited { + color: #551A8B; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink, +caption:hover > a.headerlink, +p.caption:hover > a.headerlink, +div.code-block-caption:hover > a.headerlink { + visibility: visible; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, figure.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, figure.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, figure.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +img.align-default, figure.align-default, .figure.align-default { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + text-align: left; +} + +.align-center { + text-align: center; +} + +.align-default { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- */ + +div.sidebar, +aside.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px; + background-color: #ffe; + width: 40%; + float: right; + clear: right; + overflow-x: auto; +} + +p.sidebar-title { + font-weight: bold; +} + +nav.contents, +aside.topic, +div.admonition, div.topic, blockquote { + clear: left; +} + +/* -- topics ---------------------------------------------------------------- */ + +nav.contents, +aside.topic, +div.topic { + border: 1px solid #ccc; + padding: 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition dt { + font-weight: bold; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- content of sidebars/topics/admonitions -------------------------------- */ + +div.sidebar > :last-child, +aside.sidebar > :last-child, +nav.contents > :last-child, +aside.topic > :last-child, +div.topic > :last-child, +div.admonition > :last-child { + margin-bottom: 0; +} + +div.sidebar::after, +aside.sidebar::after, +nav.contents::after, +aside.topic::after, +div.topic::after, +div.admonition::after, +blockquote::after { + display: block; + content: ''; + clear: both; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.docutils { + margin-top: 10px; + margin-bottom: 10px; + border: 0; + border-collapse: collapse; +} + +table.align-center { + margin-left: auto; + margin-right: auto; +} + +table.align-default { + margin-left: auto; + margin-right: auto; +} + +table caption span.caption-number { + font-style: italic; +} + +table caption span.caption-text { +} + +table.docutils td, table.docutils th { + padding: 1px 
8px 1px 5px; + border-top: 0; + border-left: 0; + border-right: 0; + border-bottom: 1px solid #aaa; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +th > :first-child, +td > :first-child { + margin-top: 0px; +} + +th > :last-child, +td > :last-child { + margin-bottom: 0px; +} + +/* -- figures --------------------------------------------------------------- */ + +div.figure, figure { + margin: 0.5em; + padding: 0.5em; +} + +div.figure p.caption, figcaption { + padding: 0.3em; +} + +div.figure p.caption span.caption-number, +figcaption span.caption-number { + font-style: italic; +} + +div.figure p.caption span.caption-text, +figcaption span.caption-text { +} + +/* -- field list styles ----------------------------------------------------- */ + +table.field-list td, table.field-list th { + border: 0 !important; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.field-name { + -moz-hyphens: manual; + -ms-hyphens: manual; + -webkit-hyphens: manual; + hyphens: manual; +} + +/* -- hlist styles ---------------------------------------------------------- */ + +table.hlist { + margin: 1em 0; +} + +table.hlist td { + vertical-align: top; +} + +/* -- object description styles --------------------------------------------- */ + +.sig { + font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; +} + +.sig-name, code.descname { + background-color: transparent; + font-weight: bold; +} + +.sig-name { + font-size: 1.1em; +} + +code.descname { + font-size: 1.2em; +} + +.sig-prename, code.descclassname { + background-color: transparent; +} + +.optional { + font-size: 1.3em; +} + +.sig-paren { + font-size: larger; +} + +.sig-param.n { + font-style: italic; +} + +/* C++ specific styling */ + +.sig-inline.c-texpr, +.sig-inline.cpp-texpr { + font-family: unset; +} + +.sig.c .k, .sig.c .kt, +.sig.cpp .k, .sig.cpp .kt { + color: #0033B3; +} + +.sig.c .m, +.sig.cpp .m { + color: #1750EB; +} + +.sig.c .s, .sig.c .sc, +.sig.cpp .s, .sig.cpp .sc { + color: #067D17; +} + + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +:not(li) > ol > li:first-child > :first-child, +:not(li) > ul > li:first-child > :first-child { + margin-top: 0px; +} + +:not(li) > ol > li:last-child > :last-child, +:not(li) > ul > li:last-child > :last-child { + margin-bottom: 0px; +} + +ol.simple ol p, +ol.simple ul p, +ul.simple ol p, +ul.simple ul p { + margin-top: 0; +} + +ol.simple > li:not(:first-child) > p, +ul.simple > li:not(:first-child) > p { + margin-top: 0; +} + +ol.simple p, +ul.simple p { + margin-bottom: 0; +} + +aside.footnote > span, +div.citation > span { + float: left; +} +aside.footnote > span:last-of-type, +div.citation > span:last-of-type { + padding-right: 0.5em; +} +aside.footnote > p { + margin-left: 2em; +} +div.citation > p { + margin-left: 4em; +} +aside.footnote > p:last-of-type, +div.citation > p:last-of-type { + margin-bottom: 0em; +} +aside.footnote > p:last-of-type:after, +div.citation > p:last-of-type:after { + content: ""; + clear: both; +} + +dl.field-list { + display: grid; + grid-template-columns: fit-content(30%) auto; +} + 
+dl.field-list > dt { + font-weight: bold; + word-break: break-word; + padding-left: 0.5em; + padding-right: 5px; +} + +dl.field-list > dd { + padding-left: 0.5em; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0em; +} + +dl { + margin-bottom: 15px; +} + +dd > :first-child { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +.sig dd { + margin-top: 0px; + margin-bottom: 0px; +} + +.sig dl { + margin-top: 0px; + margin-bottom: 0px; +} + +dl > dd:last-child, +dl > dd:last-child > :last-child { + margin-bottom: 0; +} + +dt:target, span.highlighted { + background-color: #fbe54e; +} + +rect.highlighted { + fill: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +.classifier:before { + font-style: normal; + margin: 0 0.5em; + content: ":"; + display: inline-block; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +.translated { + background-color: rgba(207, 255, 207, 0.2) +} + +.untranslated { + background-color: rgba(255, 207, 207, 0.2) +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +pre, div[class*="highlight-"] { + clear: both; +} + +span.pre { + -moz-hyphens: none; + -ms-hyphens: none; + -webkit-hyphens: none; + hyphens: none; + white-space: nowrap; +} + +div[class*="highlight-"] { + margin: 1em 0; +} + +td.linenos pre { + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + display: block; +} + +table.highlighttable tbody { + display: block; +} + +table.highlighttable tr { + display: flex; +} + +table.highlighttable td { + margin: 0; + padding: 0; +} + +table.highlighttable td.linenos { + padding-right: 0.5em; +} + +table.highlighttable td.code { + flex: 1; + overflow: hidden; +} + +.highlight .hll { + display: block; +} + +div.highlight pre, +table.highlighttable pre { + margin: 0; +} + +div.code-block-caption + div { + margin-top: 0; +} + +div.code-block-caption { + margin-top: 1em; + padding: 2px 5px; + font-size: small; +} + +div.code-block-caption code { + background-color: transparent; +} + +table.highlighttable td.linenos, +span.linenos, +div.highlight span.gp { /* gp: Generic.Prompt */ + user-select: none; + -webkit-user-select: text; /* Safari fallback only */ + -webkit-user-select: none; /* Chrome/Safari */ + -moz-user-select: none; /* Firefox */ + -ms-user-select: none; /* IE10+ */ +} + +div.code-block-caption span.caption-number { + padding: 0.1em 0.3em; + font-style: italic; +} + +div.code-block-caption span.caption-text { +} + +div.literal-block-wrapper { + margin: 1em 0; +} + +code.xref, a code { + background-color: transparent; + font-weight: bold; +} + +h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + +.viewcode-back { + float: right; + font-family: sans-serif; +} 
+ +div.viewcode-block:target { + margin: -1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + +span.eqno a.headerlink { + position: absolute; + z-index: 1; +} + +div.math:hover a.headerlink { + visibility: visible; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} \ No newline at end of file diff --git a/_static/binder_badge_logo.svg b/_static/binder_badge_logo.svg new file mode 100644 index 00000000..327f6b63 --- /dev/null +++ b/_static/binder_badge_logo.svg @@ -0,0 +1 @@ + launchlaunchbinderbinder \ No newline at end of file diff --git a/_static/broken_example.png b/_static/broken_example.png new file mode 100644 index 00000000..4fea24e7 Binary files /dev/null and b/_static/broken_example.png differ diff --git a/_static/css/badge_only.css b/_static/css/badge_only.css new file mode 100644 index 00000000..88ba55b9 --- /dev/null +++ b/_static/css/badge_only.css @@ -0,0 +1 @@ +.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}@font-face{font-family:FontAwesome;font-style:normal;font-weight:400;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#FontAwesome) format("svg")}.fa:before{font-family:FontAwesome;font-style:normal;font-weight:400;line-height:1}.fa:before,a .fa{text-decoration:inherit}.fa:before,a .fa,li .fa{display:inline-block}li .fa-large:before{width:1.875em}ul.fas{list-style-type:none;margin-left:2em;text-indent:-.8em}ul.fas li .fa{width:.8em}ul.fas li .fa-large:before{vertical-align:baseline}.fa-book:before,.icon-book:before{content:"\f02d"}.fa-caret-down:before,.icon-caret-down:before{content:"\f0d7"}.fa-caret-up:before,.icon-caret-up:before{content:"\f0d8"}.fa-caret-left:before,.icon-caret-left:before{content:"\f0d9"}.fa-caret-right:before,.icon-caret-right:before{content:"\f0da"}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60}.rst-versions .rst-current-version:after{clear:both;content:"";display:block}.rst-versions .rst-current-version .fa{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions 
.rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions .rst-other-versions .rtd-current-item{font-weight:700}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}#flyout-search-form{padding:6px} \ No newline at end of file diff --git a/_static/css/fonts/Roboto-Slab-Bold.woff b/_static/css/fonts/Roboto-Slab-Bold.woff new file mode 100644 index 00000000..6cb60000 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Bold.woff differ diff --git a/_static/css/fonts/Roboto-Slab-Bold.woff2 b/_static/css/fonts/Roboto-Slab-Bold.woff2 new file mode 100644 index 00000000..7059e231 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Bold.woff2 differ diff --git a/_static/css/fonts/Roboto-Slab-Regular.woff b/_static/css/fonts/Roboto-Slab-Regular.woff new file mode 100644 index 00000000..f815f63f Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Regular.woff differ diff --git a/_static/css/fonts/Roboto-Slab-Regular.woff2 b/_static/css/fonts/Roboto-Slab-Regular.woff2 new file mode 100644 index 00000000..f2c76e5b Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Regular.woff2 differ diff --git a/_static/css/fonts/fontawesome-webfont.eot b/_static/css/fonts/fontawesome-webfont.eot new file mode 100644 index 00000000..e9f60ca9 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.eot differ diff --git a/_static/css/fonts/fontawesome-webfont.svg b/_static/css/fonts/fontawesome-webfont.svg new file mode 100644 index 00000000..855c845e --- /dev/null +++ b/_static/css/fonts/fontawesome-webfont.svg @@ -0,0 +1,2671 @@ + + + + +Created by FontForge 20120731 at Mon Oct 24 17:37:40 2016 + By ,,, +Copyright Dave Gandy 2016. All rights reserved. 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/_static/css/fonts/fontawesome-webfont.ttf b/_static/css/fonts/fontawesome-webfont.ttf new file mode 100644 index 00000000..35acda2f Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.ttf differ diff --git a/_static/css/fonts/fontawesome-webfont.woff b/_static/css/fonts/fontawesome-webfont.woff new file mode 100644 index 00000000..400014a4 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.woff differ diff --git a/_static/css/fonts/fontawesome-webfont.woff2 b/_static/css/fonts/fontawesome-webfont.woff2 new file mode 100644 index 00000000..4d13fc60 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.woff2 differ diff --git a/_static/css/fonts/lato-bold-italic.woff b/_static/css/fonts/lato-bold-italic.woff new file mode 100644 index 00000000..88ad05b9 Binary files /dev/null and b/_static/css/fonts/lato-bold-italic.woff differ diff --git a/_static/css/fonts/lato-bold-italic.woff2 b/_static/css/fonts/lato-bold-italic.woff2 new file mode 100644 index 00000000..c4e3d804 Binary files /dev/null and b/_static/css/fonts/lato-bold-italic.woff2 differ diff --git a/_static/css/fonts/lato-bold.woff b/_static/css/fonts/lato-bold.woff new file mode 100644 index 00000000..c6dff51f Binary files /dev/null and b/_static/css/fonts/lato-bold.woff differ diff --git a/_static/css/fonts/lato-bold.woff2 b/_static/css/fonts/lato-bold.woff2 new file mode 100644 index 00000000..bb195043 Binary files /dev/null and b/_static/css/fonts/lato-bold.woff2 differ diff --git a/_static/css/fonts/lato-normal-italic.woff b/_static/css/fonts/lato-normal-italic.woff new file mode 100644 index 00000000..76114bc0 Binary files /dev/null and b/_static/css/fonts/lato-normal-italic.woff differ diff --git a/_static/css/fonts/lato-normal-italic.woff2 b/_static/css/fonts/lato-normal-italic.woff2 new file mode 100644 index 00000000..3404f37e Binary files /dev/null and b/_static/css/fonts/lato-normal-italic.woff2 differ diff --git a/_static/css/fonts/lato-normal.woff b/_static/css/fonts/lato-normal.woff new file mode 100644 index 00000000..ae1307ff Binary files /dev/null and 
b/_static/css/fonts/lato-normal.woff differ diff --git a/_static/css/fonts/lato-normal.woff2 b/_static/css/fonts/lato-normal.woff2 new file mode 100644 index 00000000..3bf98433 Binary files /dev/null and b/_static/css/fonts/lato-normal.woff2 differ diff --git a/_static/css/theme.css b/_static/css/theme.css new file mode 100644 index 00000000..6843d97b --- /dev/null +++ b/_static/css/theme.css @@ -0,0 +1,4 @@ +html{box-sizing:border-box}*,:after,:before{box-sizing:inherit}article,aside,details,figcaption,figure,footer,header,hgroup,nav,section{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}[hidden],audio:not([controls]){display:none}*{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}html{font-size:100%;-webkit-text-size-adjust:100%;-ms-text-size-adjust:100%}body{margin:0}a:active,a:hover{outline:0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:700}blockquote{margin:0}dfn{font-style:italic}ins{background:#ff9;text-decoration:none}ins,mark{color:#000}mark{background:#ff0;font-style:italic;font-weight:700}.rst-content code,.rst-content tt,code,kbd,pre,samp{font-family:monospace,serif;_font-family:courier new,monospace;font-size:1em}pre{white-space:pre}q{quotes:none}q:after,q:before{content:"";content:none}small{font-size:85%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-.5em}sub{bottom:-.25em}dl,ol,ul{margin:0;padding:0;list-style:none;list-style-image:none}li{list-style:none}dd{margin:0}img{border:0;-ms-interpolation-mode:bicubic;vertical-align:middle;max-width:100%}svg:not(:root){overflow:hidden}figure,form{margin:0}label{cursor:pointer}button,input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}button,input{line-height:normal}button,input[type=button],input[type=reset],input[type=submit]{cursor:pointer;-webkit-appearance:button;*overflow:visible}button[disabled],input[disabled]{cursor:default}input[type=search]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}textarea{resize:vertical}table{border-collapse:collapse;border-spacing:0}td{vertical-align:top}.chromeframe{margin:.2em 0;background:#ccc;color:#000;padding:.2em 0}.ir{display:block;border:0;text-indent:-999em;overflow:hidden;background-color:transparent;background-repeat:no-repeat;text-align:left;direction:ltr;*line-height:0}.ir br{display:none}.hidden{display:none!important;visibility:hidden}.visuallyhidden{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}.invisible{visibility:hidden}.relative{position:relative}big,small{font-size:100%}@media print{body,html,section{background:none!important}*{box-shadow:none!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}a,a:visited{text-decoration:underline}.ir a:after,a[href^="#"]:after,a[href^="javascript:"]:after{content:""}blockquote,pre{page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}.rst-content .toctree-wrapper>p.caption,h2,h3,p{orphans:3;widows:3}.rst-content .toctree-wrapper>p.caption,h2,h3{page-break-after:avoid}}.btn,.fa:before,.icon:before,.rst-content .admonition,.rst-content .admonition-title:before,.rst-content .admonition-todo,.rst-content 
.attention,.rst-content .caution,.rst-content .code-block-caption .headerlink:before,.rst-content .danger,.rst-content .eqno .headerlink:before,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-alert,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before,input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week],select,textarea{-webkit-font-smoothing:antialiased}.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}/*!
+ * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome
+ * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License)
+ */@font-face{font-family:FontAwesome;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713);src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix&v=4.7.0) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#fontawesomeregular) format("svg");font-weight:400;font-style:normal}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{display:inline-block;font:normal normal normal 14px/1
FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-15%}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-fw{width:1.28571em;text-align:center}.fa-ul{padding-left:0;margin-left:2.14286em;list-style-type:none}.fa-ul>li{position:relative}.fa-li{position:absolute;left:-2.14286em;width:2.14286em;top:.14286em;text-align:center}.fa-li.fa-lg{left:-1.85714em}.fa-border{padding:.2em .25em .15em;border:.08em solid #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa-pull-left.icon,.fa.fa-pull-left,.rst-content .code-block-caption .fa-pull-left.headerlink,.rst-content .eqno .fa-pull-left.headerlink,.rst-content .fa-pull-left.admonition-title,.rst-content code.download span.fa-pull-left:first-child,.rst-content dl dt .fa-pull-left.headerlink,.rst-content h1 .fa-pull-left.headerlink,.rst-content h2 .fa-pull-left.headerlink,.rst-content h3 .fa-pull-left.headerlink,.rst-content h4 .fa-pull-left.headerlink,.rst-content h5 .fa-pull-left.headerlink,.rst-content h6 .fa-pull-left.headerlink,.rst-content p .fa-pull-left.headerlink,.rst-content table>caption .fa-pull-left.headerlink,.rst-content tt.download span.fa-pull-left:first-child,.wy-menu-vertical li.current>a button.fa-pull-left.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-left.toctree-expand,.wy-menu-vertical li button.fa-pull-left.toctree-expand{margin-right:.3em}.fa-pull-right.icon,.fa.fa-pull-right,.rst-content .code-block-caption .fa-pull-right.headerlink,.rst-content .eqno .fa-pull-right.headerlink,.rst-content .fa-pull-right.admonition-title,.rst-content code.download span.fa-pull-right:first-child,.rst-content dl dt .fa-pull-right.headerlink,.rst-content h1 .fa-pull-right.headerlink,.rst-content h2 .fa-pull-right.headerlink,.rst-content h3 .fa-pull-right.headerlink,.rst-content h4 .fa-pull-right.headerlink,.rst-content h5 .fa-pull-right.headerlink,.rst-content h6 .fa-pull-right.headerlink,.rst-content p .fa-pull-right.headerlink,.rst-content table>caption .fa-pull-right.headerlink,.rst-content tt.download span.fa-pull-right:first-child,.wy-menu-vertical li.current>a button.fa-pull-right.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-right.toctree-expand,.wy-menu-vertical li button.fa-pull-right.toctree-expand{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left,.pull-left.icon,.rst-content .code-block-caption .pull-left.headerlink,.rst-content .eqno .pull-left.headerlink,.rst-content .pull-left.admonition-title,.rst-content code.download span.pull-left:first-child,.rst-content dl dt .pull-left.headerlink,.rst-content h1 .pull-left.headerlink,.rst-content h2 .pull-left.headerlink,.rst-content h3 .pull-left.headerlink,.rst-content h4 .pull-left.headerlink,.rst-content h5 .pull-left.headerlink,.rst-content h6 .pull-left.headerlink,.rst-content p .pull-left.headerlink,.rst-content table>caption .pull-left.headerlink,.rst-content tt.download span.pull-left:first-child,.wy-menu-vertical li.current>a button.pull-left.toctree-expand,.wy-menu-vertical li.on a button.pull-left.toctree-expand,.wy-menu-vertical li button.pull-left.toctree-expand{margin-right:.3em}.fa.pull-right,.pull-right.icon,.rst-content .code-block-caption .pull-right.headerlink,.rst-content .eqno .pull-right.headerlink,.rst-content .pull-right.admonition-title,.rst-content code.download span.pull-right:first-child,.rst-content dl dt 
.pull-right.headerlink,.rst-content h1 .pull-right.headerlink,.rst-content h2 .pull-right.headerlink,.rst-content h3 .pull-right.headerlink,.rst-content h4 .pull-right.headerlink,.rst-content h5 .pull-right.headerlink,.rst-content h6 .pull-right.headerlink,.rst-content p .pull-right.headerlink,.rst-content table>caption .pull-right.headerlink,.rst-content tt.download span.pull-right:first-child,.wy-menu-vertical li.current>a button.pull-right.toctree-expand,.wy-menu-vertical li.on a button.pull-right.toctree-expand,.wy-menu-vertical li button.pull-right.toctree-expand{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);-ms-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scaleY(-1);-ms-transform:scaleY(-1);transform:scaleY(-1)}:root .fa-flip-horizontal,:root .fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:""}.fa-music:before{content:""}.fa-search:before,.icon-search:before{content:""}.fa-envelope-o:before{content:""}.fa-heart:before{content:""}.fa-star:before{content:""}.fa-star-o:before{content:""}.fa-user:before{content:""}.fa-film:before{content:""}.fa-th-large:before{content:""}.fa-th:before{content:""}.fa-th-list:before{content:""}.fa-check:before{content:""}.fa-close:before,.fa-remove:before,.fa-times:before{content:""}.fa-search-plus:before{content:""}.fa-search-minus:before{content:""}.fa-power-off:before{content:""}.fa-signal:before{content:""}.fa-cog:before,.fa-gear:before{content:""}.fa-trash-o:before{content:""}.fa-home:before,.icon-home:before{content:""}.fa-file-o:before{content:""}.fa-clock-o:before{content:""}.fa-road:before{content:""}.fa-download:before,.rst-content code.download span:first-child:before,.rst-content tt.download 
span:first-child:before{content:""}.fa-arrow-circle-o-down:before{content:""}.fa-arrow-circle-o-up:before{content:""}.fa-inbox:before{content:""}.fa-play-circle-o:before{content:""}.fa-repeat:before,.fa-rotate-right:before{content:""}.fa-refresh:before{content:""}.fa-list-alt:before{content:""}.fa-lock:before{content:""}.fa-flag:before{content:""}.fa-headphones:before{content:""}.fa-volume-off:before{content:""}.fa-volume-down:before{content:""}.fa-volume-up:before{content:""}.fa-qrcode:before{content:""}.fa-barcode:before{content:""}.fa-tag:before{content:""}.fa-tags:before{content:""}.fa-book:before,.icon-book:before{content:""}.fa-bookmark:before{content:""}.fa-print:before{content:""}.fa-camera:before{content:""}.fa-font:before{content:""}.fa-bold:before{content:""}.fa-italic:before{content:""}.fa-text-height:before{content:""}.fa-text-width:before{content:""}.fa-align-left:before{content:""}.fa-align-center:before{content:""}.fa-align-right:before{content:""}.fa-align-justify:before{content:""}.fa-list:before{content:""}.fa-dedent:before,.fa-outdent:before{content:""}.fa-indent:before{content:""}.fa-video-camera:before{content:""}.fa-image:before,.fa-photo:before,.fa-picture-o:before{content:""}.fa-pencil:before{content:""}.fa-map-marker:before{content:""}.fa-adjust:before{content:""}.fa-tint:before{content:""}.fa-edit:before,.fa-pencil-square-o:before{content:""}.fa-share-square-o:before{content:""}.fa-check-square-o:before{content:""}.fa-arrows:before{content:""}.fa-step-backward:before{content:""}.fa-fast-backward:before{content:""}.fa-backward:before{content:""}.fa-play:before{content:""}.fa-pause:before{content:""}.fa-stop:before{content:""}.fa-forward:before{content:""}.fa-fast-forward:before{content:""}.fa-step-forward:before{content:""}.fa-eject:before{content:""}.fa-chevron-left:before{content:""}.fa-chevron-right:before{content:""}.fa-plus-circle:before{content:""}.fa-minus-circle:before{content:""}.fa-times-circle:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before{content:""}.fa-check-circle:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before{content:""}.fa-question-circle:before{content:""}.fa-info-circle:before{content:""}.fa-crosshairs:before{content:""}.fa-times-circle-o:before{content:""}.fa-check-circle-o:before{content:""}.fa-ban:before{content:""}.fa-arrow-left:before{content:""}.fa-arrow-right:before{content:""}.fa-arrow-up:before{content:""}.fa-arrow-down:before{content:""}.fa-mail-forward:before,.fa-share:before{content:""}.fa-expand:before{content:""}.fa-compress:before{content:""}.fa-plus:before{content:""}.fa-minus:before{content:""}.fa-asterisk:before{content:""}.fa-exclamation-circle:before,.rst-content .admonition-title:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning 
.wy-input-context:before{content:""}.fa-gift:before{content:""}.fa-leaf:before{content:""}.fa-fire:before,.icon-fire:before{content:""}.fa-eye:before{content:""}.fa-eye-slash:before{content:""}.fa-exclamation-triangle:before,.fa-warning:before{content:""}.fa-plane:before{content:""}.fa-calendar:before{content:""}.fa-random:before{content:""}.fa-comment:before{content:""}.fa-magnet:before{content:""}.fa-chevron-up:before{content:""}.fa-chevron-down:before{content:""}.fa-retweet:before{content:""}.fa-shopping-cart:before{content:""}.fa-folder:before{content:""}.fa-folder-open:before{content:""}.fa-arrows-v:before{content:""}.fa-arrows-h:before{content:""}.fa-bar-chart-o:before,.fa-bar-chart:before{content:""}.fa-twitter-square:before{content:""}.fa-facebook-square:before{content:""}.fa-camera-retro:before{content:""}.fa-key:before{content:""}.fa-cogs:before,.fa-gears:before{content:""}.fa-comments:before{content:""}.fa-thumbs-o-up:before{content:""}.fa-thumbs-o-down:before{content:""}.fa-star-half:before{content:""}.fa-heart-o:before{content:""}.fa-sign-out:before{content:""}.fa-linkedin-square:before{content:""}.fa-thumb-tack:before{content:""}.fa-external-link:before{content:""}.fa-sign-in:before{content:""}.fa-trophy:before{content:""}.fa-github-square:before{content:""}.fa-upload:before{content:""}.fa-lemon-o:before{content:""}.fa-phone:before{content:""}.fa-square-o:before{content:""}.fa-bookmark-o:before{content:""}.fa-phone-square:before{content:""}.fa-twitter:before{content:""}.fa-facebook-f:before,.fa-facebook:before{content:""}.fa-github:before,.icon-github:before{content:""}.fa-unlock:before{content:""}.fa-credit-card:before{content:""}.fa-feed:before,.fa-rss:before{content:""}.fa-hdd-o:before{content:""}.fa-bullhorn:before{content:""}.fa-bell:before{content:""}.fa-certificate:before{content:""}.fa-hand-o-right:before{content:""}.fa-hand-o-left:before{content:""}.fa-hand-o-up:before{content:""}.fa-hand-o-down:before{content:""}.fa-arrow-circle-left:before,.icon-circle-arrow-left:before{content:""}.fa-arrow-circle-right:before,.icon-circle-arrow-right:before{content:""}.fa-arrow-circle-up:before{content:""}.fa-arrow-circle-down:before{content:""}.fa-globe:before{content:""}.fa-wrench:before{content:""}.fa-tasks:before{content:""}.fa-filter:before{content:""}.fa-briefcase:before{content:""}.fa-arrows-alt:before{content:""}.fa-group:before,.fa-users:before{content:""}.fa-chain:before,.fa-link:before,.icon-link:before{content:""}.fa-cloud:before{content:""}.fa-flask:before{content:""}.fa-cut:before,.fa-scissors:before{content:""}.fa-copy:before,.fa-files-o:before{content:""}.fa-paperclip:before{content:""}.fa-floppy-o:before,.fa-save:before{content:""}.fa-square:before{content:""}.fa-bars:before,.fa-navicon:before,.fa-reorder:before{content:""}.fa-list-ul:before{content:""}.fa-list-ol:before{content:""}.fa-strikethrough:before{content:""}.fa-underline:before{content:""}.fa-table:before{content:""}.fa-magic:before{content:""}.fa-truck:before{content:""}.fa-pinterest:before{content:""}.fa-pinterest-square:before{content:""}.fa-google-plus-square:before{content:""}.fa-google-plus:before{content:""}.fa-money:before{content:""}.fa-caret-down:before,.icon-caret-down:before,.wy-dropdown 
.caret:before{content:""}.fa-caret-up:before{content:""}.fa-caret-left:before{content:""}.fa-caret-right:before{content:""}.fa-columns:before{content:""}.fa-sort:before,.fa-unsorted:before{content:""}.fa-sort-desc:before,.fa-sort-down:before{content:""}.fa-sort-asc:before,.fa-sort-up:before{content:""}.fa-envelope:before{content:""}.fa-linkedin:before{content:""}.fa-rotate-left:before,.fa-undo:before{content:""}.fa-gavel:before,.fa-legal:before{content:""}.fa-dashboard:before,.fa-tachometer:before{content:""}.fa-comment-o:before{content:""}.fa-comments-o:before{content:""}.fa-bolt:before,.fa-flash:before{content:""}.fa-sitemap:before{content:""}.fa-umbrella:before{content:""}.fa-clipboard:before,.fa-paste:before{content:""}.fa-lightbulb-o:before{content:""}.fa-exchange:before{content:""}.fa-cloud-download:before{content:""}.fa-cloud-upload:before{content:""}.fa-user-md:before{content:""}.fa-stethoscope:before{content:""}.fa-suitcase:before{content:""}.fa-bell-o:before{content:""}.fa-coffee:before{content:""}.fa-cutlery:before{content:""}.fa-file-text-o:before{content:""}.fa-building-o:before{content:""}.fa-hospital-o:before{content:""}.fa-ambulance:before{content:""}.fa-medkit:before{content:""}.fa-fighter-jet:before{content:""}.fa-beer:before{content:""}.fa-h-square:before{content:""}.fa-plus-square:before{content:""}.fa-angle-double-left:before{content:""}.fa-angle-double-right:before{content:""}.fa-angle-double-up:before{content:""}.fa-angle-double-down:before{content:""}.fa-angle-left:before{content:""}.fa-angle-right:before{content:""}.fa-angle-up:before{content:""}.fa-angle-down:before{content:""}.fa-desktop:before{content:""}.fa-laptop:before{content:""}.fa-tablet:before{content:""}.fa-mobile-phone:before,.fa-mobile:before{content:""}.fa-circle-o:before{content:""}.fa-quote-left:before{content:""}.fa-quote-right:before{content:""}.fa-spinner:before{content:""}.fa-circle:before{content:""}.fa-mail-reply:before,.fa-reply:before{content:""}.fa-github-alt:before{content:""}.fa-folder-o:before{content:""}.fa-folder-open-o:before{content:""}.fa-smile-o:before{content:""}.fa-frown-o:before{content:""}.fa-meh-o:before{content:""}.fa-gamepad:before{content:""}.fa-keyboard-o:before{content:""}.fa-flag-o:before{content:""}.fa-flag-checkered:before{content:""}.fa-terminal:before{content:""}.fa-code:before{content:""}.fa-mail-reply-all:before,.fa-reply-all:before{content:""}.fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:""}.fa-location-arrow:before{content:""}.fa-crop:before{content:""}.fa-code-fork:before{content:""}.fa-chain-broken:before,.fa-unlink:before{content:""}.fa-question:before{content:""}.fa-info:before{content:""}.fa-exclamation:before{content:""}.fa-superscript:before{content:""}.fa-subscript:before{content:""}.fa-eraser:before{content:""}.fa-puzzle-piece:before{content:""}.fa-microphone:before{content:""}.fa-microphone-slash:before{content:""}.fa-shield:before{content:""}.fa-calendar-o:before{content:""}.fa-fire-extinguisher:before{content:""}.fa-rocket:before{content:""}.fa-maxcdn:before{content:""}.fa-chevron-circle-left:before{content:""}.fa-chevron-circle-right:before{content:""}.fa-chevron-circle-up:before{content:""}.fa-chevron-circle-down:before{content:""}.fa-html5:before{content:""}.fa-css3:before{content:""}.fa-anchor:before{content:""}.fa-unlock-alt:before{content:""}.fa-bullseye:before{content:""}.fa-ellipsis-h:before{content:""}.fa-elli
psis-v:before{content:""}.fa-rss-square:before{content:""}.fa-play-circle:before{content:""}.fa-ticket:before{content:""}.fa-minus-square:before{content:""}.fa-minus-square-o:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before{content:""}.fa-level-up:before{content:""}.fa-level-down:before{content:""}.fa-check-square:before{content:""}.fa-pencil-square:before{content:""}.fa-external-link-square:before{content:""}.fa-share-square:before{content:""}.fa-compass:before{content:""}.fa-caret-square-o-down:before,.fa-toggle-down:before{content:""}.fa-caret-square-o-up:before,.fa-toggle-up:before{content:""}.fa-caret-square-o-right:before,.fa-toggle-right:before{content:""}.fa-eur:before,.fa-euro:before{content:""}.fa-gbp:before{content:""}.fa-dollar:before,.fa-usd:before{content:""}.fa-inr:before,.fa-rupee:before{content:""}.fa-cny:before,.fa-jpy:before,.fa-rmb:before,.fa-yen:before{content:""}.fa-rouble:before,.fa-rub:before,.fa-ruble:before{content:""}.fa-krw:before,.fa-won:before{content:""}.fa-bitcoin:before,.fa-btc:before{content:""}.fa-file:before{content:""}.fa-file-text:before{content:""}.fa-sort-alpha-asc:before{content:""}.fa-sort-alpha-desc:before{content:""}.fa-sort-amount-asc:before{content:""}.fa-sort-amount-desc:before{content:""}.fa-sort-numeric-asc:before{content:""}.fa-sort-numeric-desc:before{content:""}.fa-thumbs-up:before{content:""}.fa-thumbs-down:before{content:""}.fa-youtube-square:before{content:""}.fa-youtube:before{content:""}.fa-xing:before{content:""}.fa-xing-square:before{content:""}.fa-youtube-play:before{content:""}.fa-dropbox:before{content:""}.fa-stack-overflow:before{content:""}.fa-instagram:before{content:""}.fa-flickr:before{content:""}.fa-adn:before{content:""}.fa-bitbucket:before,.icon-bitbucket:before{content:""}.fa-bitbucket-square:before{content:""}.fa-tumblr:before{content:""}.fa-tumblr-square:before{content:""}.fa-long-arrow-down:before{content:""}.fa-long-arrow-up:before{content:""}.fa-long-arrow-left:before{content:""}.fa-long-arrow-right:before{content:""}.fa-apple:before{content:""}.fa-windows:before{content:""}.fa-android:before{content:""}.fa-linux:before{content:""}.fa-dribbble:before{content:""}.fa-skype:before{content:""}.fa-foursquare:before{content:""}.fa-trello:before{content:""}.fa-female:before{content:""}.fa-male:before{content:""}.fa-gittip:before,.fa-gratipay:before{content:""}.fa-sun-o:before{content:""}.fa-moon-o:before{content:""}.fa-archive:before{content:""}.fa-bug:before{content:""}.fa-vk:before{content:""}.fa-weibo:before{content:""}.fa-renren:before{content:""}.fa-pagelines:before{content:""}.fa-stack-exchange:before{content:""}.fa-arrow-circle-o-right:before{content:""}.fa-arrow-circle-o-left:before{content:""}.fa-caret-square-o-left:before,.fa-toggle-left:before{content:""}.fa-dot-circle-o:before{content:""}.fa-wheelchair:before{content:""}.fa-vimeo-square:before{content:""}.fa-try:before,.fa-turkish-lira:before{content:""}.fa-plus-square-o:before,.wy-menu-vertical li 
button.toctree-expand:before{content:""}.fa-space-shuttle:before{content:""}.fa-slack:before{content:""}.fa-envelope-square:before{content:""}.fa-wordpress:before{content:""}.fa-openid:before{content:""}.fa-bank:before,.fa-institution:before,.fa-university:before{content:""}.fa-graduation-cap:before,.fa-mortar-board:before{content:""}.fa-yahoo:before{content:""}.fa-google:before{content:""}.fa-reddit:before{content:""}.fa-reddit-square:before{content:""}.fa-stumbleupon-circle:before{content:""}.fa-stumbleupon:before{content:""}.fa-delicious:before{content:""}.fa-digg:before{content:""}.fa-pied-piper-pp:before{content:""}.fa-pied-piper-alt:before{content:""}.fa-drupal:before{content:""}.fa-joomla:before{content:""}.fa-language:before{content:""}.fa-fax:before{content:""}.fa-building:before{content:""}.fa-child:before{content:""}.fa-paw:before{content:""}.fa-spoon:before{content:""}.fa-cube:before{content:""}.fa-cubes:before{content:""}.fa-behance:before{content:""}.fa-behance-square:before{content:""}.fa-steam:before{content:""}.fa-steam-square:before{content:""}.fa-recycle:before{content:""}.fa-automobile:before,.fa-car:before{content:""}.fa-cab:before,.fa-taxi:before{content:""}.fa-tree:before{content:""}.fa-spotify:before{content:""}.fa-deviantart:before{content:""}.fa-soundcloud:before{content:""}.fa-database:before{content:""}.fa-file-pdf-o:before{content:""}.fa-file-word-o:before{content:""}.fa-file-excel-o:before{content:""}.fa-file-powerpoint-o:before{content:""}.fa-file-image-o:before,.fa-file-photo-o:before,.fa-file-picture-o:before{content:""}.fa-file-archive-o:before,.fa-file-zip-o:before{content:""}.fa-file-audio-o:before,.fa-file-sound-o:before{content:""}.fa-file-movie-o:before,.fa-file-video-o:before{content:""}.fa-file-code-o:before{content:""}.fa-vine:before{content:""}.fa-codepen:before{content:""}.fa-jsfiddle:before{content:""}.fa-life-bouy:before,.fa-life-buoy:before,.fa-life-ring:before,.fa-life-saver:before,.fa-support:before{content:""}.fa-circle-o-notch:before{content:""}.fa-ra:before,.fa-rebel:before,.fa-resistance:before{content:""}.fa-empire:before,.fa-ge:before{content:""}.fa-git-square:before{content:""}.fa-git:before{content:""}.fa-hacker-news:before,.fa-y-combinator-square:before,.fa-yc-square:before{content:""}.fa-tencent-weibo:before{content:""}.fa-qq:before{content:""}.fa-wechat:before,.fa-weixin:before{content:""}.fa-paper-plane:before,.fa-send:before{content:""}.fa-paper-plane-o:before,.fa-send-o:before{content:""}.fa-history:before{content:""}.fa-circle-thin:before{content:""}.fa-header:before{content:""}.fa-paragraph:before{content:""}.fa-sliders:before{content:""}.fa-share-alt:before{content:""}.fa-share-alt-square:before{content:""}.fa-bomb:before{content:""}.fa-futbol-o:before,.fa-soccer-ball-o:before{content:""}.fa-tty:before{content:""}.fa-binoculars:before{content:""}.fa-plug:before{content:""}.fa-slideshare:before{content:""}.fa-twitch:before{content:""}.fa-yelp:before{content:""}.fa-newspaper-o:before{content:""}.fa-wifi:before{content:""}.fa-calculator:before{content:""}.fa-paypal:before{content:""}.fa-google-wallet:before{content:""}.fa-cc-visa:before{content:""}.fa-cc-mastercard:before{content:""}.fa-cc-discover:before{content:""}.fa-cc-amex:before{content:""}.fa-cc-paypal:before{content:""}.fa-cc-stripe:before{content:""}.fa-bell-slash:before{content:""}.fa-bell-slash-o:before{content:""}.fa-trash:before{content:""}.fa-copyright:before{content:""}.f
a-at:before{content:""}.fa-eyedropper:before{content:""}.fa-paint-brush:before{content:""}.fa-birthday-cake:before{content:""}.fa-area-chart:before{content:""}.fa-pie-chart:before{content:""}.fa-line-chart:before{content:""}.fa-lastfm:before{content:""}.fa-lastfm-square:before{content:""}.fa-toggle-off:before{content:""}.fa-toggle-on:before{content:""}.fa-bicycle:before{content:""}.fa-bus:before{content:""}.fa-ioxhost:before{content:""}.fa-angellist:before{content:""}.fa-cc:before{content:""}.fa-ils:before,.fa-shekel:before,.fa-sheqel:before{content:""}.fa-meanpath:before{content:""}.fa-buysellads:before{content:""}.fa-connectdevelop:before{content:""}.fa-dashcube:before{content:""}.fa-forumbee:before{content:""}.fa-leanpub:before{content:""}.fa-sellsy:before{content:""}.fa-shirtsinbulk:before{content:""}.fa-simplybuilt:before{content:""}.fa-skyatlas:before{content:""}.fa-cart-plus:before{content:""}.fa-cart-arrow-down:before{content:""}.fa-diamond:before{content:""}.fa-ship:before{content:""}.fa-user-secret:before{content:""}.fa-motorcycle:before{content:""}.fa-street-view:before{content:""}.fa-heartbeat:before{content:""}.fa-venus:before{content:""}.fa-mars:before{content:""}.fa-mercury:before{content:""}.fa-intersex:before,.fa-transgender:before{content:""}.fa-transgender-alt:before{content:""}.fa-venus-double:before{content:""}.fa-mars-double:before{content:""}.fa-venus-mars:before{content:""}.fa-mars-stroke:before{content:""}.fa-mars-stroke-v:before{content:""}.fa-mars-stroke-h:before{content:""}.fa-neuter:before{content:""}.fa-genderless:before{content:""}.fa-facebook-official:before{content:""}.fa-pinterest-p:before{content:""}.fa-whatsapp:before{content:""}.fa-server:before{content:""}.fa-user-plus:before{content:""}.fa-user-times:before{content:""}.fa-bed:before,.fa-hotel:before{content:""}.fa-viacoin:before{content:""}.fa-train:before{content:""}.fa-subway:before{content:""}.fa-medium:before{content:""}.fa-y-combinator:before,.fa-yc:before{content:""}.fa-optin-monster:before{content:""}.fa-opencart:before{content:""}.fa-expeditedssl:before{content:""}.fa-battery-4:before,.fa-battery-full:before,.fa-battery:before{content:""}.fa-battery-3:before,.fa-battery-three-quarters:before{content:""}.fa-battery-2:before,.fa-battery-half:before{content:""}.fa-battery-1:before,.fa-battery-quarter:before{content:""}.fa-battery-0:before,.fa-battery-empty:before{content:""}.fa-mouse-pointer:before{content:""}.fa-i-cursor:before{content:""}.fa-object-group:before{content:""}.fa-object-ungroup:before{content:""}.fa-sticky-note:before{content:""}.fa-sticky-note-o:before{content:""}.fa-cc-jcb:before{content:""}.fa-cc-diners-club:before{content:""}.fa-clone:before{content:""}.fa-balance-scale:before{content:""}.fa-hourglass-o:before{content:""}.fa-hourglass-1:before,.fa-hourglass-start:before{content:""}.fa-hourglass-2:before,.fa-hourglass-half:before{content:""}.fa-hourglass-3:before,.fa-hourglass-end:before{content:""}.fa-hourglass:before{content:""}.fa-hand-grab-o:before,.fa-hand-rock-o:before{content:""}.fa-hand-paper-o:before,.fa-hand-stop-o:before{content:""}.fa-hand-scissors-o:before{content:""}.fa-hand-lizard-o:before{content:""}.fa-hand-spock-o:before{content:""}.fa-hand-pointer-o:before{content:""}.fa-hand-peace-o:before{content:""}.fa-trademark:before{content:""}.fa-registered:before{content:""}.fa-creative-commons:before{content:""}.fa-gg:before{content:""}.fa-gg-circle:before{content:""}.fa-trip
advisor:before{content:""}.fa-odnoklassniki:before{content:""}.fa-odnoklassniki-square:before{content:""}.fa-get-pocket:before{content:""}.fa-wikipedia-w:before{content:""}.fa-safari:before{content:""}.fa-chrome:before{content:""}.fa-firefox:before{content:""}.fa-opera:before{content:""}.fa-internet-explorer:before{content:""}.fa-television:before,.fa-tv:before{content:""}.fa-contao:before{content:""}.fa-500px:before{content:""}.fa-amazon:before{content:""}.fa-calendar-plus-o:before{content:""}.fa-calendar-minus-o:before{content:""}.fa-calendar-times-o:before{content:""}.fa-calendar-check-o:before{content:""}.fa-industry:before{content:""}.fa-map-pin:before{content:""}.fa-map-signs:before{content:""}.fa-map-o:before{content:""}.fa-map:before{content:""}.fa-commenting:before{content:""}.fa-commenting-o:before{content:""}.fa-houzz:before{content:""}.fa-vimeo:before{content:""}.fa-black-tie:before{content:""}.fa-fonticons:before{content:""}.fa-reddit-alien:before{content:""}.fa-edge:before{content:""}.fa-credit-card-alt:before{content:""}.fa-codiepie:before{content:""}.fa-modx:before{content:""}.fa-fort-awesome:before{content:""}.fa-usb:before{content:""}.fa-product-hunt:before{content:""}.fa-mixcloud:before{content:""}.fa-scribd:before{content:""}.fa-pause-circle:before{content:""}.fa-pause-circle-o:before{content:""}.fa-stop-circle:before{content:""}.fa-stop-circle-o:before{content:""}.fa-shopping-bag:before{content:""}.fa-shopping-basket:before{content:""}.fa-hashtag:before{content:""}.fa-bluetooth:before{content:""}.fa-bluetooth-b:before{content:""}.fa-percent:before{content:""}.fa-gitlab:before,.icon-gitlab:before{content:""}.fa-wpbeginner:before{content:""}.fa-wpforms:before{content:""}.fa-envira:before{content:""}.fa-universal-access:before{content:""}.fa-wheelchair-alt:before{content:""}.fa-question-circle-o:before{content:""}.fa-blind:before{content:""}.fa-audio-description:before{content:""}.fa-volume-control-phone:before{content:""}.fa-braille:before{content:""}.fa-assistive-listening-systems:before{content:""}.fa-american-sign-language-interpreting:before,.fa-asl-interpreting:before{content:""}.fa-deaf:before,.fa-deafness:before,.fa-hard-of-hearing:before{content:""}.fa-glide:before{content:""}.fa-glide-g:before{content:""}.fa-sign-language:before,.fa-signing:before{content:""}.fa-low-vision:before{content:""}.fa-viadeo:before{content:""}.fa-viadeo-square:before{content:""}.fa-snapchat:before{content:""}.fa-snapchat-ghost:before{content:""}.fa-snapchat-square:before{content:""}.fa-pied-piper:before{content:""}.fa-first-order:before{content:""}.fa-yoast:before{content:""}.fa-themeisle:before{content:""}.fa-google-plus-circle:before,.fa-google-plus-official:before{content:""}.fa-fa:before,.fa-font-awesome:before{content:""}.fa-handshake-o:before{content:""}.fa-envelope-open:before{content:""}.fa-envelope-open-o:before{content:""}.fa-linode:before{content:""}.fa-address-book:before{content:""}.fa-address-book-o:before{content:""}.fa-address-card:before,.fa-vcard:before{content:""}.fa-address-card-o:before,.fa-vcard-o:before{content:""}.fa-user-circle:before{content:""}.fa-user-circle-o:before{content:""}.fa-user-o:before{content:""}.fa-id-badge:before{content:""}.fa-drivers-license:before,.fa-id-card:before{content:""}.fa-drivers-license-o:before,.fa-id-card-o:before{content:""}.fa-quora:before{content:""}.fa-free-code-camp:before{content:""}.fa-telegram:before{content:""}.fa-thermometer-4:b
efore,.fa-thermometer-full:before,.fa-thermometer:before{content:""}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:""}.fa-thermometer-2:before,.fa-thermometer-half:before{content:""}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:""}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:""}.fa-shower:before{content:""}.fa-bath:before,.fa-bathtub:before,.fa-s15:before{content:""}.fa-podcast:before{content:""}.fa-window-maximize:before{content:""}.fa-window-minimize:before{content:""}.fa-window-restore:before{content:""}.fa-times-rectangle:before,.fa-window-close:before{content:""}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:""}.fa-bandcamp:before{content:""}.fa-grav:before{content:""}.fa-etsy:before{content:""}.fa-imdb:before{content:""}.fa-ravelry:before{content:""}.fa-eercast:before{content:""}.fa-microchip:before{content:""}.fa-snowflake-o:before{content:""}.fa-superpowers:before{content:""}.fa-wpexplorer:before{content:""}.fa-meetup:before{content:""}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-dropdown .caret,.wy-inline-validate.wy-inline-validate-danger .wy-input-context,.wy-inline-validate.wy-inline-validate-info .wy-input-context,.wy-inline-validate.wy-inline-validate-success .wy-input-context,.wy-inline-validate.wy-inline-validate-warning .wy-input-context,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{font-family:inherit}.fa:before,.icon:before,.rst-content .admonition-title:before,.rst-content .code-block-caption .headerlink:before,.rst-content .eqno .headerlink:before,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before{font-family:FontAwesome;display:inline-block;font-style:normal;font-weight:400;line-height:1;text-decoration:inherit}.rst-content .code-block-caption a .headerlink,.rst-content .eqno a .headerlink,.rst-content a 
.admonition-title,.rst-content code.download a span:first-child,.rst-content dl dt a .headerlink,.rst-content h1 a .headerlink,.rst-content h2 a .headerlink,.rst-content h3 a .headerlink,.rst-content h4 a .headerlink,.rst-content h5 a .headerlink,.rst-content h6 a .headerlink,.rst-content p.caption a .headerlink,.rst-content p a .headerlink,.rst-content table>caption a .headerlink,.rst-content tt.download a span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li a button.toctree-expand,a .fa,a .icon,a .rst-content .admonition-title,a .rst-content .code-block-caption .headerlink,a .rst-content .eqno .headerlink,a .rst-content code.download span:first-child,a .rst-content dl dt .headerlink,a .rst-content h1 .headerlink,a .rst-content h2 .headerlink,a .rst-content h3 .headerlink,a .rst-content h4 .headerlink,a .rst-content h5 .headerlink,a .rst-content h6 .headerlink,a .rst-content p.caption .headerlink,a .rst-content p .headerlink,a .rst-content table>caption .headerlink,a .rst-content tt.download span:first-child,a .wy-menu-vertical li button.toctree-expand{display:inline-block;text-decoration:inherit}.btn .fa,.btn .icon,.btn .rst-content .admonition-title,.btn .rst-content .code-block-caption .headerlink,.btn .rst-content .eqno .headerlink,.btn .rst-content code.download span:first-child,.btn .rst-content dl dt .headerlink,.btn .rst-content h1 .headerlink,.btn .rst-content h2 .headerlink,.btn .rst-content h3 .headerlink,.btn .rst-content h4 .headerlink,.btn .rst-content h5 .headerlink,.btn .rst-content h6 .headerlink,.btn .rst-content p .headerlink,.btn .rst-content table>caption .headerlink,.btn .rst-content tt.download span:first-child,.btn .wy-menu-vertical li.current>a button.toctree-expand,.btn .wy-menu-vertical li.on a button.toctree-expand,.btn .wy-menu-vertical li button.toctree-expand,.nav .fa,.nav .icon,.nav .rst-content .admonition-title,.nav .rst-content .code-block-caption .headerlink,.nav .rst-content .eqno .headerlink,.nav .rst-content code.download span:first-child,.nav .rst-content dl dt .headerlink,.nav .rst-content h1 .headerlink,.nav .rst-content h2 .headerlink,.nav .rst-content h3 .headerlink,.nav .rst-content h4 .headerlink,.nav .rst-content h5 .headerlink,.nav .rst-content h6 .headerlink,.nav .rst-content p .headerlink,.nav .rst-content table>caption .headerlink,.nav .rst-content tt.download span:first-child,.nav .wy-menu-vertical li.current>a button.toctree-expand,.nav .wy-menu-vertical li.on a button.toctree-expand,.nav .wy-menu-vertical li button.toctree-expand,.rst-content .btn .admonition-title,.rst-content .code-block-caption .btn .headerlink,.rst-content .code-block-caption .nav .headerlink,.rst-content .eqno .btn .headerlink,.rst-content .eqno .nav .headerlink,.rst-content .nav .admonition-title,.rst-content code.download .btn span:first-child,.rst-content code.download .nav span:first-child,.rst-content dl dt .btn .headerlink,.rst-content dl dt .nav .headerlink,.rst-content h1 .btn .headerlink,.rst-content h1 .nav .headerlink,.rst-content h2 .btn .headerlink,.rst-content h2 .nav .headerlink,.rst-content h3 .btn .headerlink,.rst-content h3 .nav .headerlink,.rst-content h4 .btn .headerlink,.rst-content h4 .nav .headerlink,.rst-content h5 .btn .headerlink,.rst-content h5 .nav .headerlink,.rst-content h6 .btn .headerlink,.rst-content h6 .nav .headerlink,.rst-content p .btn .headerlink,.rst-content p .nav .headerlink,.rst-content table>caption .btn .headerlink,.rst-content 
table>caption .nav .headerlink,.rst-content tt.download .btn span:first-child,.rst-content tt.download .nav span:first-child,.wy-menu-vertical li .btn button.toctree-expand,.wy-menu-vertical li.current>a .btn button.toctree-expand,.wy-menu-vertical li.current>a .nav button.toctree-expand,.wy-menu-vertical li .nav button.toctree-expand,.wy-menu-vertical li.on a .btn button.toctree-expand,.wy-menu-vertical li.on a .nav button.toctree-expand{display:inline}.btn .fa-large.icon,.btn .fa.fa-large,.btn .rst-content .code-block-caption .fa-large.headerlink,.btn .rst-content .eqno .fa-large.headerlink,.btn .rst-content .fa-large.admonition-title,.btn .rst-content code.download span.fa-large:first-child,.btn .rst-content dl dt .fa-large.headerlink,.btn .rst-content h1 .fa-large.headerlink,.btn .rst-content h2 .fa-large.headerlink,.btn .rst-content h3 .fa-large.headerlink,.btn .rst-content h4 .fa-large.headerlink,.btn .rst-content h5 .fa-large.headerlink,.btn .rst-content h6 .fa-large.headerlink,.btn .rst-content p .fa-large.headerlink,.btn .rst-content table>caption .fa-large.headerlink,.btn .rst-content tt.download span.fa-large:first-child,.btn .wy-menu-vertical li button.fa-large.toctree-expand,.nav .fa-large.icon,.nav .fa.fa-large,.nav .rst-content .code-block-caption .fa-large.headerlink,.nav .rst-content .eqno .fa-large.headerlink,.nav .rst-content .fa-large.admonition-title,.nav .rst-content code.download span.fa-large:first-child,.nav .rst-content dl dt .fa-large.headerlink,.nav .rst-content h1 .fa-large.headerlink,.nav .rst-content h2 .fa-large.headerlink,.nav .rst-content h3 .fa-large.headerlink,.nav .rst-content h4 .fa-large.headerlink,.nav .rst-content h5 .fa-large.headerlink,.nav .rst-content h6 .fa-large.headerlink,.nav .rst-content p .fa-large.headerlink,.nav .rst-content table>caption .fa-large.headerlink,.nav .rst-content tt.download span.fa-large:first-child,.nav .wy-menu-vertical li button.fa-large.toctree-expand,.rst-content .btn .fa-large.admonition-title,.rst-content .code-block-caption .btn .fa-large.headerlink,.rst-content .code-block-caption .nav .fa-large.headerlink,.rst-content .eqno .btn .fa-large.headerlink,.rst-content .eqno .nav .fa-large.headerlink,.rst-content .nav .fa-large.admonition-title,.rst-content code.download .btn span.fa-large:first-child,.rst-content code.download .nav span.fa-large:first-child,.rst-content dl dt .btn .fa-large.headerlink,.rst-content dl dt .nav .fa-large.headerlink,.rst-content h1 .btn .fa-large.headerlink,.rst-content h1 .nav .fa-large.headerlink,.rst-content h2 .btn .fa-large.headerlink,.rst-content h2 .nav .fa-large.headerlink,.rst-content h3 .btn .fa-large.headerlink,.rst-content h3 .nav .fa-large.headerlink,.rst-content h4 .btn .fa-large.headerlink,.rst-content h4 .nav .fa-large.headerlink,.rst-content h5 .btn .fa-large.headerlink,.rst-content h5 .nav .fa-large.headerlink,.rst-content h6 .btn .fa-large.headerlink,.rst-content h6 .nav .fa-large.headerlink,.rst-content p .btn .fa-large.headerlink,.rst-content p .nav .fa-large.headerlink,.rst-content table>caption .btn .fa-large.headerlink,.rst-content table>caption .nav .fa-large.headerlink,.rst-content tt.download .btn span.fa-large:first-child,.rst-content tt.download .nav span.fa-large:first-child,.wy-menu-vertical li .btn button.fa-large.toctree-expand,.wy-menu-vertical li .nav button.fa-large.toctree-expand{line-height:.9em}.btn .fa-spin.icon,.btn .fa.fa-spin,.btn .rst-content .code-block-caption .fa-spin.headerlink,.btn .rst-content .eqno .fa-spin.headerlink,.btn .rst-content 
.fa-spin.admonition-title,.btn .rst-content code.download span.fa-spin:first-child,.btn .rst-content dl dt .fa-spin.headerlink,.btn .rst-content h1 .fa-spin.headerlink,.btn .rst-content h2 .fa-spin.headerlink,.btn .rst-content h3 .fa-spin.headerlink,.btn .rst-content h4 .fa-spin.headerlink,.btn .rst-content h5 .fa-spin.headerlink,.btn .rst-content h6 .fa-spin.headerlink,.btn .rst-content p .fa-spin.headerlink,.btn .rst-content table>caption .fa-spin.headerlink,.btn .rst-content tt.download span.fa-spin:first-child,.btn .wy-menu-vertical li button.fa-spin.toctree-expand,.nav .fa-spin.icon,.nav .fa.fa-spin,.nav .rst-content .code-block-caption .fa-spin.headerlink,.nav .rst-content .eqno .fa-spin.headerlink,.nav .rst-content .fa-spin.admonition-title,.nav .rst-content code.download span.fa-spin:first-child,.nav .rst-content dl dt .fa-spin.headerlink,.nav .rst-content h1 .fa-spin.headerlink,.nav .rst-content h2 .fa-spin.headerlink,.nav .rst-content h3 .fa-spin.headerlink,.nav .rst-content h4 .fa-spin.headerlink,.nav .rst-content h5 .fa-spin.headerlink,.nav .rst-content h6 .fa-spin.headerlink,.nav .rst-content p .fa-spin.headerlink,.nav .rst-content table>caption .fa-spin.headerlink,.nav .rst-content tt.download span.fa-spin:first-child,.nav .wy-menu-vertical li button.fa-spin.toctree-expand,.rst-content .btn .fa-spin.admonition-title,.rst-content .code-block-caption .btn .fa-spin.headerlink,.rst-content .code-block-caption .nav .fa-spin.headerlink,.rst-content .eqno .btn .fa-spin.headerlink,.rst-content .eqno .nav .fa-spin.headerlink,.rst-content .nav .fa-spin.admonition-title,.rst-content code.download .btn span.fa-spin:first-child,.rst-content code.download .nav span.fa-spin:first-child,.rst-content dl dt .btn .fa-spin.headerlink,.rst-content dl dt .nav .fa-spin.headerlink,.rst-content h1 .btn .fa-spin.headerlink,.rst-content h1 .nav .fa-spin.headerlink,.rst-content h2 .btn .fa-spin.headerlink,.rst-content h2 .nav .fa-spin.headerlink,.rst-content h3 .btn .fa-spin.headerlink,.rst-content h3 .nav .fa-spin.headerlink,.rst-content h4 .btn .fa-spin.headerlink,.rst-content h4 .nav .fa-spin.headerlink,.rst-content h5 .btn .fa-spin.headerlink,.rst-content h5 .nav .fa-spin.headerlink,.rst-content h6 .btn .fa-spin.headerlink,.rst-content h6 .nav .fa-spin.headerlink,.rst-content p .btn .fa-spin.headerlink,.rst-content p .nav .fa-spin.headerlink,.rst-content table>caption .btn .fa-spin.headerlink,.rst-content table>caption .nav .fa-spin.headerlink,.rst-content tt.download .btn span.fa-spin:first-child,.rst-content tt.download .nav span.fa-spin:first-child,.wy-menu-vertical li .btn button.fa-spin.toctree-expand,.wy-menu-vertical li .nav button.fa-spin.toctree-expand{display:inline-block}.btn.fa:before,.btn.icon:before,.rst-content .btn.admonition-title:before,.rst-content .code-block-caption .btn.headerlink:before,.rst-content .eqno .btn.headerlink:before,.rst-content code.download span.btn:first-child:before,.rst-content dl dt .btn.headerlink:before,.rst-content h1 .btn.headerlink:before,.rst-content h2 .btn.headerlink:before,.rst-content h3 .btn.headerlink:before,.rst-content h4 .btn.headerlink:before,.rst-content h5 .btn.headerlink:before,.rst-content h6 .btn.headerlink:before,.rst-content p .btn.headerlink:before,.rst-content table>caption .btn.headerlink:before,.rst-content tt.download span.btn:first-child:before,.wy-menu-vertical li button.btn.toctree-expand:before{opacity:.5;-webkit-transition:opacity .05s ease-in;-moz-transition:opacity .05s ease-in;transition:opacity .05s 
ease-in}.btn.fa:hover:before,.btn.icon:hover:before,.rst-content .btn.admonition-title:hover:before,.rst-content .code-block-caption .btn.headerlink:hover:before,.rst-content .eqno .btn.headerlink:hover:before,.rst-content code.download span.btn:first-child:hover:before,.rst-content dl dt .btn.headerlink:hover:before,.rst-content h1 .btn.headerlink:hover:before,.rst-content h2 .btn.headerlink:hover:before,.rst-content h3 .btn.headerlink:hover:before,.rst-content h4 .btn.headerlink:hover:before,.rst-content h5 .btn.headerlink:hover:before,.rst-content h6 .btn.headerlink:hover:before,.rst-content p .btn.headerlink:hover:before,.rst-content table>caption .btn.headerlink:hover:before,.rst-content tt.download span.btn:first-child:hover:before,.wy-menu-vertical li button.btn.toctree-expand:hover:before{opacity:1}.btn-mini .fa:before,.btn-mini .icon:before,.btn-mini .rst-content .admonition-title:before,.btn-mini .rst-content .code-block-caption .headerlink:before,.btn-mini .rst-content .eqno .headerlink:before,.btn-mini .rst-content code.download span:first-child:before,.btn-mini .rst-content dl dt .headerlink:before,.btn-mini .rst-content h1 .headerlink:before,.btn-mini .rst-content h2 .headerlink:before,.btn-mini .rst-content h3 .headerlink:before,.btn-mini .rst-content h4 .headerlink:before,.btn-mini .rst-content h5 .headerlink:before,.btn-mini .rst-content h6 .headerlink:before,.btn-mini .rst-content p .headerlink:before,.btn-mini .rst-content table>caption .headerlink:before,.btn-mini .rst-content tt.download span:first-child:before,.btn-mini .wy-menu-vertical li button.toctree-expand:before,.rst-content .btn-mini .admonition-title:before,.rst-content .code-block-caption .btn-mini .headerlink:before,.rst-content .eqno .btn-mini .headerlink:before,.rst-content code.download .btn-mini span:first-child:before,.rst-content dl dt .btn-mini .headerlink:before,.rst-content h1 .btn-mini .headerlink:before,.rst-content h2 .btn-mini .headerlink:before,.rst-content h3 .btn-mini .headerlink:before,.rst-content h4 .btn-mini .headerlink:before,.rst-content h5 .btn-mini .headerlink:before,.rst-content h6 .btn-mini .headerlink:before,.rst-content p .btn-mini .headerlink:before,.rst-content table>caption .btn-mini .headerlink:before,.rst-content tt.download .btn-mini span:first-child:before,.wy-menu-vertical li .btn-mini button.toctree-expand:before{font-size:14px;vertical-align:-15%}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.wy-alert{padding:12px;line-height:24px;margin-bottom:24px;background:#e7f2fa}.rst-content .admonition-title,.wy-alert-title{font-weight:700;display:block;color:#fff;background:#6ab0de;padding:6px 12px;margin:-12px -12px 12px}.rst-content .danger,.rst-content .error,.rst-content .wy-alert-danger.admonition,.rst-content .wy-alert-danger.admonition-todo,.rst-content .wy-alert-danger.attention,.rst-content .wy-alert-danger.caution,.rst-content .wy-alert-danger.hint,.rst-content .wy-alert-danger.important,.rst-content .wy-alert-danger.note,.rst-content .wy-alert-danger.seealso,.rst-content .wy-alert-danger.tip,.rst-content .wy-alert-danger.warning,.wy-alert.wy-alert-danger{background:#fdf3f2}.rst-content .danger .admonition-title,.rst-content .danger .wy-alert-title,.rst-content .error .admonition-title,.rst-content .error .wy-alert-title,.rst-content 
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) 
.property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 .rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel,.rst-content .menuselection{font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content .guilabel,.rst-content .menuselection{border:1px solid #7fbbe3;background:#e7f2fa}.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>.kbd,.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>kbd{color:inherit;font-size:80%;background-color:#fff;border:1px solid #a6a6a6;border-radius:4px;box-shadow:0 2px grey;padding:2.4px 6px;margin:auto 0}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content .sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto 
Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} \ No newline at end of file diff --git a/_static/doctools.js b/_static/doctools.js new file mode 100644 index 00000000..4d67807d --- /dev/null +++ b/_static/doctools.js @@ -0,0 +1,156 @@ +/* + * doctools.js + * ~~~~~~~~~~~ + * + * Base JavaScript utilities for all Sphinx HTML documentation. + * + * :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ + "TEXTAREA", + "INPUT", + "SELECT", + "BUTTON", +]); + +const _ready = (callback) => { + if (document.readyState !== "loading") { + callback(); + } else { + document.addEventListener("DOMContentLoaded", callback); + } +}; + +/** + * Small JavaScript module for the documentation. + */ +const Documentation = { + init: () => { + Documentation.initDomainIndexTable(); + Documentation.initOnKeyListeners(); + }, + + /** + * i18n support + */ + TRANSLATIONS: {}, + PLURAL_EXPR: (n) => (n === 1 ? 0 : 1), + LOCALE: "unknown", + + // gettext and ngettext don't access this so that the functions + // can safely bound to a different name (_ = Documentation.gettext) + gettext: (string) => { + const translated = Documentation.TRANSLATIONS[string]; + switch (typeof translated) { + case "undefined": + return string; // no translation + case "string": + return translated; // translation exists + default: + return translated[0]; // (singular, plural) translation tuple exists + } + }, + + ngettext: (singular, plural, n) => { + const translated = Documentation.TRANSLATIONS[singular]; + if (typeof translated !== "undefined") + return translated[Documentation.PLURAL_EXPR(n)]; + return n === 1 ? 
singular : plural; + }, + + addTranslations: (catalog) => { + Object.assign(Documentation.TRANSLATIONS, catalog.messages); + Documentation.PLURAL_EXPR = new Function( + "n", + `return (${catalog.plural_expr})` + ); + Documentation.LOCALE = catalog.locale; + }, + + /** + * helper function to focus on search bar + */ + focusSearchBar: () => { + document.querySelectorAll("input[name=q]")[0]?.focus(); + }, + + /** + * Initialise the domain index toggle buttons + */ + initDomainIndexTable: () => { + const toggler = (el) => { + const idNumber = el.id.substr(7); + const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`); + if (el.src.substr(-9) === "minus.png") { + el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`; + toggledRows.forEach((el) => (el.style.display = "none")); + } else { + el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`; + toggledRows.forEach((el) => (el.style.display = "")); + } + }; + + const togglerElements = document.querySelectorAll("img.toggler"); + togglerElements.forEach((el) => + el.addEventListener("click", (event) => toggler(event.currentTarget)) + ); + togglerElements.forEach((el) => (el.style.display = "")); + if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler); + }, + + initOnKeyListeners: () => { + // only install a listener if it is really needed + if ( + !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS && + !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS + ) + return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.altKey || event.ctrlKey || event.metaKey) return; + + if (!event.shiftKey) { + switch (event.key) { + case "ArrowLeft": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const prevLink = document.querySelector('link[rel="prev"]'); + if (prevLink && prevLink.href) { + window.location.href = prevLink.href; + event.preventDefault(); + } + break; + case "ArrowRight": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const nextLink = document.querySelector('link[rel="next"]'); + if (nextLink && nextLink.href) { + window.location.href = nextLink.href; + event.preventDefault(); + } + break; + } + } + + // some keyboard layouts may need Shift to get / + switch (event.key) { + case "/": + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; + Documentation.focusSearchBar(); + event.preventDefault(); + } + }); + }, +}; + +// quick alias for translations +const _ = Documentation.gettext; + +_ready(Documentation.init); diff --git a/_static/documentation_options.js b/_static/documentation_options.js new file mode 100644 index 00000000..860861c0 --- /dev/null +++ b/_static/documentation_options.js @@ -0,0 +1,13 @@ +const DOCUMENTATION_OPTIONS = { + VERSION: 'v0.6.3', + LANGUAGE: 'en', + COLLAPSE_INDEX: false, + BUILDER: 'html', + FILE_SUFFIX: '.html', + LINK_SUFFIX: '.html', + HAS_SOURCE: true, + SOURCELINK_SUFFIX: '.txt', + NAVIGATION_WITH_KEYS: false, + SHOW_SEARCH_SUMMARY: true, + ENABLE_SEARCH_SHORTCUTS: true, +}; \ No newline at end of file diff --git a/_static/file.png b/_static/file.png new file mode 100644 index 00000000..a858a410 Binary files /dev/null and b/_static/file.png differ diff --git a/_static/fonts/Lato/lato-bold.eot b/_static/fonts/Lato/lato-bold.eot new file mode 100644 index 00000000..3361183a Binary files /dev/null and b/_static/fonts/Lato/lato-bold.eot differ diff --git a/_static/fonts/Lato/lato-bold.ttf 
b/_static/fonts/Lato/lato-bold.ttf new file mode 100644 index 00000000..29f691d5 Binary files /dev/null and b/_static/fonts/Lato/lato-bold.ttf differ diff --git a/_static/fonts/Lato/lato-bold.woff b/_static/fonts/Lato/lato-bold.woff new file mode 100644 index 00000000..c6dff51f Binary files /dev/null and b/_static/fonts/Lato/lato-bold.woff differ diff --git a/_static/fonts/Lato/lato-bold.woff2 b/_static/fonts/Lato/lato-bold.woff2 new file mode 100644 index 00000000..bb195043 Binary files /dev/null and b/_static/fonts/Lato/lato-bold.woff2 differ diff --git a/_static/fonts/Lato/lato-bolditalic.eot b/_static/fonts/Lato/lato-bolditalic.eot new file mode 100644 index 00000000..3d415493 Binary files /dev/null and b/_static/fonts/Lato/lato-bolditalic.eot differ diff --git a/_static/fonts/Lato/lato-bolditalic.ttf b/_static/fonts/Lato/lato-bolditalic.ttf new file mode 100644 index 00000000..f402040b Binary files /dev/null and b/_static/fonts/Lato/lato-bolditalic.ttf differ diff --git a/_static/fonts/Lato/lato-bolditalic.woff b/_static/fonts/Lato/lato-bolditalic.woff new file mode 100644 index 00000000..88ad05b9 Binary files /dev/null and b/_static/fonts/Lato/lato-bolditalic.woff differ diff --git a/_static/fonts/Lato/lato-bolditalic.woff2 b/_static/fonts/Lato/lato-bolditalic.woff2 new file mode 100644 index 00000000..c4e3d804 Binary files /dev/null and b/_static/fonts/Lato/lato-bolditalic.woff2 differ diff --git a/_static/fonts/Lato/lato-italic.eot b/_static/fonts/Lato/lato-italic.eot new file mode 100644 index 00000000..3f826421 Binary files /dev/null and b/_static/fonts/Lato/lato-italic.eot differ diff --git a/_static/fonts/Lato/lato-italic.ttf b/_static/fonts/Lato/lato-italic.ttf new file mode 100644 index 00000000..b4bfc9b2 Binary files /dev/null and b/_static/fonts/Lato/lato-italic.ttf differ diff --git a/_static/fonts/Lato/lato-italic.woff b/_static/fonts/Lato/lato-italic.woff new file mode 100644 index 00000000..76114bc0 Binary files /dev/null and b/_static/fonts/Lato/lato-italic.woff differ diff --git a/_static/fonts/Lato/lato-italic.woff2 b/_static/fonts/Lato/lato-italic.woff2 new file mode 100644 index 00000000..3404f37e Binary files /dev/null and b/_static/fonts/Lato/lato-italic.woff2 differ diff --git a/_static/fonts/Lato/lato-regular.eot b/_static/fonts/Lato/lato-regular.eot new file mode 100644 index 00000000..11e3f2a5 Binary files /dev/null and b/_static/fonts/Lato/lato-regular.eot differ diff --git a/_static/fonts/Lato/lato-regular.ttf b/_static/fonts/Lato/lato-regular.ttf new file mode 100644 index 00000000..74decd9e Binary files /dev/null and b/_static/fonts/Lato/lato-regular.ttf differ diff --git a/_static/fonts/Lato/lato-regular.woff b/_static/fonts/Lato/lato-regular.woff new file mode 100644 index 00000000..ae1307ff Binary files /dev/null and b/_static/fonts/Lato/lato-regular.woff differ diff --git a/_static/fonts/Lato/lato-regular.woff2 b/_static/fonts/Lato/lato-regular.woff2 new file mode 100644 index 00000000..3bf98433 Binary files /dev/null and b/_static/fonts/Lato/lato-regular.woff2 differ diff --git a/_static/fonts/RobotoSlab/roboto-slab-v7-bold.eot b/_static/fonts/RobotoSlab/roboto-slab-v7-bold.eot new file mode 100644 index 00000000..79dc8efe Binary files /dev/null and b/_static/fonts/RobotoSlab/roboto-slab-v7-bold.eot differ diff --git a/_static/fonts/RobotoSlab/roboto-slab-v7-bold.ttf b/_static/fonts/RobotoSlab/roboto-slab-v7-bold.ttf new file mode 100644 index 00000000..df5d1df2 Binary files /dev/null and b/_static/fonts/RobotoSlab/roboto-slab-v7-bold.ttf differ diff 
--git a/_static/fonts/RobotoSlab/roboto-slab-v7-bold.woff b/_static/fonts/RobotoSlab/roboto-slab-v7-bold.woff new file mode 100644 index 00000000..6cb60000 Binary files /dev/null and b/_static/fonts/RobotoSlab/roboto-slab-v7-bold.woff differ diff --git a/_static/fonts/RobotoSlab/roboto-slab-v7-bold.woff2 b/_static/fonts/RobotoSlab/roboto-slab-v7-bold.woff2 new file mode 100644 index 00000000..7059e231 Binary files /dev/null and b/_static/fonts/RobotoSlab/roboto-slab-v7-bold.woff2 differ diff --git a/_static/fonts/RobotoSlab/roboto-slab-v7-regular.eot b/_static/fonts/RobotoSlab/roboto-slab-v7-regular.eot new file mode 100644 index 00000000..2f7ca78a Binary files /dev/null and b/_static/fonts/RobotoSlab/roboto-slab-v7-regular.eot differ diff --git a/_static/fonts/RobotoSlab/roboto-slab-v7-regular.ttf b/_static/fonts/RobotoSlab/roboto-slab-v7-regular.ttf new file mode 100644 index 00000000..eb52a790 Binary files /dev/null and b/_static/fonts/RobotoSlab/roboto-slab-v7-regular.ttf differ diff --git a/_static/fonts/RobotoSlab/roboto-slab-v7-regular.woff b/_static/fonts/RobotoSlab/roboto-slab-v7-regular.woff new file mode 100644 index 00000000..f815f63f Binary files /dev/null and b/_static/fonts/RobotoSlab/roboto-slab-v7-regular.woff differ diff --git a/_static/fonts/RobotoSlab/roboto-slab-v7-regular.woff2 b/_static/fonts/RobotoSlab/roboto-slab-v7-regular.woff2 new file mode 100644 index 00000000..f2c76e5b Binary files /dev/null and b/_static/fonts/RobotoSlab/roboto-slab-v7-regular.woff2 differ diff --git a/_static/jquery.js b/_static/jquery.js new file mode 100644 index 00000000..c4c6022f --- /dev/null +++ b/_static/jquery.js @@ -0,0 +1,2 @@ +/*! jQuery v3.6.0 | (c) OpenJS Foundation and other contributors | jquery.org/license */ +!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(C,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,g=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,v=n.hasOwnProperty,a=v.toString,l=a.call(Object),y={},m=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType&&"function"!=typeof e.item},x=function(e){return null!=e&&e===e.window},E=C.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function b(e,t,n){var r,i,o=(n=n||E).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function w(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.6.0",S=function(e,t){return new S.fn.init(e,t)};function p(e){var t=!!e&&"length"in e&&e.length,n=w(e);return!m(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+W),PSEUDO:new RegExp("^"+F),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+R+")$","i"),needsContext:new 
RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\[\\da-fA-F]{1,6}"+M+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){T()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(t=O.call(p.childNodes),p.childNodes),t[p.childNodes.length].nodeType}catch(e){H={apply:t.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,p=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==p&&9!==p&&11!==p)return n;if(!r&&(T(e),e=e||C,E)){if(11!==p&&(u=Z.exec(t)))if(i=u[1]){if(9===p){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return H.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&d.getElementsByClassName&&e.getElementsByClassName)return H.apply(n,e.getElementsByClassName(i)),n}if(d.qsa&&!N[t+" "]&&(!v||!v.test(t))&&(1!==p||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===p&&(U.test(t)||z.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&d.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=S)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+xe(l[o]);c=l.join(",")}try{return H.apply(n,f.querySelectorAll(c)),n}catch(e){N(t,!0)}finally{s===S&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>b.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[S]=!0,e}function ce(e){var t=C.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){var n=e.split("|"),r=n.length;while(r--)b.attrHandle[n[r]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function de(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in d=se.support={},i=se.isXML=function(e){var t=e&&e.namespaceURI,n=e&&(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},T=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:p;return 
r!=C&&9===r.nodeType&&r.documentElement&&(a=(C=r).documentElement,E=!i(C),p!=C&&(n=C.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),d.scope=ce(function(e){return a.appendChild(e).appendChild(C.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),d.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),d.getElementsByTagName=ce(function(e){return e.appendChild(C.createComment("")),!e.getElementsByTagName("*").length}),d.getElementsByClassName=K.test(C.getElementsByClassName),d.getById=ce(function(e){return a.appendChild(e).id=S,!C.getElementsByName||!C.getElementsByName(S).length}),d.getById?(b.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return e.getAttribute("id")===t}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(b.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),b.find.TAG=d.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):d.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},b.find.CLASS=d.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(d.qsa=K.test(C.querySelectorAll))&&(ce(function(e){var t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+R+")"),e.querySelectorAll("[id~="+S+"-]").length||v.push("~="),(t=C.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+M+"*name"+M+"*="+M+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+S+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var t=C.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(d.matchesSelector=K.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){d.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",F)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=K.test(a.compareDocumentPosition),y=t||K.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return 
e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},j=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!d.sortDetached&&t.compareDocumentPosition(e)===n?e==C||e.ownerDocument==p&&y(p,e)?-1:t==C||t.ownerDocument==p&&y(p,t)?1:u?P(u,e)-P(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==C?-1:t==C?1:i?-1:o?1:u?P(u,e)-P(u,t):0;if(i===o)return pe(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?pe(a[r],s[r]):a[r]==p?-1:s[r]==p?1:0}),C},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(T(e),d.matchesSelector&&E&&!N[t+" "]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||d.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){N(t,!0)}return 0":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&m(e,function(e){return t.test("string"==typeof e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function j(e,n,r){return m(n)?S.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?S.grep(e,function(e){return e===n!==r}):"string"!=typeof n?S.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(S.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||D,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:q.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof S?t[0]:t,S.merge(this,S.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:E,!0)),N.test(r[1])&&S.isPlainObject(t))for(r in t)m(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=E.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):m(e)?void 0!==n.ready?n.ready(e):e(S):S.makeArray(e,this)}).prototype=S.fn,D=S(E);var L=/^(?:parents|prev(?:Until|All))/,H={children:!0,contents:!0,next:!0,prev:!0};function O(e,t){while((e=e[t])&&1!==e.nodeType);return e}S.fn.extend({has:function(e){var t=S(e,this),n=t.length;return this.filter(function(){for(var 
e=0;e\x20\t\r\n\f]*)/i,he=/^$|^module$|\/(?:java|ecma)script/i;ce=E.createDocumentFragment().appendChild(E.createElement("div")),(fe=E.createElement("input")).setAttribute("type","radio"),fe.setAttribute("checked","checked"),fe.setAttribute("name","t"),ce.appendChild(fe),y.checkClone=ce.cloneNode(!0).cloneNode(!0).lastChild.checked,ce.innerHTML="",y.noCloneChecked=!!ce.cloneNode(!0).lastChild.defaultValue,ce.innerHTML="",y.option=!!ce.lastChild;var ge={thead:[1,"","
"],col:[2,"","
"],tr:[2,"","
"],td:[3,"","
"],_default:[0,"",""]};function ve(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&A(e,t)?S.merge([e],n):n}function ye(e,t){for(var n=0,r=e.length;n",""]);var me=/<|&#?\w+;/;function xe(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d\s*$/g;function je(e,t){return A(e,"table")&&A(11!==t.nodeType?t:t.firstChild,"tr")&&S(e).children("tbody")[0]||e}function De(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function qe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Le(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n").attr(n.scriptAttrs||{}).prop({charset:n.scriptCharset,src:n.url}).on("load error",i=function(e){r.remove(),i=null,e&&t("error"===e.type?404:200,e.type)}),E.head.appendChild(r[0])},abort:function(){i&&i()}}});var _t,zt=[],Ut=/(=)\?(?=&|$)|\?\?/;S.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=zt.pop()||S.expando+"_"+wt.guid++;return this[e]=!0,e}}),S.ajaxPrefilter("json jsonp",function(e,t,n){var r,i,o,a=!1!==e.jsonp&&(Ut.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Ut.test(e.data)&&"data");if(a||"jsonp"===e.dataTypes[0])return r=e.jsonpCallback=m(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,a?e[a]=e[a].replace(Ut,"$1"+r):!1!==e.jsonp&&(e.url+=(Tt.test(e.url)?"&":"?")+e.jsonp+"="+r),e.converters["script json"]=function(){return o||S.error(r+" was not called"),o[0]},e.dataTypes[0]="json",i=C[r],C[r]=function(){o=arguments},n.always(function(){void 0===i?S(C).removeProp(r):C[r]=i,e[r]&&(e.jsonpCallback=t.jsonpCallback,zt.push(r)),o&&m(i)&&i(o[0]),o=i=void 0}),"script"}),y.createHTMLDocument=((_t=E.implementation.createHTMLDocument("").body).innerHTML="
",2===_t.childNodes.length),S.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(y.createHTMLDocument?((r=(t=E.implementation.createHTMLDocument("")).createElement("base")).href=E.location.href,t.head.appendChild(r)):t=E),o=!n&&[],(i=N.exec(e))?[t.createElement(i[1])]:(i=xe([e],t,o),o&&o.length&&S(o).remove(),S.merge([],i.childNodes)));var r,i,o},S.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return-1").append(S.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},S.expr.pseudos.animated=function(t){return S.grep(S.timers,function(e){return t===e.elem}).length},S.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=S.css(e,"position"),c=S(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=S.css(e,"top"),u=S.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),m(t)&&(t=t.call(e,n,S.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):c.css(f)}},S.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){S.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===S.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===S.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=S(e).offset()).top+=S.css(e,"borderTopWidth",!0),i.left+=S.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-S.css(r,"marginTop",!0),left:t.left-i.left-S.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===S.css(e,"position"))e=e.offsetParent;return e||re})}}),S.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;S.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),S.each(["top","left"],function(e,n){S.cssHooks[n]=Fe(y.pixelPosition,function(e,t){if(t)return t=We(e,n),Pe.test(t)?S(e).position()[n]+"px":t})}),S.each({Height:"height",Width:"width"},function(a,s){S.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){S.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?S.css(e,t,i):S.style(e,t,n,i)},s,n?e:void 0,n)}})}),S.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){S.fn[t]=function(e){return this.on(t,e)}}),S.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return 
this.mouseenter(e).mouseleave(t||e)}}),S.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){S.fn[n]=function(e,t){return 0"),n("table.docutils.footnote").wrap("
"),n("table.docutils.citation").wrap("
"),n(".wy-menu-vertical ul").not(".simple").siblings("a").each((function(){var t=n(this);expand=n(''),expand.on("click",(function(n){return e.toggleCurrent(t),n.stopPropagation(),!1})),t.prepend(expand)}))},reset:function(){var n=encodeURI(window.location.hash)||"#";try{var e=$(".wy-menu-vertical"),t=e.find('[href="'+n+'"]');if(0===t.length){var i=$('.document [id="'+n.substring(1)+'"]').closest("div.section");0===(t=e.find('[href="#'+i.attr("id")+'"]')).length&&(t=e.find('[href="#"]'))}if(t.length>0){$(".wy-menu-vertical .current").removeClass("current").attr("aria-expanded","false"),t.addClass("current").attr("aria-expanded","true"),t.closest("li.toctree-l1").parent().addClass("current").attr("aria-expanded","true");for(let n=1;n<=10;n++)t.closest("li.toctree-l"+n).addClass("current").attr("aria-expanded","true");t[0].scrollIntoView()}}catch(n){console.log("Error expanding nav for anchor",n)}},onScroll:function(){this.winScroll=!1;var n=this.win.scrollTop(),e=n+this.winHeight,t=this.navBar.scrollTop()+(n-this.winPosition);n<0||e>this.docHeight||(this.navBar.scrollTop(t),this.winPosition=n)},onResize:function(){this.winResize=!1,this.winHeight=this.win.height(),this.docHeight=$(document).height()},hashChange:function(){this.linkScroll=!0,this.win.one("hashchange",(function(){this.linkScroll=!1}))},toggleCurrent:function(n){var e=n.closest("li");e.siblings("li.current").removeClass("current").attr("aria-expanded","false"),e.siblings().find("li.current").removeClass("current").attr("aria-expanded","false");var t=e.find("> ul li");t.length&&(t.removeClass("current").attr("aria-expanded","false"),e.toggleClass("current").attr("aria-expanded",(function(n,e){return"true"==e?"false":"true"})))}},"undefined"!=typeof window&&(window.SphinxRtdTheme={Navigation:n.exports.ThemeNav,StickyNav:n.exports.ThemeNav}),function(){for(var n=0,e=["ms","moz","webkit","o"],t=0;t +
Languages
+ ${config.projects.translations + .map( + (translation) => ` +
+ ${translation.language.code} +
+ `, + ) + .join("\n")} + + `; + return languagesHTML; + } + + function renderVersions(config) { + if (!config.versions.active.length) { + return ""; + } + const versionsHTML = ` +
+
Versions
+ ${config.versions.active + .map( + (version) => ` +
+ ${version.slug} +
+ `, + ) + .join("\n")} +
+ `; + return versionsHTML; + } + + function renderDownloads(config) { + if (!Object.keys(config.versions.current.downloads).length) { + return ""; + } + const downloadsNameDisplay = { + pdf: "PDF", + epub: "Epub", + htmlzip: "HTML", + }; + + const downloadsHTML = ` +
+
Downloads
+ ${Object.entries(config.versions.current.downloads) + .map( + ([name, url]) => ` +
+ ${downloadsNameDisplay[name]} +
+ `, + ) + .join("\n")} +
+ `; + return downloadsHTML; + } + + document.addEventListener("readthedocs-addons-data-ready", function (event) { + const config = event.detail.data(); + + const flyout = ` +
+ + Read the Docs + v: ${config.versions.current.slug} + + +
+
+ ${renderLanguages(config)} + ${renderVersions(config)} + ${renderDownloads(config)} +
+
On Read the Docs
+
+ Project Home +
+
+ Builds +
+
+ Downloads +
+
+
+
Search
+
+
+ +
+
+
+
+ + Hosted by Read the Docs + +
+
+ `; + + // Inject the generated flyout into the body HTML element. + document.body.insertAdjacentHTML("beforeend", flyout); + + // Trigger the Read the Docs Addons Search modal when clicking on the "Search docs" input from inside the flyout. + document + .querySelector("#flyout-search-form") + .addEventListener("focusin", () => { + const event = new CustomEvent("readthedocs-search-show"); + document.dispatchEvent(event); + }); + }) +} + +if (themeLanguageSelector || themeVersionSelector) { + function onSelectorSwitch(event) { + const option = event.target.selectedIndex; + const item = event.target.options[option]; + window.location.href = item.dataset.url; + } + + document.addEventListener("readthedocs-addons-data-ready", function (event) { + const config = event.detail.data(); + + const versionSwitch = document.querySelector( + "div.switch-menus > div.version-switch", + ); + if (themeVersionSelector) { + let versions = config.versions.active; + if (config.versions.current.hidden || config.versions.current.type === "external") { + versions.unshift(config.versions.current); + } + const versionSelect = ` + + `; + + versionSwitch.innerHTML = versionSelect; + versionSwitch.firstElementChild.addEventListener("change", onSelectorSwitch); + } + + const languageSwitch = document.querySelector( + "div.switch-menus > div.language-switch", + ); + + if (themeLanguageSelector) { + if (config.projects.translations.length) { + // Add the current language to the options on the selector + let languages = config.projects.translations.concat( + config.projects.current, + ); + languages = languages.sort((a, b) => + a.language.name.localeCompare(b.language.name), + ); + + const languageSelect = ` + + `; + + languageSwitch.innerHTML = languageSelect; + languageSwitch.firstElementChild.addEventListener("change", onSelectorSwitch); + } + else { + languageSwitch.remove(); + } + } + }); +} + +document.addEventListener("readthedocs-addons-data-ready", function (event) { + // Trigger the Read the Docs Addons Search modal when clicking on "Search docs" input from the topnav. + document + .querySelector("[role='search'] input") + .addEventListener("focusin", () => { + const event = new CustomEvent("readthedocs-search-show"); + document.dispatchEvent(event); + }); +}); \ No newline at end of file diff --git a/_static/jupyterlite_badge_logo.svg b/_static/jupyterlite_badge_logo.svg new file mode 100644 index 00000000..5de36d7f --- /dev/null +++ b/_static/jupyterlite_badge_logo.svg @@ -0,0 +1,3 @@ + + +launchlaunchlitelite \ No newline at end of file diff --git a/_static/language_data.js b/_static/language_data.js new file mode 100644 index 00000000..367b8ed8 --- /dev/null +++ b/_static/language_data.js @@ -0,0 +1,199 @@ +/* + * language_data.js + * ~~~~~~~~~~~~~~~~ + * + * This script contains the language-specific data used by searchtools.js, + * namely the list of stopwords, stemmer, scorer and splitter. + * + * :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ + +var stopwords = ["a", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "near", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with"]; + + +/* Non-minified version is copied as a separate JS file, if available */ + +/** + * Porter Stemmer + */ +var Stemmer = function() { + + var step2list = { + ational: 'ate', + tional: 'tion', + enci: 'ence', + anci: 'ance', + izer: 'ize', + bli: 'ble', + alli: 'al', + entli: 'ent', + eli: 'e', + ousli: 'ous', + ization: 'ize', + ation: 'ate', + ator: 'ate', + alism: 'al', + iveness: 'ive', + fulness: 'ful', + ousness: 'ous', + aliti: 'al', + iviti: 'ive', + biliti: 'ble', + logi: 'log' + }; + + var step3list = { + icate: 'ic', + ative: '', + alize: 'al', + iciti: 'ic', + ical: 'ic', + ful: '', + ness: '' + }; + + var c = "[^aeiou]"; // consonant + var v = "[aeiouy]"; // vowel + var C = c + "[^aeiouy]*"; // consonant sequence + var V = v + "[aeiou]*"; // vowel sequence + + var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0 + var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1 + var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1 + var s_v = "^(" + C + ")?" + v; // vowel in stem + + this.stemWord = function (w) { + var stem; + var suffix; + var firstch; + var origword = w; + + if (w.length < 3) + return w; + + var re; + var re2; + var re3; + var re4; + + firstch = w.substr(0,1); + if (firstch == "y") + w = firstch.toUpperCase() + w.substr(1); + + // Step 1a + re = /^(.+?)(ss|i)es$/; + re2 = /^(.+?)([^s])s$/; + + if (re.test(w)) + w = w.replace(re,"$1$2"); + else if (re2.test(w)) + w = w.replace(re2,"$1$2"); + + // Step 1b + re = /^(.+?)eed$/; + re2 = /^(.+?)(ed|ing)$/; + if (re.test(w)) { + var fp = re.exec(w); + re = new RegExp(mgr0); + if (re.test(fp[1])) { + re = /.$/; + w = w.replace(re,""); + } + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = new RegExp(s_v); + if (re2.test(stem)) { + w = stem; + re2 = /(at|bl|iz)$/; + re3 = new RegExp("([^aeiouylsz])\\1$"); + re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re2.test(w)) + w = w + "e"; + else if (re3.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + else if (re4.test(w)) + w = w + "e"; + } + } + + // Step 1c + re = /^(.+?)y$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(s_v); + if (re.test(stem)) + w = stem + "i"; + } + + // Step 2 + re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step2list[suffix]; + } + + // Step 3 + re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step3list[suffix]; + } + + // Step 4 + re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + re2 = /^(.+?)(s|t)(ion)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + if (re.test(stem)) + w = stem; + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = new RegExp(mgr1); + if (re2.test(stem)) + w = stem; + } + + // Step 5 + re = /^(.+?)e$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + re2 = 
new RegExp(meq1); + re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) + w = stem; + } + re = /ll$/; + re2 = new RegExp(mgr1); + if (re.test(w) && re2.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + + // and turn initial Y back to y + if (firstch == "y") + w = firstch.toLowerCase() + w.substr(1); + return w; + } +} + diff --git a/_static/minus.png b/_static/minus.png new file mode 100644 index 00000000..d96755fd Binary files /dev/null and b/_static/minus.png differ diff --git a/_static/no_image.png b/_static/no_image.png new file mode 100644 index 00000000..8c2d48d5 Binary files /dev/null and b/_static/no_image.png differ diff --git a/_static/plus.png b/_static/plus.png new file mode 100644 index 00000000..7107cec9 Binary files /dev/null and b/_static/plus.png differ diff --git a/_static/pygments.css b/_static/pygments.css new file mode 100644 index 00000000..84ab3030 --- /dev/null +++ b/_static/pygments.css @@ -0,0 +1,75 @@ +pre { line-height: 125%; } +td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +td.linenos .special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +.highlight .hll { background-color: #ffffcc } +.highlight { background: #f8f8f8; } +.highlight .c { color: #3D7B7B; font-style: italic } /* Comment */ +.highlight .err { border: 1px solid #FF0000 } /* Error */ +.highlight .k { color: #008000; font-weight: bold } /* Keyword */ +.highlight .o { color: #666666 } /* Operator */ +.highlight .ch { color: #3D7B7B; font-style: italic } /* Comment.Hashbang */ +.highlight .cm { color: #3D7B7B; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #9C6500 } /* Comment.Preproc */ +.highlight .cpf { color: #3D7B7B; font-style: italic } /* Comment.PreprocFile */ +.highlight .c1 { color: #3D7B7B; font-style: italic } /* Comment.Single */ +.highlight .cs { color: #3D7B7B; font-style: italic } /* Comment.Special */ +.highlight .gd { color: #A00000 } /* Generic.Deleted */ +.highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */ +.highlight .gr { color: #E40000 } /* Generic.Error */ +.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ +.highlight .gi { color: #008400 } /* Generic.Inserted */ +.highlight .go { color: #717171 } /* Generic.Output */ +.highlight .gp { color: #000080; font-weight: bold } /* Generic.Prompt */ +.highlight .gs { font-weight: bold } /* Generic.Strong */ +.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ +.highlight .gt { color: #0044DD } /* Generic.Traceback */ +.highlight .kc { color: #008000; font-weight: bold } /* Keyword.Constant */ +.highlight .kd { color: #008000; font-weight: bold } /* Keyword.Declaration */ +.highlight .kn { color: #008000; font-weight: bold } /* Keyword.Namespace */ +.highlight .kp { color: #008000 } /* Keyword.Pseudo */ +.highlight .kr { color: #008000; font-weight: bold } /* Keyword.Reserved */ +.highlight .kt { color: #B00040 } /* Keyword.Type */ +.highlight .m { color: #666666 } /* Literal.Number */ +.highlight .s { color: #BA2121 } /* Literal.String */ +.highlight .na { color: #687822 } /* Name.Attribute */ +.highlight 
.nb { color: #008000 } /* Name.Builtin */ +.highlight .nc { color: #0000FF; font-weight: bold } /* Name.Class */ +.highlight .no { color: #880000 } /* Name.Constant */ +.highlight .nd { color: #AA22FF } /* Name.Decorator */ +.highlight .ni { color: #717171; font-weight: bold } /* Name.Entity */ +.highlight .ne { color: #CB3F38; font-weight: bold } /* Name.Exception */ +.highlight .nf { color: #0000FF } /* Name.Function */ +.highlight .nl { color: #767600 } /* Name.Label */ +.highlight .nn { color: #0000FF; font-weight: bold } /* Name.Namespace */ +.highlight .nt { color: #008000; font-weight: bold } /* Name.Tag */ +.highlight .nv { color: #19177C } /* Name.Variable */ +.highlight .ow { color: #AA22FF; font-weight: bold } /* Operator.Word */ +.highlight .w { color: #bbbbbb } /* Text.Whitespace */ +.highlight .mb { color: #666666 } /* Literal.Number.Bin */ +.highlight .mf { color: #666666 } /* Literal.Number.Float */ +.highlight .mh { color: #666666 } /* Literal.Number.Hex */ +.highlight .mi { color: #666666 } /* Literal.Number.Integer */ +.highlight .mo { color: #666666 } /* Literal.Number.Oct */ +.highlight .sa { color: #BA2121 } /* Literal.String.Affix */ +.highlight .sb { color: #BA2121 } /* Literal.String.Backtick */ +.highlight .sc { color: #BA2121 } /* Literal.String.Char */ +.highlight .dl { color: #BA2121 } /* Literal.String.Delimiter */ +.highlight .sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */ +.highlight .s2 { color: #BA2121 } /* Literal.String.Double */ +.highlight .se { color: #AA5D1F; font-weight: bold } /* Literal.String.Escape */ +.highlight .sh { color: #BA2121 } /* Literal.String.Heredoc */ +.highlight .si { color: #A45A77; font-weight: bold } /* Literal.String.Interpol */ +.highlight .sx { color: #008000 } /* Literal.String.Other */ +.highlight .sr { color: #A45A77 } /* Literal.String.Regex */ +.highlight .s1 { color: #BA2121 } /* Literal.String.Single */ +.highlight .ss { color: #19177C } /* Literal.String.Symbol */ +.highlight .bp { color: #008000 } /* Name.Builtin.Pseudo */ +.highlight .fm { color: #0000FF } /* Name.Function.Magic */ +.highlight .vc { color: #19177C } /* Name.Variable.Class */ +.highlight .vg { color: #19177C } /* Name.Variable.Global */ +.highlight .vi { color: #19177C } /* Name.Variable.Instance */ +.highlight .vm { color: #19177C } /* Name.Variable.Magic */ +.highlight .il { color: #666666 } /* Literal.Number.Integer.Long */ \ No newline at end of file diff --git a/_static/searchtools.js b/_static/searchtools.js new file mode 100644 index 00000000..b08d58c9 --- /dev/null +++ b/_static/searchtools.js @@ -0,0 +1,620 @@ +/* + * searchtools.js + * ~~~~~~~~~~~~~~~~ + * + * Sphinx JavaScript utilities for the full-text search. + * + * :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +/** + * Simple result scoring code. + */ +if (typeof Scorer === "undefined") { + var Scorer = { + // Implement the following function to further tweak the score for each result + // The function takes a result array [docname, title, anchor, descr, score, filename] + // and returns the new score. 
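+ // A hypothetical override (illustrative sketch, not part of the shipped file):
+ // down-weight hits from API reference pages, assuming their docnames start
+ // with "reference/":
+ //   score: ([docname, title, anchor, descr, score, filename]) =>
+ //     docname.startsWith("reference/") ? score * 0.5 : score,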
+ /* + score: result => { + const [docname, title, anchor, descr, score, filename] = result + return score + }, + */ + + // query matches the full name of an object + objNameMatch: 11, + // or matches in the last dotted part of the object name + objPartialMatch: 6, + // Additive scores depending on the priority of the object + objPrio: { + 0: 15, // used to be importantResults + 1: 5, // used to be objectResults + 2: -5, // used to be unimportantResults + }, + // Used when the priority is not in the mapping. + objPrioDefault: 0, + + // query found in title + title: 15, + partialTitle: 7, + // query found in terms + term: 5, + partialTerm: 2, + }; +} + +const _removeChildren = (element) => { + while (element && element.lastChild) element.removeChild(element.lastChild); +}; + +/** + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping + */ +const _escapeRegExp = (string) => + string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string + +const _displayItem = (item, searchTerms, highlightTerms) => { + const docBuilder = DOCUMENTATION_OPTIONS.BUILDER; + const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX; + const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX; + const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY; + const contentRoot = document.documentElement.dataset.content_root; + + const [docName, title, anchor, descr, score, _filename] = item; + + let listItem = document.createElement("li"); + let requestUrl; + let linkUrl; + if (docBuilder === "dirhtml") { + // dirhtml builder + let dirname = docName + "/"; + if (dirname.match(/\/index\/$/)) + dirname = dirname.substring(0, dirname.length - 6); + else if (dirname === "index/") dirname = ""; + requestUrl = contentRoot + dirname; + linkUrl = requestUrl; + } else { + // normal html builders + requestUrl = contentRoot + docName + docFileSuffix; + linkUrl = docName + docLinkSuffix; + } + let linkEl = listItem.appendChild(document.createElement("a")); + linkEl.href = linkUrl + anchor; + linkEl.dataset.score = score; + linkEl.innerHTML = title; + if (descr) { + listItem.appendChild(document.createElement("span")).innerHTML = + " (" + descr + ")"; + // highlight search terms in the description + if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js + highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted")); + } + else if (showSearchSummary) + fetch(requestUrl) + .then((responseData) => responseData.text()) + .then((data) => { + if (data) + listItem.appendChild( + Search.makeSearchSummary(data, searchTerms, anchor) + ); + // highlight search terms in the summary + if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js + highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted")); + }); + Search.output.appendChild(listItem); +}; +const _finishSearch = (resultCount) => { + Search.stopPulse(); + Search.title.innerText = _("Search Results"); + if (!resultCount) + Search.status.innerText = Documentation.gettext( + "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories." + ); + else + Search.status.innerText = _( + "Search finished, found ${resultCount} page(s) matching the search query." 
+ ).replace('${resultCount}', resultCount); +}; +const _displayNextItem = ( + results, + resultCount, + searchTerms, + highlightTerms, +) => { + // results left, load the summary and display it + // this is intended to be dynamic (don't sub resultsCount) + if (results.length) { + _displayItem(results.pop(), searchTerms, highlightTerms); + setTimeout( + () => _displayNextItem(results, resultCount, searchTerms, highlightTerms), + 5 + ); + } + // search finished, update title and status message + else _finishSearch(resultCount); +}; +// Helper function used by query() to order search results. +// Each input is an array of [docname, title, anchor, descr, score, filename]. +// Order the results by score (in opposite order of appearance, since the +// `_displayNextItem` function uses pop() to retrieve items) and then alphabetically. +const _orderResultsByScoreThenName = (a, b) => { + const leftScore = a[4]; + const rightScore = b[4]; + if (leftScore === rightScore) { + // same score: sort alphabetically + const leftTitle = a[1].toLowerCase(); + const rightTitle = b[1].toLowerCase(); + if (leftTitle === rightTitle) return 0; + return leftTitle > rightTitle ? -1 : 1; // inverted is intentional + } + return leftScore > rightScore ? 1 : -1; +}; + +/** + * Default splitQuery function. Can be overridden in ``sphinx.search`` with a + * custom function per language. + * + * The regular expression works by splitting the string on consecutive characters + * that are not Unicode letters, numbers, underscores, or emoji characters. + * This is the same as ``\W+`` in Python, preserving the surrogate pair area. + */ +if (typeof splitQuery === "undefined") { + var splitQuery = (query) => query + .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu) + .filter(term => term) // remove remaining empty strings +} + +/** + * Search Module + */ +const Search = { + _index: null, + _queued_query: null, + _pulse_status: -1, + + htmlToText: (htmlString, anchor) => { + const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html'); + for (const removalQuery of [".headerlink", "script", "style"]) { + htmlElement.querySelectorAll(removalQuery).forEach((el) => { el.remove() }); + } + if (anchor) { + const anchorContent = htmlElement.querySelector(`[role="main"] ${anchor}`); + if (anchorContent) return anchorContent.textContent; + + console.warn( + `Anchored content block not found. Sphinx search tries to obtain it via DOM query '[role=main] ${anchor}'. Check your theme or template.` + ); + } + + // if anchor not specified or not found, fall back to main content + const docContent = htmlElement.querySelector('[role="main"]'); + if (docContent) return docContent.textContent; + + console.warn( + "Content block not found. Sphinx search tries to obtain it via DOM query '[role=main]'. Check your theme or template." 
+ ); + return ""; + }, + + init: () => { + const query = new URLSearchParams(window.location.search).get("q"); + document + .querySelectorAll('input[name="q"]') + .forEach((el) => (el.value = query)); + if (query) Search.performSearch(query); + }, + + loadIndex: (url) => + (document.body.appendChild(document.createElement("script")).src = url), + + setIndex: (index) => { + Search._index = index; + if (Search._queued_query !== null) { + const query = Search._queued_query; + Search._queued_query = null; + Search.query(query); + } + }, + + hasIndex: () => Search._index !== null, + + deferQuery: (query) => (Search._queued_query = query), + + stopPulse: () => (Search._pulse_status = -1), + + startPulse: () => { + if (Search._pulse_status >= 0) return; + + const pulse = () => { + Search._pulse_status = (Search._pulse_status + 1) % 4; + Search.dots.innerText = ".".repeat(Search._pulse_status); + if (Search._pulse_status >= 0) window.setTimeout(pulse, 500); + }; + pulse(); + }, + + /** + * perform a search for something (or wait until index is loaded) + */ + performSearch: (query) => { + // create the required interface elements + const searchText = document.createElement("h2"); + searchText.textContent = _("Searching"); + const searchSummary = document.createElement("p"); + searchSummary.classList.add("search-summary"); + searchSummary.innerText = ""; + const searchList = document.createElement("ul"); + searchList.classList.add("search"); + + const out = document.getElementById("search-results"); + Search.title = out.appendChild(searchText); + Search.dots = Search.title.appendChild(document.createElement("span")); + Search.status = out.appendChild(searchSummary); + Search.output = out.appendChild(searchList); + + const searchProgress = document.getElementById("search-progress"); + // Some themes don't use the search progress node + if (searchProgress) { + searchProgress.innerText = _("Preparing search..."); + } + Search.startPulse(); + + // index already loaded, the browser was quick! 
+ if (Search.hasIndex()) Search.query(query); + else Search.deferQuery(query); + }, + + _parseQuery: (query) => { + // stem the search terms and add them to the correct list + const stemmer = new Stemmer(); + const searchTerms = new Set(); + const excludedTerms = new Set(); + const highlightTerms = new Set(); + const objectTerms = new Set(splitQuery(query.toLowerCase().trim())); + splitQuery(query.trim()).forEach((queryTerm) => { + const queryTermLower = queryTerm.toLowerCase(); + + // maybe skip this "word" + // stopwords array is from language_data.js + if ( + stopwords.indexOf(queryTermLower) !== -1 || + queryTerm.match(/^\d+$/) + ) + return; + + // stem the word + let word = stemmer.stemWord(queryTermLower); + // select the correct list + if (word[0] === "-") excludedTerms.add(word.substr(1)); + else { + searchTerms.add(word); + highlightTerms.add(queryTermLower); + } + }); + + if (SPHINX_HIGHLIGHT_ENABLED) { // set in sphinx_highlight.js + localStorage.setItem("sphinx_highlight_terms", [...highlightTerms].join(" ")) + } + + // console.debug("SEARCH: searching for:"); + // console.info("required: ", [...searchTerms]); + // console.info("excluded: ", [...excludedTerms]); + + return [query, searchTerms, excludedTerms, highlightTerms, objectTerms]; + }, + + /** + * execute search (requires search index to be loaded) + */ + _performSearch: (query, searchTerms, excludedTerms, highlightTerms, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + const allTitles = Search._index.alltitles; + const indexEntries = Search._index.indexentries; + + // Collect multiple result groups to be sorted separately and then ordered. + // Each is an array of [docname, title, anchor, descr, score, filename]. + const normalResults = []; + const nonMainIndexResults = []; + + _removeChildren(document.getElementById("search-progress")); + + const queryLower = query.toLowerCase().trim(); + for (const [title, foundTitles] of Object.entries(allTitles)) { + if (title.toLowerCase().trim().includes(queryLower) && (queryLower.length >= title.length/2)) { + for (const [file, id] of foundTitles) { + const score = Math.round(Scorer.title * queryLower.length / title.length); + const boost = titles[file] === title ? 1 : 0; // add a boost for document titles + normalResults.push([ + docNames[file], + titles[file] !== title ? `${titles[file]} > ${title}` : title, + id !== null ? "#" + id : "", + null, + score + boost, + filenames[file], + ]); + } + } + } + + // search for explicit entries in index directives + for (const [entry, foundEntries] of Object.entries(indexEntries)) { + if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) { + for (const [file, id, isMain] of foundEntries) { + const score = Math.round(100 * queryLower.length / entry.length); + const result = [ + docNames[file], + titles[file], + id ? 
"#" + id : "", + null, + score, + filenames[file], + ]; + if (isMain) { + normalResults.push(result); + } else { + nonMainIndexResults.push(result); + } + } + } + } + + // lookup as object + objectTerms.forEach((term) => + normalResults.push(...Search.performObjectSearch(term, objectTerms)) + ); + + // lookup as search terms in fulltext + normalResults.push(...Search.performTermsSearch(searchTerms, excludedTerms)); + + // let the scorer override scores with a custom scoring function + if (Scorer.score) { + normalResults.forEach((item) => (item[4] = Scorer.score(item))); + nonMainIndexResults.forEach((item) => (item[4] = Scorer.score(item))); + } + + // Sort each group of results by score and then alphabetically by name. + normalResults.sort(_orderResultsByScoreThenName); + nonMainIndexResults.sort(_orderResultsByScoreThenName); + + // Combine the result groups in (reverse) order. + // Non-main index entries are typically arbitrary cross-references, + // so display them after other results. + let results = [...nonMainIndexResults, ...normalResults]; + + // remove duplicate search results + // note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept + let seen = new Set(); + results = results.reverse().reduce((acc, result) => { + let resultStr = result.slice(0, 4).concat([result[5]]).map(v => String(v)).join(','); + if (!seen.has(resultStr)) { + acc.push(result); + seen.add(resultStr); + } + return acc; + }, []); + + return results.reverse(); + }, + + query: (query) => { + const [searchQuery, searchTerms, excludedTerms, highlightTerms, objectTerms] = Search._parseQuery(query); + const results = Search._performSearch(searchQuery, searchTerms, excludedTerms, highlightTerms, objectTerms); + + // for debugging + //Search.lastresults = results.slice(); // a copy + // console.info("search results:", Search.lastresults); + + // print the results + _displayNextItem(results, results.length, searchTerms, highlightTerms); + }, + + /** + * search for object names + */ + performObjectSearch: (object, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const objects = Search._index.objects; + const objNames = Search._index.objnames; + const titles = Search._index.titles; + + const results = []; + + const objectSearchCallback = (prefix, match) => { + const name = match[4] + const fullname = (prefix ? prefix + "." : "") + name; + const fullnameLower = fullname.toLowerCase(); + if (fullnameLower.indexOf(object) < 0) return; + + let score = 0; + const parts = fullnameLower.split("."); + + // check for different match types: exact matches of full name or + // "last name" (i.e. 
last dotted part) + if (fullnameLower === object || parts.slice(-1)[0] === object) + score += Scorer.objNameMatch; + else if (parts.slice(-1)[0].indexOf(object) > -1) + score += Scorer.objPartialMatch; // matches in last name + + const objName = objNames[match[1]][2]; + const title = titles[match[0]]; + + // If more than one term searched for, we require other words to be + // found in the name/title/description + const otherTerms = new Set(objectTerms); + otherTerms.delete(object); + if (otherTerms.size > 0) { + const haystack = `${prefix} ${name} ${objName} ${title}`.toLowerCase(); + if ( + [...otherTerms].some((otherTerm) => haystack.indexOf(otherTerm) < 0) + ) + return; + } + + let anchor = match[3]; + if (anchor === "") anchor = fullname; + else if (anchor === "-") anchor = objNames[match[1]][1] + "-" + fullname; + + const descr = objName + _(", in ") + title; + + // add custom score for some objects according to scorer + if (Scorer.objPrio.hasOwnProperty(match[2])) + score += Scorer.objPrio[match[2]]; + else score += Scorer.objPrioDefault; + + results.push([ + docNames[match[0]], + fullname, + "#" + anchor, + descr, + score, + filenames[match[0]], + ]); + }; + Object.keys(objects).forEach((prefix) => + objects[prefix].forEach((array) => + objectSearchCallback(prefix, array) + ) + ); + return results; + }, + + /** + * search for full-text terms in the index + */ + performTermsSearch: (searchTerms, excludedTerms) => { + // prepare search + const terms = Search._index.terms; + const titleTerms = Search._index.titleterms; + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + + const scoreMap = new Map(); + const fileMap = new Map(); + + // perform the search on the required terms + searchTerms.forEach((word) => { + const files = []; + const arr = [ + { files: terms[word], score: Scorer.term }, + { files: titleTerms[word], score: Scorer.title }, + ]; + // add support for partial matches + if (word.length > 2) { + const escapedWord = _escapeRegExp(word); + if (!terms.hasOwnProperty(word)) { + Object.keys(terms).forEach((term) => { + if (term.match(escapedWord)) + arr.push({ files: terms[term], score: Scorer.partialTerm }); + }); + } + if (!titleTerms.hasOwnProperty(word)) { + Object.keys(titleTerms).forEach((term) => { + if (term.match(escapedWord)) + arr.push({ files: titleTerms[term], score: Scorer.partialTitle }); + }); + } + } + + // no match but word was a required one + if (arr.every((record) => record.files === undefined)) return; + + // found search word in contents + arr.forEach((record) => { + if (record.files === undefined) return; + + let recordFiles = record.files; + if (recordFiles.length === undefined) recordFiles = [recordFiles]; + files.push(...recordFiles); + + // set score for the word in each file + recordFiles.forEach((file) => { + if (!scoreMap.has(file)) scoreMap.set(file, {}); + scoreMap.get(file)[word] = record.score; + }); + }); + + // create the mapping + files.forEach((file) => { + if (!fileMap.has(file)) fileMap.set(file, [word]); + else if (fileMap.get(file).indexOf(word) === -1) fileMap.get(file).push(word); + }); + }); + + // now check if the files don't contain excluded terms + const results = []; + for (const [file, wordList] of fileMap) { + // check if all requirements are matched + + // as search terms with length < 3 are discarded + const filteredTermCount = [...searchTerms].filter( + (term) => term.length > 2 + ).length; + if ( + wordList.length !== searchTerms.size && + 
wordList.length !== filteredTermCount + ) + continue; + + // ensure that none of the excluded terms is in the search result + if ( + [...excludedTerms].some( + (term) => + terms[term] === file || + titleTerms[term] === file || + (terms[term] || []).includes(file) || + (titleTerms[term] || []).includes(file) + ) + ) + break; + + // select one (max) score for the file. + const score = Math.max(...wordList.map((w) => scoreMap.get(file)[w])); + // add result to the result list + results.push([ + docNames[file], + titles[file], + "", + null, + score, + filenames[file], + ]); + } + return results; + }, + + /** + * helper function to return a node containing the + * search summary for a given text. keywords is a list + * of stemmed words. + */ + makeSearchSummary: (htmlText, keywords, anchor) => { + const text = Search.htmlToText(htmlText, anchor); + if (text === "") return null; + + const textLower = text.toLowerCase(); + const actualStartPosition = [...keywords] + .map((k) => textLower.indexOf(k.toLowerCase())) + .filter((i) => i > -1) + .slice(-1)[0]; + const startWithContext = Math.max(actualStartPosition - 120, 0); + + const top = startWithContext === 0 ? "" : "..."; + const tail = startWithContext + 240 < text.length ? "..." : ""; + + let summary = document.createElement("p"); + summary.classList.add("context"); + summary.textContent = top + text.substr(startWithContext, 240).trim() + tail; + + return summary; + }, +}; + +_ready(Search.init); diff --git a/_static/sg_gallery-binder.css b/_static/sg_gallery-binder.css new file mode 100644 index 00000000..420005d2 --- /dev/null +++ b/_static/sg_gallery-binder.css @@ -0,0 +1,11 @@ +/* CSS for binder integration */ + +div.binder-badge { + margin: 1em auto; + vertical-align: middle; +} + +div.lite-badge { + margin: 1em auto; + vertical-align: middle; +} diff --git a/_static/sg_gallery-dataframe.css b/_static/sg_gallery-dataframe.css new file mode 100644 index 00000000..fac74c43 --- /dev/null +++ b/_static/sg_gallery-dataframe.css @@ -0,0 +1,47 @@ +/* Pandas dataframe css */ +/* Taken from: https://github.com/spatialaudio/nbsphinx/blob/fb3ba670fc1ba5f54d4c487573dbc1b4ecf7e9ff/src/nbsphinx.py#L587-L619 */ +html[data-theme="light"] { + --sg-text-color: #000; + --sg-tr-odd-color: #f5f5f5; + --sg-tr-hover-color: rgba(66, 165, 245, 0.2); +} +html[data-theme="dark"] { + --sg-text-color: #fff; + --sg-tr-odd-color: #373737; + --sg-tr-hover-color: rgba(30, 81, 122, 0.2); +} + +table.dataframe { + border: none !important; + border-collapse: collapse; + border-spacing: 0; + border-color: transparent; + color: var(--sg-text-color); + font-size: 12px; + table-layout: fixed; + width: auto; +} +table.dataframe thead { + border-bottom: 1px solid var(--sg-text-color); + vertical-align: bottom; +} +table.dataframe tr, +table.dataframe th, +table.dataframe td { + text-align: right; + vertical-align: middle; + padding: 0.5em 0.5em; + line-height: normal; + white-space: normal; + max-width: none; + border: none; +} +table.dataframe th { + font-weight: bold; +} +table.dataframe tbody tr:nth-child(odd) { + background: var(--sg-tr-odd-color); +} +table.dataframe tbody tr:hover { + background: var(--sg-tr-hover-color); +} diff --git a/_static/sg_gallery-rendered-html.css b/_static/sg_gallery-rendered-html.css new file mode 100644 index 00000000..93dc2ffb --- /dev/null +++ b/_static/sg_gallery-rendered-html.css @@ -0,0 +1,224 @@ +/* Adapted from notebook/static/style/style.min.css */ +html[data-theme="light"] { + --sg-text-color: #000; + --sg-background-color: #ffffff; + 
--sg-code-background-color: #eff0f1; + --sg-tr-hover-color: rgba(66, 165, 245, 0.2); + --sg-tr-odd-color: #f5f5f5; +} +html[data-theme="dark"] { + --sg-text-color: #fff; + --sg-background-color: #121212; + --sg-code-background-color: #2f2f30; + --sg-tr-hover-color: rgba(66, 165, 245, 0.2); + --sg-tr-odd-color: #1f1f1f; +} + +.rendered_html { + color: var(--sg-text-color); + /* any extras will just be numbers: */ +} +.rendered_html em { + font-style: italic; +} +.rendered_html strong { + font-weight: bold; +} +.rendered_html u { + text-decoration: underline; +} +.rendered_html :link { + text-decoration: underline; +} +.rendered_html :visited { + text-decoration: underline; +} +.rendered_html h1 { + font-size: 185.7%; + margin: 1.08em 0 0 0; + font-weight: bold; + line-height: 1.0; +} +.rendered_html h2 { + font-size: 157.1%; + margin: 1.27em 0 0 0; + font-weight: bold; + line-height: 1.0; +} +.rendered_html h3 { + font-size: 128.6%; + margin: 1.55em 0 0 0; + font-weight: bold; + line-height: 1.0; +} +.rendered_html h4 { + font-size: 100%; + margin: 2em 0 0 0; + font-weight: bold; + line-height: 1.0; +} +.rendered_html h5 { + font-size: 100%; + margin: 2em 0 0 0; + font-weight: bold; + line-height: 1.0; + font-style: italic; +} +.rendered_html h6 { + font-size: 100%; + margin: 2em 0 0 0; + font-weight: bold; + line-height: 1.0; + font-style: italic; +} +.rendered_html h1:first-child { + margin-top: 0.538em; +} +.rendered_html h2:first-child { + margin-top: 0.636em; +} +.rendered_html h3:first-child { + margin-top: 0.777em; +} +.rendered_html h4:first-child { + margin-top: 1em; +} +.rendered_html h5:first-child { + margin-top: 1em; +} +.rendered_html h6:first-child { + margin-top: 1em; +} +.rendered_html ul:not(.list-inline), +.rendered_html ol:not(.list-inline) { + padding-left: 2em; +} +.rendered_html ul { + list-style: disc; +} +.rendered_html ul ul { + list-style: square; + margin-top: 0; +} +.rendered_html ul ul ul { + list-style: circle; +} +.rendered_html ol { + list-style: decimal; +} +.rendered_html ol ol { + list-style: upper-alpha; + margin-top: 0; +} +.rendered_html ol ol ol { + list-style: lower-alpha; +} +.rendered_html ol ol ol ol { + list-style: lower-roman; +} +.rendered_html ol ol ol ol ol { + list-style: decimal; +} +.rendered_html * + ul { + margin-top: 1em; +} +.rendered_html * + ol { + margin-top: 1em; +} +.rendered_html hr { + color: var(--sg-text-color); + background-color: var(--sg-text-color); +} +.rendered_html pre { + margin: 1em 2em; + padding: 0px; + background-color: var(--sg-background-color); +} +.rendered_html code { + background-color: var(--sg-code-background-color); +} +.rendered_html p code { + padding: 1px 5px; +} +.rendered_html pre code { + background-color: var(--sg-background-color); +} +.rendered_html pre, +.rendered_html code { + border: 0; + color: var(--sg-text-color); + font-size: 100%; +} +.rendered_html blockquote { + margin: 1em 2em; +} +.rendered_html table { + margin-left: auto; + margin-right: auto; + border: none; + border-collapse: collapse; + border-spacing: 0; + color: var(--sg-text-color); + font-size: 12px; + table-layout: fixed; +} +.rendered_html thead { + border-bottom: 1px solid var(--sg-text-color); + vertical-align: bottom; +} +.rendered_html tr, +.rendered_html th, +.rendered_html td { + text-align: right; + vertical-align: middle; + padding: 0.5em 0.5em; + line-height: normal; + white-space: normal; + max-width: none; + border: none; +} +.rendered_html th { + font-weight: bold; +} +.rendered_html tbody tr:nth-child(odd) { + 
background: var(--sg-tr-odd-color); +} +.rendered_html tbody tr:hover { + color: var(--sg-text-color); + background: var(--sg-tr-hover-color); +} +.rendered_html * + table { + margin-top: 1em; +} +.rendered_html p { + text-align: left; +} +.rendered_html * + p { + margin-top: 1em; +} +.rendered_html img { + display: block; + margin-left: auto; + margin-right: auto; +} +.rendered_html * + img { + margin-top: 1em; +} +.rendered_html img, +.rendered_html svg { + max-width: 100%; + height: auto; +} +.rendered_html img.unconfined, +.rendered_html svg.unconfined { + max-width: none; +} +.rendered_html .alert { + margin-bottom: initial; +} +.rendered_html * + .alert { + margin-top: 1em; +} +[dir="rtl"] .rendered_html p { + text-align: right; +} diff --git a/_static/sg_gallery.css b/_static/sg_gallery.css new file mode 100644 index 00000000..9bcd33c8 --- /dev/null +++ b/_static/sg_gallery.css @@ -0,0 +1,367 @@ +/* +Sphinx-Gallery has compatible CSS to fix default sphinx themes +Tested for Sphinx 1.3.1 for all themes: default, alabaster, sphinxdoc, +scrolls, agogo, traditional, nature, haiku, pyramid +Tested for Read the Docs theme 0.1.7 */ + +/* Define light colors */ +:root, html[data-theme="light"], body[data-theme="light"]{ + --sg-tooltip-foreground: black; + --sg-tooltip-background: rgba(250, 250, 250, 0.9); + --sg-tooltip-border: #ccc transparent; + --sg-thumb-box-shadow-color: #6c757d40; + --sg-thumb-hover-border: #0069d9; + --sg-script-out: #888; + --sg-script-pre: #fafae2; + --sg-pytb-foreground: #000; + --sg-pytb-background: #ffe4e4; + --sg-pytb-border-color: #f66; + --sg-download-a-background-color: #ffc; + --sg-download-a-background-image: linear-gradient(to bottom, #ffc, #d5d57e); + --sg-download-a-border-color: 1px solid #c2c22d; + --sg-download-a-color: #000; + --sg-download-a-hover-background-color: #d5d57e; + --sg-download-a-hover-box-shadow-1: rgba(255, 255, 255, 0.1); + --sg-download-a-hover-box-shadow-2: rgba(0, 0, 0, 0.25); +} +@media(prefers-color-scheme: light) { + :root[data-theme="auto"], html[data-theme="auto"], body[data-theme="auto"] { + --sg-tooltip-foreground: black; + --sg-tooltip-background: rgba(250, 250, 250, 0.9); + --sg-tooltip-border: #ccc transparent; + --sg-thumb-box-shadow-color: #6c757d40; + --sg-thumb-hover-border: #0069d9; + --sg-script-out: #888; + --sg-script-pre: #fafae2; + --sg-pytb-foreground: #000; + --sg-pytb-background: #ffe4e4; + --sg-pytb-border-color: #f66; + --sg-download-a-background-color: #ffc; + --sg-download-a-background-image: linear-gradient(to bottom, #ffc, #d5d57e); + --sg-download-a-border-color: 1px solid #c2c22d; + --sg-download-a-color: #000; + --sg-download-a-hover-background-color: #d5d57e; + --sg-download-a-hover-box-shadow-1: rgba(255, 255, 255, 0.1); + --sg-download-a-hover-box-shadow-2: rgba(0, 0, 0, 0.25); + } +} + +html[data-theme="dark"], body[data-theme="dark"] { + --sg-tooltip-foreground: white; + --sg-tooltip-background: rgba(10, 10, 10, 0.9); + --sg-tooltip-border: #333 transparent; + --sg-thumb-box-shadow-color: #79848d40; + --sg-thumb-hover-border: #003975; + --sg-script-out: rgb(179, 179, 179); + --sg-script-pre: #2e2e22; + --sg-pytb-foreground: #fff; + --sg-pytb-background: #1b1717; + --sg-pytb-border-color: #622; + --sg-download-a-background-color: #443; + --sg-download-a-background-image: linear-gradient(to bottom, #443, #221); + --sg-download-a-border-color: 1px solid #3a3a0d; + --sg-download-a-color: #fff; + --sg-download-a-hover-background-color: #616135; + --sg-download-a-hover-box-shadow-1: rgba(0, 0, 0, 
0.1); + --sg-download-a-hover-box-shadow-2: rgba(255, 255, 255, 0.25); +} +@media(prefers-color-scheme: dark){ + html[data-theme="auto"], body[data-theme="auto"] { + --sg-tooltip-foreground: white; + --sg-tooltip-background: rgba(10, 10, 10, 0.9); + --sg-tooltip-border: #333 transparent; + --sg-thumb-box-shadow-color: #79848d40; + --sg-thumb-hover-border: #003975; + --sg-script-out: rgb(179, 179, 179); + --sg-script-pre: #2e2e22; + --sg-pytb-foreground: #fff; + --sg-pytb-background: #1b1717; + --sg-pytb-border-color: #622; + --sg-download-a-background-color: #443; + --sg-download-a-background-image: linear-gradient(to bottom, #443, #221); + --sg-download-a-border-color: 1px solid #3a3a0d; + --sg-download-a-color: #fff; + --sg-download-a-hover-background-color: #616135; + --sg-download-a-hover-box-shadow-1: rgba(0, 0, 0, 0.1); + --sg-download-a-hover-box-shadow-2: rgba(255, 255, 255, 0.25); + } +} + +.sphx-glr-thumbnails { + width: 100%; + margin: 0px 0px 20px 0px; + + /* align thumbnails on a grid */ + justify-content: space-between; + display: grid; + /* each grid column should be at least 160px (this will determine + the actual number of columns) and then take as much of the + remaining width as possible */ + grid-template-columns: repeat(auto-fill, minmax(160px, 1fr)); + gap: 15px; +} +.sphx-glr-thumbnails .toctree-wrapper { + /* hide empty toctree divs added to the DOM + by sphinx even though the toctree is hidden + (they would fill grid places with empty divs) */ + display: none; +} +.sphx-glr-thumbcontainer { + background: transparent; + -moz-border-radius: 5px; + -webkit-border-radius: 5px; + border-radius: 5px; + box-shadow: 0 0 10px var(--sg-thumb-box-shadow-color); + + /* useful to absolutely position link in div */ + position: relative; + + /* thumbnail width should include padding and borders + and take all available space */ + box-sizing: border-box; + width: 100%; + padding: 10px; + border: 1px solid transparent; + + /* align content in thumbnail */ + display: flex; + flex-direction: column; + align-items: center; + gap: 7px; +} +.sphx-glr-thumbcontainer p { + position: absolute; + top: 0; + left: 0; +} +.sphx-glr-thumbcontainer p, +.sphx-glr-thumbcontainer p a { + /* link should cover the whole thumbnail div */ + width: 100%; + height: 100%; +} +.sphx-glr-thumbcontainer p a span { + /* text within link should be masked + (we are just interested in the href) */ + display: none; +} +.sphx-glr-thumbcontainer:hover { + border: 1px solid; + border-color: var(--sg-thumb-hover-border); + cursor: pointer; +} +.sphx-glr-thumbcontainer a.internal { + bottom: 0; + display: block; + left: 0; + box-sizing: border-box; + padding: 150px 10px 0; + position: absolute; + right: 0; + top: 0; +} +/* Next one is to avoid Sphinx traditional theme to cover all the +thumbnail with its default link Background color */ +.sphx-glr-thumbcontainer a.internal:hover { + background-color: transparent; +} + +.sphx-glr-thumbcontainer p { + margin: 0 0 0.1em 0; +} +.sphx-glr-thumbcontainer .figure { + margin: 10px; + width: 160px; +} +.sphx-glr-thumbcontainer img { + display: inline; + max-height: 112px; + max-width: 160px; +} + +.sphx-glr-thumbcontainer[tooltip]::before { + content: ""; + position: absolute; + pointer-events: none; + top: 0; + left: 0; + width: 100%; + height: 100%; + z-index: 97; + background-color: var(--sg-tooltip-background); + backdrop-filter: blur(3px); + opacity: 0; + transition: opacity 0.3s; +} + +.sphx-glr-thumbcontainer[tooltip]:hover::before { + opacity: 1; +} + 
+.sphx-glr-thumbcontainer[tooltip]:hover::after { + -webkit-border-radius: 4px; + -moz-border-radius: 4px; + border-radius: 4px; + color: var(--sg-tooltip-foreground); + content: attr(tooltip); + padding: 10px 10px 5px; + z-index: 98; + width: 100%; + max-height: 100%; + position: absolute; + pointer-events: none; + top: 0; + box-sizing: border-box; + overflow: hidden; + display: -webkit-box; + -webkit-box-orient: vertical; + -webkit-line-clamp: 6; +} + +.sphx-glr-script-out { + color: var(--sg-script-out); + display: flex; + gap: 0.5em; +} +.sphx-glr-script-out::before { + content: "Out:"; + /* These numbers come from the pre style in the pydata sphinx theme. This + * turns out to match perfectly on the rtd theme, but be a bit too low for + * the pydata sphinx theme. As I could not find a dimension to use that was + * scaled the same way, I just picked one option that worked pretty close for + * both. */ + line-height: 1.4; + padding-top: 10px; +} +.sphx-glr-script-out .highlight { + background-color: transparent; + /* These options make the div expand... */ + flex-grow: 1; + /* ... but also keep it from overflowing its flex container. */ + overflow: auto; +} +.sphx-glr-script-out .highlight pre { + background-color: var(--sg-script-pre); + border: 0; + max-height: 30em; + overflow: auto; + padding-left: 1ex; + /* This margin is necessary in the pydata sphinx theme because pre has a box + * shadow which would be clipped by the overflow:auto in the parent div + * above. */ + margin: 2px; + word-break: break-word; +} +.sphx-glr-script-out + p { + margin-top: 1.8em; +} +blockquote.sphx-glr-script-out { + margin-left: 0pt; +} +.sphx-glr-script-out.highlight-pytb .highlight pre { + color: var(--sg-pytb-foreground); + background-color: var(--sg-pytb-background); + border: 1px solid var(--sg-pytb-border-color); + margin-top: 10px; + padding: 7px; +} + +div.sphx-glr-footer { + text-align: center; +} + +div.sphx-glr-download { + margin: 1em auto; + vertical-align: middle; +} + +div.sphx-glr-download a { + background-color: var(--sg-download-a-background-color); + background-image: var(--sg-download-a-background-image); + border-radius: 4px; + border: 1px solid var(--sg-download-a-border-color); + color: var(--sg-download-a-color); + display: inline-block; + font-weight: bold; + padding: 1ex; + text-align: center; +} + +div.sphx-glr-download code.download { + display: inline-block; + white-space: normal; + word-break: normal; + overflow-wrap: break-word; + /* border and background are given by the enclosing 'a' */ + border: none; + background: none; +} + +div.sphx-glr-download a:hover { + box-shadow: inset 0 1px 0 var(--sg-download-a-hover-box-shadow-1), 0 1px 5px var(--sg-download-a-hover-box-shadow-2); + text-decoration: none; + background-image: none; + background-color: var(--sg-download-a-hover-background-color); +} + +div.sphx-glr-sidebar-item img { + max-height: 20px; +} + +.sphx-glr-example-title:target::before { + display: block; + content: ""; + margin-top: -50px; + height: 50px; + visibility: hidden; +} + +ul.sphx-glr-horizontal { + list-style: none; + padding: 0; +} +ul.sphx-glr-horizontal li { + display: inline; +} +ul.sphx-glr-horizontal img { + height: auto !important; +} + +.sphx-glr-single-img { + margin: auto; + display: block; + max-width: 100%; +} + +.sphx-glr-multi-img { + max-width: 42%; + height: auto; +} + +div.sphx-glr-animation { + margin: auto; + display: block; + max-width: 100%; +} +div.sphx-glr-animation .animation { + display: block; +} + +p.sphx-glr-signature 
a.reference.external { + -moz-border-radius: 5px; + -webkit-border-radius: 5px; + border-radius: 5px; + padding: 3px; + font-size: 75%; + text-align: right; + margin-left: auto; + display: table; +} + +.sphx-glr-clear { + clear: both; +} + +a.sphx-glr-backref-instance { + text-decoration: none; +} diff --git a/_static/sphinx_highlight.js b/_static/sphinx_highlight.js new file mode 100644 index 00000000..8a96c69a --- /dev/null +++ b/_static/sphinx_highlight.js @@ -0,0 +1,154 @@ +/* Highlighting utilities for Sphinx HTML documentation. */ +"use strict"; + +const SPHINX_HIGHLIGHT_ENABLED = true + +/** + * highlight a given string on a node by wrapping it in + * span elements with the given class name. + */ +const _highlight = (node, addItems, text, className) => { + if (node.nodeType === Node.TEXT_NODE) { + const val = node.nodeValue; + const parent = node.parentNode; + const pos = val.toLowerCase().indexOf(text); + if ( + pos >= 0 && + !parent.classList.contains(className) && + !parent.classList.contains("nohighlight") + ) { + let span; + + const closestNode = parent.closest("body, svg, foreignObject"); + const isInSVG = closestNode && closestNode.matches("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.classList.add(className); + } + + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + const rest = document.createTextNode(val.substr(pos + text.length)); + parent.insertBefore( + span, + parent.insertBefore( + rest, + node.nextSibling + ) + ); + node.nodeValue = val.substr(0, pos); + /* There may be more occurrences of search term in this node. So call this + * function recursively on the remaining fragment. + */ + _highlight(rest, addItems, text, className); + + if (isInSVG) { + const rect = document.createElementNS( + "http://www.w3.org/2000/svg", + "rect" + ); + const bbox = parent.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute("class", className); + addItems.push({ parent: parent, target: rect }); + } + } + } else if (node.matches && !node.matches("button, select, textarea")) { + node.childNodes.forEach((el) => _highlight(el, addItems, text, className)); + } +}; +const _highlightText = (thisNode, text, className) => { + let addItems = []; + _highlight(thisNode, addItems, text, className); + addItems.forEach((obj) => + obj.parent.insertAdjacentElement("beforebegin", obj.target) + ); +}; + +/** + * Small JavaScript module for the documentation. + */ +const SphinxHighlight = { + + /** + * highlight the search words provided in localstorage in the text + */ + highlightSearchWords: () => { + if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight + + // get and clear terms from localstorage + const url = new URL(window.location); + const highlight = + localStorage.getItem("sphinx_highlight_terms") + || url.searchParams.get("highlight") + || ""; + localStorage.removeItem("sphinx_highlight_terms") + url.searchParams.delete("highlight"); + window.history.replaceState({}, "", url); + + // get individual terms from highlight string + const terms = highlight.toLowerCase().split(/\s+/).filter(x => x); + if (terms.length === 0) return; // nothing to do + + // There should never be more than one element matching "div.body" + const divBody = document.querySelectorAll("div.body"); + const body = divBody.length ? 
divBody[0] : document.querySelector("body"); + window.setTimeout(() => { + terms.forEach((term) => _highlightText(body, term, "highlighted")); + }, 10); + + const searchBox = document.getElementById("searchbox"); + if (searchBox === null) return; + searchBox.appendChild( + document + .createRange() + .createContextualFragment( + '" + ) + ); + }, + + /** + * helper function to hide the search marks again + */ + hideSearchWords: () => { + document + .querySelectorAll("#searchbox .highlight-link") + .forEach((el) => el.remove()); + document + .querySelectorAll("span.highlighted") + .forEach((el) => el.classList.remove("highlighted")); + localStorage.removeItem("sphinx_highlight_terms") + }, + + initEscapeListener: () => { + // only install a listener if it is really needed + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return; + if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) { + SphinxHighlight.hideSearchWords(); + event.preventDefault(); + } + }); + }, +}; + +_ready(() => { + /* Do not call highlightSearchWords() when we are on the search page. + * It will highlight words from the *previous* search query. + */ + if (typeof Search === "undefined") SphinxHighlight.highlightSearchWords(); + SphinxHighlight.initEscapeListener(); +}); diff --git a/concepts/catalogs.html b/concepts/catalogs.html new file mode 100644 index 00000000..881bef90 --- /dev/null +++ b/concepts/catalogs.html @@ -0,0 +1,573 @@ + + + + + + + + + Catalogs — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Catalogs

+

PyCSEP provides routines for working with and manipulating earthquake catalogs for the purposes of evaluating earthquake +forecasting models.

+

If you find these tools useful for other purposes, please let us know. We are especially interested in adding basic catalog statistics to this package. If you are interested in helping implement routines like b-value estimation and catalog completeness, that would be much appreciated.

+ +
+

Introduction

+
+

PyCSEP catalog basics

+

An earthquake catalog contains a collection of seismic events each defined by a set of attributes. PyCSEP implements +a simple event description that is suitable for evaluating earthquake forecasts. In this format, every seismic event is +defined by its location in space (longitude, latitude, and depth), magnitude, and origin time. In addition, each event +can have an optional event_id as a unique identifier.

+

PyCSEP provides csep.core.catalogs.CSEPCatalog to represent an earthquake catalog. The essential event data are stored in a +structured NumPy array with the following data type.

+
dtype = numpy.dtype([('id', 'S256'),
+                     ('origin_time', '<i8'),
+                     ('latitude', '<f4'),
+                     ('longitude', '<f4'),
+                     ('depth', '<f4'),
+                     ('magnitude', '<f4')])
+
+
+

Additional information can be associated with an event, keyed by the id field in the structured array, through a class member called metadata. The essential event data must be complete, meaning that each event should have all of these attributes defined. The metadata is more freeform: PyCSEP imposes no restrictions on how event metadata is stored, only that the metadata for an event be accessible using the event id. An example of this could be

+
catalog = csep.load_catalog('catalog_file.csv')
+event_metadata = catalog.metadata[event_id]
+
+
+

This would load a catalog stored in the PyCSEP .csv format. PyCSEP contains catalog readers for the following formats (we are also looking to support other catalog formats, so please suggest some, or better yet, help us write the readers!):

+
  1. CSEP ascii format
  2. NDK format (used by the gCMT catalog)
  3. INGV gCMT catalog
  4. ZMAP format
  5. pre-processed JMA format
+

PyCSEP makes it easy to define a custom reader function for a catalog format that we don't currently support. If you happen to implement a reader for a new catalog format, please check out the contribution guidelines and make a pull request so we can include it in the next release.

+
+
+

Catalog as Pandas dataframes

+

You might be comfortable using Pandas dataframes to manipulate tabular data. PyCSEP provides some routines for accessing catalogs as a pandas.DataFrame. You can use df = catalog.to_dataframe(with_datetimes=True) to return the DataFrame representation of the catalog, and catalog = CSEPCatalog.from_dataframe(df) to convert back to the PyCSEP data model.

+
+

Note

+

Going between a DataFrame and CSEPCatalog is a lossy transformation. It retains only the essential event attributes that are defined by the dtype of the class.

+
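A minimal round-trip sketch (the catalog file path is illustrative):

import csep
+from csep.core.catalogs import CSEPCatalog
+
+catalog = csep.load_catalog('catalog_file.csv')
+
+# DataFrame representation of the catalog, including datetime columns
+df = catalog.to_dataframe(with_datetimes=True)
+
+# ... manipulate the DataFrame with pandas as needed ...
+
+# convert back; only the essential event attributes survive the round trip
+catalog = CSEPCatalog.from_dataframe(df)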
+
+
+
+

Loading catalogs

+
+

Load catalogs from files

+

You can easily load catalogs in the supported formats above using csep.load_catalog(). This top-level function can load any catalog currently supported by PyCSEP. You must specify the type of the catalog and the format in which you want it to be loaded. The type of the catalog can be:

+
catalog_type = ('ucerf3', 'csep-csv', 'zmap', 'jma-csv', 'ndk')
+catalog_format = ('csep', 'native')
+
+
+

The catalog type determines which reader from csep.utils.readers will be used to load the file. The default is the csep-csv type and the native format. The jma-csv format can be created using the ./bin/deck2csv.pl Perl script.

+
+

Note

+

The format is important for ucerf3 catalogs, because those are stored as big endian binary numbers by default. +If you are working with ucerf3-etas catalogs and would like to convert them into the CSEPCatalog format you can +use the format='csep' option when loading in a catalog or catalogs.

+
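For example, a sketch of both the default load and the ucerf3 conversion described above (the file names are hypothetical):

import csep
+
+# default: type='csep-csv' and format='native'
+catalog = csep.load_catalog('my_catalog.csv')
+
+# load a ucerf3 catalog and convert it into the CSEPCatalog format
+catalog = csep.load_catalog('ucerf3_catalogs.bin', type='ucerf3', format='csep')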
+
+
+

Load catalogs from ComCat

+

PyCSEP provides top-level functions to load catalogs using ComCat. We incorporated the work done by Mike Hearne and others at the U.S. Geological Survey into PyCSEP in an effort to reduce the dependencies of this project. Top-level access to ComCat catalogs is provided by csep.query_comcat(), and some lower-level functionality can be accessed through the csep.utils.comcat module. All credit for this code goes to the U.S. Geological Survey.

+

Here is a complete example of accessing the ComCat catalog.

+
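A minimal sketch (the dates and magnitude threshold are illustrative):

import csep
+from csep.utils import time_utils
+
+start_time = time_utils.strptime_to_utc_datetime('2019-01-01 00:00:00.0')
+end_time = time_utils.strptime_to_utc_datetime('2020-01-01 00:00:00.0')
+catalog = csep.query_comcat(start_time, end_time, min_magnitude=2.5)
+print(catalog)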
+
+

Writing custom loader functions

+

You can easily add custom loader functions to import data formats that are not currently included with the PyCSEP tools. +Both csep.core.catalogs.CSEPCatalog.load_catalog() and csep.load_catalog() support an optional argument +called loader to support these custom data formats.

+

In the simplest form the function should have the following stub:

+
def my_custom_loader_function(filename):
+    """ Custom loader function for catalog data.
+
+    Args:
+        filename (str): path to the file containing the catalog data
+
+    Returns:
+        eventlist: iterable of event data with the order:
+                (event_id, origin_time, latitude, longitude, depth, magnitude)
+    """
+
+    # imagine there is some logic to read in data from filename
+
+    return eventlist
+
+
+

This function can then be passed to csep.load_catalog() or CSEPCatalog.load_catalog +with the loader keyword argument. The function should be passed as a first-class object like this:

+
import csep
+my_custom_catalog = csep.load_catalog(filename, loader=my_custom_loader_function)
+
+
+
+

Note

+

The origin_time is actually an integer time. We recommend parsing the timing information as a datetime.datetime object and using the datetime_to_utc_epoch function to convert it to an integer time.

+
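For instance (a sketch; the timestamp is illustrative):

from datetime import datetime, timezone
+
+from csep.utils.time_utils import datetime_to_utc_epoch
+
+origin_dt = datetime(2019, 7, 6, 3, 19, 53, tzinfo=timezone.utc)
+origin_time = datetime_to_utc_epoch(origin_dt)  # integer epoch time for the event tuple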
+

Notice that we did not actually call the function; we just passed it as a reference. Loader functions can also access web-based catalogs, as we do with the csep.query_comcat() function. Such a function doesn't work with either csep.load_catalog() or CSEPCatalog.load_catalog, because these are intended for file-based catalogs. Instead, we can create the catalog object directly. We would do that like this

+
def my_custom_web_loader(...):
+    """ Accesses catalog from online data source.
+
+    There are no requirements on the arguments if you are creating the catalog directly from the class.
+
+    Returns:
+        eventlist: iterable of event data with the order:
+            (event_id, origin_time, latitude, longitude, depth, magnitude)
+    """
+
+    # custom logic to access online catalog repository
+
+    return eventlist
+
+
+

As you might notice, all loader functions are required to return an event-list. This event-list must be iterable and +contain the required event data.

+
+

Note

+

The events in the eventlist should follow the form

+
eventlist = my_custom_loader_function(...)
+
+event = eventlist[0]
+
+event[0] = event_id
+# see note above about using integer times
+event[1] = origin_time
+event[2] = latitude
+event[3] = longitude
+event[4] = depth
+event[5] = magnitude
+
+
+
+

Once you have a function that returns an eventlist, you can create the catalog object directly. The following uses csep.core.catalogs.CSEPCatalog as an example.

+
import csep
+
+eventlist = my_custom_web_loader(...)
+catalog = csep.catalogs.CSEPCatalog(data=eventlist, **kwargs)
+
+
+

The **kwargs represent any other keyword arguments that can be passed to CSEPCatalog, such as the catalog_id or a CartesianGrid2D region.

+
+
+

Including custom event metadata

+

Catalogs can include additional metadata associated with each event. Right now, there are no direct applications for +event metadata. Nonetheless, it can be included with a catalog object.

+

The event metadata should be a dictionary whose keys are the event_ids of the individual events. For example,

+
event_id = 'my_dummy_id'
+metadata_dict = catalog.metadata[event_id]
+
+
+

Each event's metadata should be a JSON-serializable dictionary, or a class that implements the to_dict() and from_dict() methods. This is required to properly save catalog files into JSON format and to verify whether two catalogs are the same. You can see the to_dict and from_dict methods for an example of how these would work.

+
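A short sketch of attaching metadata (the keys and values below are purely illustrative; any JSON-serializable dictionary works):

event_id = 'my_dummy_id'
+catalog.metadata[event_id] = {'agency': 'ComCat', 'reviewed': True}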
+
+
+

Accessing Event Information

+

In order to utilize the low-level acceleration from NumPy, most catalog operations are vectorized. The catalog classes provide some getter methods to access the essential catalog data. These return numpy.ndarray arrays with the dtype defined by the class.

+

The following functions return numpy.ndarrays of the catalog information.

CSEPCatalog.event_count: Number of events in catalog
CSEPCatalog.get_magnitudes(): Returns magnitudes of all events in catalog
CSEPCatalog.get_longitudes(): Returns longitudes of all events in catalog
CSEPCatalog.get_latitudes(): Returns latitudes of all events in catalog
CSEPCatalog.get_depths(): Returns depths of all events in catalog
CSEPCatalog.get_epoch_times(): Returns the datetime of the event as the UTC epoch time (aka unix timestamp)
CSEPCatalog.get_datetimes(): Returns datetime object from timestamp representation in catalog
CSEPCatalog.get_cumulative_number_of_events(): Returns the cumulative number of events in the catalog.
+
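For instance (a sketch, assuming a catalog has already been loaded):

magnitudes = catalog.get_magnitudes()  # numpy.ndarray of event magnitudes
+depths = catalog.get_depths()          # numpy.ndarray of event depths
+print(catalog.event_count, magnitudes.max())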

The catalog data can be iterated through event-by-event using a standard for-loop. For example, we can do something +like

+
for event in catalog.data:
+    print(
+        event['id'],
+        event['origin_time'],
+        event['latitude'],
+        event['longitude'],
+        event['depth'],
+        event['magnitude']
+    )
+
+
+

The keywords for the event tuple are defined by the dtype of the class. The keywords for CSEPCatalog are shown in the snippet directly above. For example, a quick and dirty plot of the cumulative events over time can be made using the matplotlib.pyplot interface

+
import csep
+import matplotlib.pyplot as plt
+
+# lets assume we already loaded in some catalog
+catalog = csep.load_catalog("my_catalog_path.csv")
+
+# quick and dirty plot
+fig, ax = plt.subplots()
+ax.plot(catalog.get_epoch_times(), catalog.get_cumulative_number_of_events())
+plt.show()
+
+
+
+
+

Filtering events

+

Most catalog files (or catalogs accessed via the web) contain more events than are desired for a given use case. PyCSEP provides a few routines to help filter unwanted events out of the catalog, listed below.

CSEPCatalog.filter([statements, in_place]): Filters the catalog based on statements.
CSEPCatalog.filter_spatial([region, ...]): Removes events outside of the region.
CSEPCatalog.apply_mct(m_main, event_epoch[, mc]): Applies time-dependent magnitude of completeness following a mainshock.
+
+

Filtering events by attribute

+

The function CSEPCatalog.filter provides the ability to filter events based on their essential attributes. This function works by parsing filtering strings and applying them using a logical and operation. The filter strings have the following format: filter_string = f"{attribute} {operator} {value}". Each filter string represents a statement that would evaluate as True after it is applied. For example, the statement catalog.filter('magnitude >= 2.5') would retain all events in the catalog with magnitude greater than or equal to 2.5.

+

The attributes are determined by the dtype of the catalog; therefore you can filter based on origin_time, latitude, longitude, depth, and magnitude. Additionally, you can use the attribute datetime and provide a datetime.datetime object to filter events by date and time.

+

The filter function can accept a string or a list of filter statements. If the function is called without any arguments, it looks to use the catalog.filters member, which can be provided during class instantiation or bound to the class afterward. Here is a complete example of how to filter a catalog using the filtering strings; see also the sketch below.

+
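For example (a sketch; the file name and thresholds are illustrative):

import csep
+
+catalog = csep.load_catalog('my_catalog_path.csv')
+
+# statements are combined with a logical AND; an event is kept only if every statement is True
+catalog = catalog.filter(['magnitude >= 2.5', 'depth < 70.0'])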
+
+

Filtering events in space

+

You might want to supply a non-rectangular polygon that can be used to filter events in space. This is commonly done to prepare an observed catalog for forecast evaluation. Right now, this can be accomplished by supplying a region to the catalog or to filter_spatial. There will be more information about using regions in the user-guide page. The catalog filtering tutorial contains a complete example of how to filter a catalog using a user-defined aftershock region based on the M7.1 Ridgecrest mainshock.

+
+
+

Time-dependent magnitude of completeness

+

Seismic networks have difficulty recording events immediately after a large event occurs, because the passing seismic waves +from the larger event become mixed with any potential smaller events. Usually when we evaluate an aftershock forecast, we should +account for this time-dependent magnitude of completeness. PyCSEP provides the +Helmstetter et al., [2006] implementation of the time-dependent magnitude completeness model.

+

This requires information about the mainshock, which can be supplied directly to apply_mct. Additionally, PyCSEP provides access to the ComCat API using get_event_by_id. An example of this can be seen in the catalog filtering tutorial and in the sketch below.

+
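A sketch of both steps (the ComCat event id below is assumed to be the 2019 M7.1 Ridgecrest mainshock, and the magnitude and time attributes of the returned event object are assumptions here):

from csep.utils.comcat import get_event_by_id
+from csep.utils.time_utils import datetime_to_utc_epoch
+
+mainshock = get_event_by_id('ci38457511')  # assumed ComCat id for the M7.1 Ridgecrest mainshock
+catalog = catalog.apply_mct(mainshock.magnitude, datetime_to_utc_epoch(mainshock.time))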
+
+
+

Binning Events

+

Another common task requires binning earthquakes by their spatial locations and magnitudes. This is routinely done when +evaluating earthquake forecasts. Like filtering a catalog in space, you need to provide some information about the region +that will be used for the binning. Please see the user-guide page for more information about +regions.

+
+

Note

+

We would like to make this functionality more user friendly. If you have suggestions or struggles, please open an issue +on the GitHub page and we’d be happy to incorporate these ideas into the toolkit.

+
+

The following functions allow binning of catalogs using space-magnitude regions.

CSEPCatalog.spatial_counts(): Returns counts of events within discrete spatial region
CSEPCatalog.magnitude_counts([mag_bins, ...]): Computes the count of events within mag_bins
CSEPCatalog.spatial_magnitude_counts([...]): Return counts of events in space-magnitude region.
+

These functions return numpy.ndarrays containing the count of the events determined from the +catalogs. This example shows how to obtain magnitude counts from a catalog. +The index of the ndarray corresponds to the index of the associated space-magnitude region. For example,

+
import csep
+import numpy
+
+catalog = csep.load_catalog("my_catalog_file")
+
+# returns bin edges [2.5, 2.6, ... , 7.5]
+bin_edges = numpy.arange(2.5, 7.55, 0.1)
+
+magnitude_counts = catalog.magnitude_counts(mag_bins=bin_edges)
+
+
+

In this example, magnitude_counts[0] is the number of events with 2.5 ≤ M < 2.6. All of the magnitude binning assumes +that the final bin extends to infinity, therefore magnitude_counts[-1] contains the number of events with +7.5 ≤ M < ∞.

+
+
+ + + + \ No newline at end of file diff --git a/concepts/evaluations.html b/concepts/evaluations.html new file mode 100644 index 00000000..80b859cf --- /dev/null +++ b/concepts/evaluations.html @@ -0,0 +1,483 @@ + + + + + + + + + Evaluations — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Evaluations

+

PyCSEP provides routines to evaluate both gridded and catalog-based earthquake forecasts. This page explains how to use +the forecast evaluation routines and also how to build “mock” forecast and catalog classes to accommodate different +custom forecasts and catalogs.

+ +
+

Gridded-forecast evaluations

+

Grid-based earthquake forecasts assume earthquakes occur in discrete space-time-magnitude bins, and their rate of occurrence can be defined using a single number in each bin. Each space-time-magnitude bin is assumed to be an independent Poisson random variable. Therefore, we use likelihood-based evaluation metrics to compare these forecasts against observations.

+

PyCSEP provides two groups of evaluation metrics for grid-based earthquake forecasts. The first are known as consistency tests, and they verify whether a forecast is consistent with an observation. The second are comparative tests that can be used to compare the performance of two (or more) competing forecasts. PyCSEP implements the following evaluation routines for grid-based forecasts. These functions are intended to work with GriddedForecasts and CSEPCatalogs. Visit the catalogs reference and the forecasts reference to learn more about how to import your forecasts and catalogs into PyCSEP.

+
+

Note

+

Grid-based forecast evaluations act directly on the forecasts and catalogs as they are supplied to the function. Any filtering of catalogs and/or scaling of forecasts must be done before calling the evaluation, and should be done consistently for all forecasts that are being compared.

+
+

See the example for gridded forecast evaluation for an end-to-end walkthrough on how +to evaluate a gridded earthquake forecast.
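As a brief sketch of how these routines are called (the forecast and catalog file names are placeholders, and the catalog is assumed to be filtered beforehand):

import csep
+from csep.core import poisson_evaluations as poisson
+
+forecast = csep.load_gridded_forecast("my_forecast_file")
+catalog = csep.load_catalog("my_catalog_file")
+
+# consistency tests return EvaluationResult objects
+n_test_result = poisson.number_test(forecast, catalog)
+cl_test_result = poisson.conditional_likelihood_test(forecast, catalog)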

+
+

Consistency tests

+ + + + + + + + + + + + + + + + + + +

number_test(gridded_forecast, observed_catalog)

Computes "N-Test" on a gridded forecast.

magnitude_test(gridded_forecast, ...[, ...])

Performs the Magnitude Test on a Gridded Forecast using an observed catalog.

spatial_test(gridded_forecast, observed_catalog)

Performs the Spatial Test on the Forecast using the Observed Catalogs.

likelihood_test(gridded_forecast, ...[, ...])

Performs the likelihood test on Gridded Forecast using an Observed Catalog.

conditional_likelihood_test(...[, ...])

Performs the conditional likelihood test on Gridded Forecast using an Observed Catalog.

+
+
+

Comparative tests

+ + + + + + + + + +

paired_t_test(forecast, benchmark_forecast, ...)

Computes the t-test for gridded earthquake forecasts.

w_test(gridded_forecast1, gridded_forecast2, ...)

Calculate the Single Sample Wilcoxon signed-rank test between two gridded forecasts.

+
+
+

Publication references

+
  1. Number test (Schorlemmer et al., 2007; Zechar et al., 2010)
  2. Magnitude test (Zechar et al., 2010)
  3. Spatial test (Zechar et al., 2010)
  4. Likelihood test (Schorlemmer et al., 2007; Zechar et al., 2010)
  5. Conditional likelihood test (Werner et al., 2011)
  6. Paired t test (Rhoades et al., 2011)
  7. Wilcoxon signed-rank test (Rhoades et al., 2011)
+
+
+
+

Catalog-based forecast evaluations

+

Catalog-based forecasts are issued as a family of stochastic event sets (synthetic earthquake catalogs) and can express the full uncertainty of the forecasting model. Additionally, these forecasts retain the inter-event dependencies that are lost when using discrete space-time-magnitude grids. This loss of dependency structure can impact the evaluation of time-dependent forecasts such as the epidemic-type aftershock sequence (ETAS) model.

+

In order to support generative or simulator-based models, we define a suite of consistency tests that compare forecasted +distributions against observations without the use of a parametric likelihood function. These evaluations take advantage +of the fact that the forecast and the observations are both earthquake catalogs. Therefore, we can compute identical +statistics from these catalogs and compare them against one another.

+

We provide four statistics that probe fundamental aspects of the earthquake forecasts. Please see +Savran et al., 2020 for a complete description of the individual tests. For the implementation +details please follow the links below and see the example for catalog-based +forecast evaluation for an end-to-end walk through.
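As a sketch, calling these tests looks much like the Poisson case; here the file names are placeholders and the observed catalog is assumed to be filtered consistently with the forecast:

import csep
+from csep.core import catalog_evaluations
+
+forecast = csep.load_catalog_forecast("my_catalog_forecast_file")
+observed_catalog = csep.load_catalog("my_catalog_file")
+
+number_test_result = catalog_evaluations.number_test(forecast, observed_catalog)
+spatial_test_result = catalog_evaluations.spatial_test(forecast, observed_catalog)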

+
+

Consistency tests

+ + + + + + + + + + + + + + + + + + +

number_test(forecast, observed_catalog[, ...])

Performs the number test on a catalog-based forecast.

spatial_test(forecast, observed_catalog[, ...])

Performs spatial test for catalog-based forecasts.

magnitude_test(forecast, observed_catalog[, ...])

Performs magnitude test for catalog-based forecasts

pseudolikelihood_test(forecast, observed_catalog)

Performs the spatial pseudolikelihood test for catalog forecasts.

calibration_test(evaluation_results[, delta_1])

Perform the calibration test by computing a Kolmogorov-Smirnov test of the observed quantiles against a uniform distribution.

+
+
+

Publication reference

+
  1. Number test (Savran et al., 2020)
  2. Spatial test (Savran et al., 2020)
  3. Magnitude test (Savran et al., 2020)
  4. Pseudolikelihood test (Savran et al., 2020)
  5. Calibration test (Savran et al., 2020)
+
+
+
+

Preparing evaluation catalog

+

The evaluations in PyCSEP do not implicitly filter the observed catalogs or modify the forecast data when called. For most +cases, the observation catalog should be filtered according to:

+
+
  1. Magnitude range of the forecast
  2. Spatial region of the forecast
  3. Start and end-time of the forecast
+
+

Once the observed catalog is filtered so that it is consistent with the forecast in space, time, and magnitude, it can be used to evaluate the forecast. A single evaluation catalog can be used to evaluate multiple forecasts so long as they all cover the same space, time, and magnitude region.
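A sketch of this preparation, assuming a GriddedForecast object named forecast is already loaded and the catalog file name is a placeholder:

import csep
+from csep.utils.time_utils import datetime_to_utc_epoch
+
+catalog = csep.load_catalog("my_catalog_file")
+
+# 1. magnitude range, 2. time window, 3. spatial region of the forecast
+catalog = catalog.filter(f'magnitude >= {forecast.min_magnitude}')
+catalog = catalog.filter(f'origin_time >= {datetime_to_utc_epoch(forecast.start_time)}')
+catalog = catalog.filter(f'origin_time < {datetime_to_utc_epoch(forecast.end_time)}')
+catalog = catalog.filter_spatial(forecast.region)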

+
+
+

Building mock classes

+

Python is a duck-typed language, which means that it doesn't care what an object's type is, only that the object has the methods or attributes that are expected when it is used. This can come in handy if you want to use the evaluation methods but do not have a forecast that completely fits with the forecast classes (or catalog classes) provided by PyCSEP.

+
+

Note

+

Something about great power and great responsibility… For the most reliable results, write a loader function that +can ingest your forecast into the model provided by PyCSEP. Mock-classes can work, but should only be used in certain +circumstances. In particular, they are very useful for writing software tests or to prototype features that can +be added into the package.

+
+

This section will walk you through how to compare two forecasts using the paired_t_test +with mock forecast and catalog classes. This sounds much more complex than it really is, and it gives you the flexibility +to use your own formats and interact with the tools provided by PyCSEP.

+
+

Warning

+

The simulation-based Poisson tests (magnitude_test, likelihood_test, conditional_likelihood_test, and spatial_test) +are optimized to work with forecasts that contain equal-sized spatial bins. If your forecast uses variable sized spatial +bins you will get incorrect results. If you are working with forecasts that have variable spatial bins, create an +issue on GitHub because we’d like to implement this feature into the toolkit and we’d love your help.

+
+

If we look at the paired_t_test we see that it has the following code

+
def paired_t_test(gridded_forecast1, gridded_forecast2, observed_catalog, alpha=0.05, scale=False):
+    """ Computes the t-test for gridded earthquake forecasts.
+
+    Args:
+        gridded_forecast1 (csep.core.forecasts.GriddedForecast): nd-array storing gridded rates, axis=-1 should be the magnitude column
+        gridded_forecast2 (csep.core.forecasts.GriddedForecast): nd-array storing gridded rates, axis=-1 should be the magnitude column
+        observed_catalog (csep.core.catalogs.AbstractBaseCatalog): observed catalog used to evaluate both forecasts
+        alpha (float): tolerance level for the type-I error rate of the statistical test
+        scale (bool): if true, scale forecasted rates down to a single day
+
+    Returns:
+        evaluation_result: csep.core.evaluations.EvaluationResult
+    """
+
+    # needs some pre-processing to put the forecasts in the context that is required for the t-test. this is different
+    # for cumulative forecasts (eg, multiple time-horizons) and static file-based forecasts.
+    target_event_rate_forecast1, n_fore1 = gridded_forecast1.target_event_rates(observed_catalog, scale=scale)
+    target_event_rate_forecast2, n_fore2 = gridded_forecast2.target_event_rates(observed_catalog, scale=scale)
+
+    # call the primitive version operating on ndarrays
+    out = _t_test_ndarray(target_event_rate_forecast1, target_event_rate_forecast2, observed_catalog.event_count, n_fore1, n_fore2,
+                          alpha=alpha)
+
+    # prepare evaluation result object
+    result = EvaluationResult()
+    result.name = 'Paired T-Test'
+    result.test_distribution = (out['ig_lower'], out['ig_upper'])
+    result.observed_statistic = out['information_gain']
+    result.quantile = (out['t_statistic'], out['t_critical'])
+    result.sim_name = (gridded_forecast1.name, gridded_forecast2.name)
+    result.obs_name = observed_catalog.name
+    result.status = 'normal'
+    result.min_mw = numpy.min(gridded_forecast1.magnitudes)
+
+    return result
+
+
+

Notice that the function expects two forecast objects and one catalog object. The paired_t_test function calls a +method on the forecast objects named target_event_rates +that returns a tuple (numpy.ndarray, float) consisting of the target event rates and the expected number of events +from the forecast.

+
+

Note

+

The target event rate is the expected rate for an observed event in the observed catalog assuming that +the forecast is true. For a simple example, if we forecast a rate of 0.3 events per year in some bin of a forecast, +each event that occurs within that bin has a target event rate of 0.3 events per year. The expected number of events +in the forecast can be determined by summing over all bins in the gridded forecast.

+
+

We can also see that the paired_t_test function uses gridded_forecast1.name and calls numpy.min() on gridded_forecast1.magnitudes. Using this information, we can create a mock class that implements these methods and attributes so it can be used by this function.

+
+

Warning

+

If you are creating mock-classes to use with evaluation functions, make sure that you visit the corresponding +documentation and source-code to make sure that your methods return values that are expected by the function. In +this case, it expects the tuple (target_event_rates, expected_forecast_count). This will not always be the case. +If you need help, please create an issue on the GitHub page.

+
+

Here we show an implementation of a mock forecast class that can work with the +paired_t_test function.

+
class MockForecast:
+
+    def __init__(self, data=None, name='my-mock-forecast', magnitudes=(4.95,)):
+
+        # data is not necessary, but might be helpful for implementing target_event_rates(...)
+        self.data = data
+        self.name = name
+        # this should be an array, list, or tuple; it can be as simple as the default argument
+        self.magnitudes = magnitudes
+
+    def target_event_rates(self, catalog, scale=None):
+        """ Notice we added the dummy argument scale. This method signature should match what is called in paired_t_test """
+
+        # whatever custom logic you need to return these target event rates given your catalog can go here
+        # of course, this should work with whatever catalog you decide to pass into this function
+
+        # this returns the tuple that paired_t_test expects
+        return (ndarray_of_target_event_rates, expected_number_of_events)
+
+
+

You’ll notice that paired_t_test expects a catalog class. Looking back +at the function definition we can see that it needs observed_catalog.event_count and observed_catalog.name. Therefore +the mock class for the catalog would look something like this

+
class MockCatalog:
+
+    def __init__(self, event_count, data=None, name='my-mock-catalog'):
+
+        # this is not necessary, but adding data might be helpful for implementing the
+        # logic needed for the target_event_rates(...) function in the MockForecast class.
+        self.data = data
+        self.name = name
+        self.event_count = event_count
+
+
+

Now using these two objects you can call the paired_t_test directly +without having to modify any of the source code.

+
# create your forecasts
+mock_forecast_1 = MockForecast(some_forecast_data1)
+mock_forecast_2 = MockForecast(some_forecast_data2)
+
+# let's assume that catalog_data is an array that contains the catalog data
+catalog = MockCatalog(len(catalog_data))
+
+# call the function using your classes
+eval_result = paired_t_test(mock_forecast_1, mock_forecast_2, catalog)
+
+
+

The only requirement for this approach is that you implement the methods on the class that the calling function expects. +You can add anything else that you need in order to make those functions work properly. This example is about +as simple as it gets.

+
+

Note

+

If you want to use mock forecasts and mock catalogs for other evaluations, you can just add the additional methods that are needed onto the mock classes you have already built.

+
+
+
diff --git a/concepts/forecasts.html b/concepts/forecasts.html
new file mode 100644
index 00000000..18d24a64

Forecasts

+

pyCSEP supports two types of earthquake forecasts that can be evaluated using the tools provided in this package.

+
  1. Grid-based forecasts
  2. Catalog-based forecasts
+

These forecast types and the pyCSEP objects used to represent them will be explained in detail in this document.

+ +
+

Gridded forecasts

+

Grid-based forecasts assume that earthquakes occur in independent and discrete space-time-magnitude bins. The occurrence +of these earthquakes are described only by their expected rates. This forecast format provides a general representation +of seismicity that can accommodate forecasts without explicit likelihood functions, such as those created using smoothed +seismicity models. Gridded forecasts can also be produced using simulation-based approaches like +epidemic-type aftershock sequence models.

+

Currently, pyCSEP offers support for two types of grid-based forecasts: conventional gridded forecasts and quadtree-based gridded forecasts. Conventional grid-based forecasts define their spatial component using a 2D Cartesian (rectangular) grid, and their magnitude bins using a 1D Cartesian (rectangular) grid. The last (largest) magnitude bin is assumed to continue until infinity. Forecasts use latitude and longitude to define the bin edges of the spatial grid. Typical values are 0.1° x 0.1° (lat x lon) and 0.1 ΔMw units. These choices are not strictly enforced and can be defined according to the specifications of an experiment.

+

pyCSEP also offers support for handling forecasts that use a quadtree approach. A single- or multi-resolution spatial grid can be generated based on the modeler's choice, and that grid can then be used to generate an earthquake forecast.

+
+

Working with conventional gridded forecasts

+

PyCSEP provides the GriddedForecast class for working with grid-based forecasts. Please visit this example for an end-to-end tutorial on how to evaluate a grid-based earthquake forecast.

+ + + + + + +

csep.core.forecasts.GriddedForecast([...])

Class to represent grid-based forecasts

+
+

Default file format

+

The default file format of a gridded-forecast is a tab delimited ASCII file with the following columns +(names are not included):

+
LON_0       LON_1   LAT_0   LAT_1   DEPTH_0 DEPTH_1 MAG_0   MAG_1   RATE                                    FLAG
+-125.4      -125.3  40.1    40.2    0.0     30.0    4.95    5.05    5.8499099999999998e-04  1
+
+
+

Each row represents a single space-magnitude bin and the entire forecast file contains the rate for a specified +time-horizon. An example of a gridded forecast for the RELM testing region can be found +here.

+

The coordinates (LON, LAT, DEPTH, MAG) describe the independent space-magnitude region of the forecast. The lower +coordinates are inclusive and the upper coordinates are exclusive. Rates are incremental within the magnitude range +defined by [MAG_0, MAG_1). The FLAG is a legacy value from CSEP testing centers that indicates whether a spatial cell should +be considered by the forecast. Currently, the implementation does not allow for individual space-magnitude cells to be +flagged. Thus, if a spatial cell is flagged then all corresponding magnitude cells are flagged.

+
+

Note

+

PyCSEP only supports regions that have a thickness of one layer. In the future, we plan to support more complex regions, including those that are defined using multiple depth regions. Multiple depth layers can be collapsed into a single layer by summing. This operation does reduce the resolution of the forecast.

+
+
+
+

Custom file format

+

The GriddedForecast.from_custom method allows you to provide +a function that can read custom formats. This can be helpful, because writing this function might be required to convert +the forecast into the appropriate format in the first place. This function has no requirements except that it returns the +expected data.

+
+
+classmethod GriddedForecast.from_custom(func, func_args=(), **kwargs)[source]
+

Creates MarkedGriddedDataSet class from custom parsing function.

+
+
Parameters:
+
    +
  • func (callable) – function will be called as func(*func_args).

  • +
  • func_args (tuple) – arguments to pass to func

  • +
  • **kwargs – keyword arguments to pass to the GriddedForecast class constructor.

  • +
+
+
Returns:
+

forecast object

+
+
Return type:
+

csep.core.forecasts.GriddedForecast

+
+
+
+

Note

+

The loader function func needs to return a tuple that contains (data, region, magnitudes). data is a +numpy.ndarray, region is a CartesianGrid2D, and +magnitudes are a numpy.ndarray consisting of the magnitude bin edges. See the function +load_ascii for an example.
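For instance, a loader for a hypothetical custom format might look like the sketch below; the parsing is stubbed out, and the file name, origins, and array shapes are illustrative:

import numpy
+from csep.core.forecasts import GriddedForecast
+from csep.core.regions import CartesianGrid2D
+
+def load_my_format(filename):
+    """ Hypothetical loader; must return the tuple (data, region, magnitudes). """
+    # real code would parse `filename`; here we fabricate a tiny region
+    origins = numpy.array([[-125.4, 40.1], [-125.3, 40.1]])
+    region = CartesianGrid2D.from_origins(origins, dh=0.1)
+    magnitudes = numpy.arange(4.95, 8.95, 0.1)  # magnitude bin edges
+    data = numpy.zeros((region.num_nodes, len(magnitudes)))  # rates per cell/bin
+    return data, region, magnitudes
+
+forecast = GriddedForecast.from_custom(load_my_format,
+                                       func_args=("my_forecast_file",),
+                                       name="my-custom-forecast")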

+
+
+ +
+
+
+

Working with quadtree-gridded forecasts

+

The same GriddedForecast class also handles forecasts with quadtree grids. Please visit this example for an end-to-end tutorial on how to evaluate a grid-based earthquake forecast.

+ + + + + + +

csep.core.forecasts.GriddedForecast([...])

Class to represent grid-based forecasts

+
+

Default file format

+

The default file format of a quadtree gridded forecast is also a tab-delimited ASCII file, with just one additional column, i.e., the quadkey that identifies each spatial cell. If the quadkey of each spatial cell is known, it is enough to compute the lon/lat bounds. However, the lon/lat bounds are still kept in the default format to make it consistent with the conventional forecast format.

+

(names are not included):

+
QUADKEY     LON_0   LON_1   LAT_0   LAT_1   DEPTH_0 DEPTH_1 MAG_0   MAG_1   RATE                                    FLAG
+'01001'     -125.4  -125.3  40.1    40.2    0.0     30.0    4.95    5.05    5.8499099999999998e-04  1
+
+
+

Each row represents a single space-magnitude bin and the entire forecast file contains the rate for a specified +time-horizon.

+

The coordinates (LON, LAT, DEPTH, MAG) describe the independent space-magnitude region of the forecast. The lower +coordinates are inclusive and the upper coordinates are exclusive. Rates are incremental within the magnitude range +defined by [MAG_0, MAG_1). The FLAG is a legacy value from CSEP testing centers that indicates whether a spatial cell should +be considered by the forecast. Please note that flagged functionality is not yet included for quadtree-gridded forecasts.

+

PyCSEP offers the load_quadtree_forecast function to read a quadtree forecast in the default format. Similarly, custom forecasts can be defined and read into pyCSEP as explained for conventional gridded forecasts.

+
+
+
+
+

Catalog-based forecasts

+

Catalog-based earthquake forecasts are issued as collections of synthetic earthquake catalogs. Every synthetic catalog represents a realization of the forecast that is representative of the uncertainty present in the model that generated the forecast. Unlike grid-based forecasts, catalog-based forecasts retain the space-magnitude dependency of the events they are trying to model. A grid-based forecast can be easily computed from a catalog-based forecast by assuming a space-magnitude region and counting events within each bin from each catalog in the forecast. There can be issues with undersampling, especially for larger magnitude events.

+
+

Working with catalog-based forecasts

+ + + + + + +

csep.core.forecasts.CatalogForecast([...])

Catalog based forecast defined as a family of stochastic event sets.

+

Please visit this example for an end-to-end tutorial on how to evaluate a catalog-based earthquake forecast. An example of a catalog-based forecast stored in the default pyCSEP format can be found here.

+

The standard format for catalog-based forecasts is a comma-separated value (CSV) ASCII format. This format was chosen to be human-readable and easy to implement in all programming languages. Information about the format is shown below.

+
+

Note

+

Custom formats can be supported by writing a custom function or sub-classing the +AbstractBaseCatalog.

+
+

The event format matches the following specification:

+
LON, LAT, MAG, ORIGIN_TIME, DEPTH, CATALOG_ID, EVENT_ID
+-125.4, 40.1, 3.96, 1992-01-05T0:40:3.1, 8, 0, 0
+
+
+

Each row in the catalog corresponds to an event. The catalogs are expected to be placed into the same file and are differentiated through their catalog_id. Catalogs with no events can be handled in a couple of different ways intended to save storage.

+

The events within a catalog should be sorted in time, and the catalog_id should be increasing sequentially. Breaks in +the catalog_id are interpreted as missing catalogs.

+

The following two examples show how you represent a forecast with 5 catalogs each containing zero events.

+

1. Including all events (verbose)

+
LON, LAT, MAG, ORIGIN_TIME, DEPTH, CATALOG_ID, EVENT_ID
+,,,,,0,
+,,,,,1,
+,,,,,2,
+,,,,,3,
+,,,,,4,
+
+
+

2. Short-hand

+
LON, LAT, MAG, ORIGIN_TIME, DEPTH, CATALOG_ID, EVENT_ID
+,,,,,4,
+
+
+

The following two examples show how you could represent a forecast with 5 catalogs, where four of the catalogs contain zero events and one catalog contains one event.

+

3. Including all events (verbose)

+
LON, LAT, MAG, ORIGIN_TIME, DEPTH, CATALOG_ID, EVENT_ID
+,,,,,0,
+,,,,,1,
+,,,,,2,
+,,,,,3,
+-125.4, 40.1, 3.96, 1992-01-05T0:40:3.1, 8, 4, 0
+
+
+

4. Short-hand

+
LON, LAT, MAG, ORIGIN_TIME, DEPTH, CATALOG_ID, EVENT_ID
+-125.4, 40.1, 3.96, 1992-01-05T0:40:3.1, 8, 4, 0
+
+
+

The simplest way to orient the file follows (3) in the case where some catalogs contain zero events. The zero-oriented catalog_id should be assigned to correspond with the total number of catalogs in the forecast. In the case where every catalog contains zero forecasted events, you would specify the forecast using (2), with the catalog_id assigned to correspond with the total number of catalogs in the forecast.
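Once written in this format, a catalog-based forecast can be loaded and iterated over; a minimal sketch (the file name is a placeholder):

import csep
+
+forecast = csep.load_catalog_forecast("my_catalog_forecast_file")
+
+# iterating over the forecast yields one catalog per stochastic event set
+for catalog in forecast:
+    print(catalog.event_count)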

+
+
+
diff --git a/concepts/plots.html b/concepts/plots.html
new file mode 100644
index 00000000..5c1bd057

Plots

+

PyCSEP provides several functions to produce commonly used plots, such as an earthquake forecast, an evaluation catalog, or a combination of the two.

+ +
+

Introduction

+
+
+

Plot arguments

+
+
+

Available plots

+
+
diff --git a/concepts/regions.html b/concepts/regions.html
new file mode 100644
index 00000000..4792fa44

Regions

+

PyCSEP includes commonly used CSEP testing regions and classes that facilitate working with gridded data sets. This +module is early in development and will be a focus of future development.

+ +

Practically speaking, earthquake forecasts, especially time-dependent forecasts, treat time differently than space and +magnitude. If we consider a family of monthly forecasts for the state of California for earthquakes with M 3.95+, +each of these forecasts would use the same space-magnitude region, even though the time periods are +different. Because the time horizon is an implicit property of the forecast, we do not explicitly consider time in the region +objects provided by pyCSEP. This module contains tools for working with gridded regions in both space and magnitude.

+

First, we will describe how the spatial regions are handled, followed by the magnitude regions and how these two aspects interact with one another.

+

Currently, pyCSEP provides two different kinds of spatial gridding approaches for binning catalogs and defining regions for earthquake forecast evaluations, i.e., CartesianGrid2D and QuadtreeGrid2D. Further details about the spatial grids are given below.

+
+

Cartesian grid

+

This section contains information about using 2D cartesian grids.

+ + + + + + +

CartesianGrid2D(polygons, dh[, name, mask])

Represents a 2D cartesian gridded region.

+
+

Note

+

We are planning to do some improvements to this module and to expand its capabilities. For example, we would like to +handle non-regular grids such as a quad-tree. Also, a single Polygon should be able to act as the spatial component +of the region. These additions will make this toolkit more useful for crafting bespoke experiments and for general +catalog analysis. Feature requests are always welcome!

+
+

The CartesianGrid2D acts as a data structure that can associate a spatial location (e.g., lon and lat) with its corresponding spatial bin. This class is optimized to work with regular grids, although they do not have to be complete (they can have holes) and they do not have to be rectangular (each row / column can have a different starting coordinate).

+

The CartesianGrid2D maintains a list of +Polygon objects that represent the individual spatial bins from the overall +region. The origin of each polygon is considered to be the lower-left corner (the minimum latitude and minimum longitude).

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

CartesianGrid2D.num_nodes

Number of polygons in region

CartesianGrid2D.get_index_of(lons, lats)

Returns the index of lons, lats in self.polygons

CartesianGrid2D.get_location_of(indices)

Returns the polygon associated with the index idx.

CartesianGrid2D.get_masked(lons, lats)

Returns a bool array indicating which lons and lats are not included in the spatial region.

CartesianGrid2D.get_cartesian(data)

Returns a 2D ndarray representation of the data set, corresponding to the bounding box.

CartesianGrid2D.get_bbox()

Returns rectangular bounding box around region.

CartesianGrid2D.midpoints()

Returns midpoints of rectangular polygons in region

CartesianGrid2D.origins()

Returns origins of rectangular polygons in region

CartesianGrid2D.from_origins(origins[, dh, ...])

Creates instance of class from 2d numpy.array of lon/lat origins.

+
+

Creating spatial regions

+

Here, we describe how the class works starting with the class constructors.

+
@classmethod
+def from_origins(cls, origins, dh=None, magnitudes=None, name=None):
+    """ Convenience function to create CartesianGrid2D from list of polygon origins """
+
+
+

For most applications, using the from_origins function will be +the easiest way to create a new spatial region. The method accepts a 2D numpy.ndarray containing the x (lon) and y (lat) +origins of the spatial bin polygons. These should be the complete set of origins. The function will attempt to compute the +grid spacing by comparing the x and y values between adjacent origins. If this does not seem like a reliable approach +for your region, you can explicitly provide the grid spacing (dh) to this method.

+

When a CartesianGrid2D is created the following steps occur:

+
+
  1. Compute the bounding box containing all polygons (2D array)
  2. Create a map between the index of the 2D bounding box and the list of polygons of the region.
  3. Store a boolean flag indicating whether a given cell in the 2D array is valid or not
+
+

Once these mappings have been created, we can associate an arbitrary (lon, lat) point with a spatial cell using the mapping defined in (2). The get_index_of method accepts a list of longitudes and latitudes and returns the index of the polygon each is associated with. For instance, this index can now be used to access a data value stored in another data structure.
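A short sketch of this workflow, with illustrative origin values:

import numpy
+from csep.core.regions import CartesianGrid2D
+
+# four 0.1 deg cells defined by their lower-left (lon, lat) origins
+origins = numpy.array([[-125.4, 40.1], [-125.3, 40.1],
+                       [-125.4, 40.2], [-125.3, 40.2]])
+region = CartesianGrid2D.from_origins(origins, dh=0.1)
+
+# index of the cell containing an arbitrary point
+idx = region.get_index_of([-125.35], [40.15])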

+
+
+

Testing Regions

+

CSEP has defined testing regions that can be used for earthquake forecasting experiments. The following functions in the csep.core.regions module return a CartesianGrid2D consistent with these regions.

+ + + + + + + + + + + + +

california_relm_region([dh_scale, ...])

Returns class representing California testing region.

italy_csep_region([dh_scale, magnitudes, ...])

Returns class representing Italian testing region.

global_region([dh, name, magnitudes])

Creates a global region used for evaluating gridded forecasts on the global scale.

+
+
+

Region Utilities

+

PyCSEP also provides some utilities that can facilitate working with regions. As we expand this module, we will include +functions to accommodate different use-cases.

+ + + + + + + + + + + + + + + + + + + + + +

magnitude_bins(start_magnitude, ...)

Returns array holding magnitude bin edges.

create_space_magnitude_region(region, magnitudes)

Simple wrapper to create space-magnitude region

parse_csep_template(xml_filename)

Reads CSEP XML template file and returns the lat/lon values for the forecast.

increase_grid_resolution(points, dh, factor)

Takes a set of origin points and returns a new set with higher grid resolution.

masked_region(region, polygon)

Build a new region based off the coordinates in the polygon.

generate_aftershock_region(mainshock_mw, ...)

Creates a spatial region around a given epicenter

+
+
+
+

Quadtree grid

+

We want to use gridded regions with fewer spatial cells and multi-resolution grids for creating earthquake forecast models, and we also want to test forecast models at different resolutions. Before we can do this, we need the capability to acquire such grids. There are different possible options for creating multi-resolution grids, such as Voronoi cells or coarse grids. The gridding approach needs to have certain properties before we choose it for CSEP experiments: it should be simple to implement, easy to understand, and come with intuitive indexing. Most importantly, it should come with a coordinated mechanism for changing between different resolutions. This means that one cannot simply combine arbitrary cells into a larger (low-resolution) grid cell, or vice versa, since doing so can make the grid comparison process difficult; there must be a specific, well-defined strategy to change between different grid resolutions. We explored different gridding approaches and found the quadtree to be a better solution for this task, despite a few drawbacks, such as the fact that the quadtree does not cover the global region beyond 85.05 degrees North and South.

+

The quadtree is a hierarchical tiling strategy for storing and indexing geospatial data. At the start, the global testing region is divided into 4 tiles, identified as '0', '1', '2', '3'. Each tile can then be divided into four children tiles, until a final desired grid is acquired. Each tile is identified by a unique identifier called a quadkey; when a tile is divided further, the quadkey of each child is formed by appending the new identifier ('0', '1', '2' or '3') to the parent's quadkey. Once a grid is acquired, we call each tile a grid cell. The number of times a tile has been divided is referred to as the zoom level (L) and equals the length of its quadkey. If a grid has the same zoom level for every tile, it is referred to as a single-resolution grid.

+

A single-resolution grid acquired at zoom level L=5 provides 1024 spatial cells covering the whole globe. Increasing L by one increases the number of grid cells by a factor of four; at L=11, the grid contains approximately 4.2 million cells.

+

We can use the quadtree in combination with any data to create a multi-resolution grid in which the resolution is determined by the input data. In general, the quadtree can be used in combination with any type of input data; for now, we provide support for an earthquake catalog to be used as the input data that determines the grid resolution. With time, we intend to incorporate support for other types of data, such as distance from the mainshock or rupture plane.

+

Currently, for generating multi-resolution grids, we can choose two criteria to decide the resolution, i.e., the maximum number of earthquakes allowed per cell (Nmax) and the maximum zoom level (L) allowed for a cell. This means that only those cells (tiles) that contain more earthquakes than Nmax will be divided further into sub-cells, and cells will not be divided beyond L even if the number of earthquakes exceeds Nmax. Thus, the quadtree can provide high-resolution (smaller) grid cells in seismically active regions and low-resolution (bigger) grid cells in seismically quiet regions. It offers earthquake forecast modellers the liberty of choosing a suitable spatial grid.

+

This section contains information about using quadtree based grid.

+ + + + + + +

QuadtreeGrid2D(polygons, quadkeys, bounds[, ...])

Represents a 2D quadtree gridded region.

+

The QuadtreeGrid2D acts as a data structure that can associate a spatial location, identified by a quadkey (or lon and lat), with its corresponding spatial bin. The class allows the user to create a quadtree grid using three different methods, and it also offers conversion from a quadtree cell to its lon/lat bounds.

+

The QuadtreeGrid2D maintains a list of +Polygon objects that represent the individual spatial bins from the overall +region. The origin of each polygon is considered to be the lower-left corner (the minimum latitude and minimum longitude).

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

QuadtreeGrid2D.num_nodes

Number of polygons in region

QuadtreeGrid2D.get_cell_area()

Calls function geographical_area_from_bounds and computes area of each grid cell.

QuadtreeGrid2D.get_index_of(lons, lats)

Returns the index of lons, lats in self.polygons

QuadtreeGrid2D.get_location_of(indices)

Returns the polygon associated with the index idx.

QuadtreeGrid2D.get_bbox()

Returns rectangular bounding box around region.

QuadtreeGrid2D.midpoints()

Returns midpoints of rectangular polygons in region

QuadtreeGrid2D.origins()

Returns origins of rectangular polygons in region

QuadtreeGrid2D.save_quadtree(filename)

Saves the quadtree grid (quadkeys) in a text file

QuadtreeGrid2D.from_catalog(catalog, threshold)

Creates instance of class from 2d numpy.array of lon/lat of Catalog.

QuadtreeGrid2D.from_single_resolution(zoom)

Creates instance of class at single-resolution using provided zoom-level.

QuadtreeGrid2D.from_quadkeys(quadk[, ...])

Creates instance of class from available quadtree grid.

+
+

Creating spatial regions

+

Here, we describe how the class works, starting with the class constructors and how users can create different types of regions.

+
+

Multi-resolution grid based on earthquake catalog

+

Read a global earthquake catalog in CSEPCatalog format and use it to generate a multi-resolution quadtree-based grid:

+
@classmethod
+def from_catalog(cls, catalog, threshold, zoom=11, magnitudes=None, name=None):
+    """ Convenience function to create a multi-resolution grid using earthquake catalog """
+
+
+
+
+

Single-resolution grid

+

Generate a single-resolution grid with the same zoom level everywhere. This grid does not require a catalog; it only needs the zoom level to determine the resolution of the grid:

+
@classmethod
+def from_single_resolution(cls, zoom, magnitudes=None, name=None):
+    """ Convenience function to create a single-resolution grid """
+
+
+
+
+

Grid loading from already created quadkeys

+

An already saved quadtree grid can also be loaded into pyCSEP. Read the quadkeys and use the following function to instantiate the class:

+
@classmethod
+def from_quadkeys(cls, quadk, magnitudes=None, name=None):
+    """ Convenience function to create a grid using already generated quadkeys """
+
+
+

When a QuadtreeGrid2D is created the following steps occur:

+
+
  1. Compute the bounding box containing all polygons (2D array) corresponding to quadkeys
  2. Create a map between the index of the 2D bounding box and the list of polygons of the region.
+
+

Once these mappings have been created, we can associate an arbitrary (lon, lat) point with a spatial cell using the mapping defined in (2). The get_index_of method accepts a list of longitudes and latitudes and returns the index of the polygon each is associated with. For instance, this index can now be used to access a data value stored in another data structure.

+
+
+
+

Testing Regions

+

CSEP has defined testing regions that can be used for earthquake forecasting experiments. The above-mentioned functions are used to create quadtree grids for the global testing region. However, a quadtree-gridded region can be acquired for any geographical area and used for forecast generation and testing. For example, we have created a quadtree-gridded region at a fixed zoom level of 12 for the California RELM testing region.

+ + + + + + +

california_quadtree_region([magnitudes, name])

Returns object of QuadtreeGrid2D representing quadtree grid for California RELM testing region.

+
+
+
diff --git a/genindex.html b/genindex.html
new file mode 100644
index 00000000..d26b452a
diff --git a/getting_started/core_concepts.html b/getting_started/core_concepts.html
new file mode 100644
index 00000000..04ff7a1d

Core Concepts for Beginners

+

If you are reading this documentation, there is a good chance that you are developing/evaluating an earthquake forecast or +implementing an experiment at a CSEP testing center. This section will help you understand how we conceptualize forecasts, +evaluations, and earthquake catalogs. These components make up the majority of the PyCSEP package. We also include some +prewritten visualizations along with some utilities that might be useful in your work.

+
+

Catalogs

+

Earthquake catalogs are fundamental to both forecasts and evaluations and make up a core component of the PyCSEP package. +At some point you will be working with catalogs if you are evaluating earthquake forecasts.

+

One major difference between PyCSEP and a project like ObsPy is that typical ‘CSEP’ calculations +operate on an entire catalog at once to perform methods like filtering and binning that are required to evaluate an earthquake +forecast. We provide earthquake catalog classes that follow the interface defined by +AbstractBaseCatalog.

+

The catalog data are stored internally as a structured Numpy array +which effectively treats events contiguously in memory like a c-style struct. This allows us to accelerate calculations +using the vectorized operations provided by Numpy. The necessary attributes for an event to be used +in an evaluation are the spatial location (lat, lon), magnitude, and origin time. Additionally, depth and other identifying +characteristics can be used. The default storage format for +an earthquake catalog is an ASCII/utf-8 text file with events stored in CSV format.
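For example, a minimal sketch of working with these vectorized accessors (the file name is a placeholder):

import csep
+
+catalog = csep.load_catalog("my_catalog_file")
+
+# accessors return numpy arrays computed over the whole catalog at once
+magnitudes = catalog.get_magnitudes()
+longitudes = catalog.get_longitudes()
+latitudes = catalog.get_latitudes()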

+

The AbstractBaseCatalog can be extended to accommodate different catalog formats +or input and output routines. For example UCERF3Catalog extends this class to deal +with the big-endian storage routine from the UCERF3-ETAS forecasting model. More +information will be included in the Catalogs section of the documentation.

+
+
+

Forecasts

+

PyCSEP provides objects for interacting with earthquake forecasts. It supports two types of earthquake forecasts and provides separate objects for interacting with each. The forecasts share similar characteristics, but, conceptually, they should be treated differently because they require different types of evaluations.

+

Both time-independent and time-dependent forecasts are represented using the same PyCSEP forecast objects. Typically, for +time-dependent forecasts, one would create separate forecast objects for each time period. As the name suggests, +time-independent forecasts do not change with time.

+
+

Grid-based forecast

+

Grid-based earthquake forecasts are specified by the expected rate of earthquakes within discrete, independent space-time-magnitude bins. Within each bin, the expected rate represents the parameter of a Poisson distribution. For details about the forecast objects, visit the Forecasts section of the documentation.

+

The forecast object contains three main components: (1) the expected earthquake rates, (2) the spatial region associated with the rates, and (3) the magnitude range associated with the expected rates. The spatial bins are usually discretized according to the geographical coordinates latitude and longitude, with most previous CSEP spatial regions defining a spatial size of 0.1° x 0.1°. Magnitude bins are discretized similarly, with 0.1 magnitude units being a standard choice. PyCSEP does not enforce constraints on the bin sizes for space and magnitude, but the discretization must be regular.

+
+
+

Catalog-based forecast

+

Catalog-based forecasts are specified by families of synthetic earthquake catalogs that are generated through simulation by probabilistic models. Each catalog represents a stochastic representation of seismicity consistent with the forecasting model. Probabilistic statements are made by computing statistics within the family of synthetic catalogs, which can be as simple as counting the number of events in each catalog. These statistics represent the full distribution of outcomes specified by the forecasting models, thereby allowing for more direct assessments of the models that produce them.

+

Within PyCSEP catalog forecasts are effectively lists of earthquake catalogs, no different than those obtained from +authoritative sources. Thus, any operation that can be performed on an observed earthquake catalog can be performed on a +synthetic catalog from a catalog-based forecast.

+

It can be useful to count the numbers of forecasted earthquakes within discrete space-time bins (like those used for +grid-based forecasts). Therefore, it’s common to have a spatial region and +set of magnitude bins associated with a forecast. Again, the only rules that PyCSEP enforces are that the space-magnitude +regions are regularly discretized.

+
+
+
+

Evaluations

+

PyCSEP provides implementations of statistical tests used to evaluate both grid-based and catalog-based earthquake forecasts. +The former use parametric evaluations based on Poisson likelihood functions, while the latter use so-called ‘likelihood-free’ +evaluations that are computed from empirical distributions provided by the forecasts. Details on the specific implementation +of the evaluations will be provided in the Evaluations section.

+

Every evaluation can be different, but in general, the evaluations need the following information:

+
  1. Earthquake forecast(s)
     • Spatial region
     • Magnitude range
  2. Authoritative earthquake catalog
+

PyCSEP does not produce earthquake forecasts, but provides the ability to represent them using internal data models to +facilitate their evaluation. General advice on how to administer the statistical tests will be provided in the +Evaluations section.

+
+
diff --git a/getting_started/installing.html b/getting_started/installing.html
new file mode 100644
index 00000000..aeec0261

Installing pyCSEP

+

We are working on a conda-forge recipe and PyPI distribution. +If you plan on contributing to this package, visit the +contribution guidelines for installation instructions.

+
+

Note

+

This package requires Python 3.9 or later.

+
+

The easiest way to install PyCSEP is using conda. It can also be installed using pip or built from source.

+
+

Using Conda

+

For most users, you can use

+
conda install --channel conda-forge pycsep
+
+
+
+
+

Using Pip

+

Before this installation will work, you must first install the following system dependencies. The remaining dependencies +should be installed by the installation script. To help manage dependency issues, we recommend using virtual environments +like virtualenv.

+
+
  • Python 3.9 or later (https://python.org)
  • NumPy 1.21.3 or later (https://numpy.org): Python package for scientific computing and numerical calculations.
  • SciPy 1.7.1 or later (https://scipy.org): Python package that extends NumPy tools.
  • Pandas 1.3.4 or later (https://pandas.pydata.org): Python package for data analysis and manipulation.
  • Cartopy 0.22.0 or later (https://scitools.org.uk/cartopy/): Python package for geospatial data processing.
+

Example for Ubuntu and MacOS:

+
git clone https://github.com/sceccode/pycsep
+pip install --upgrade pip
+pip install -e .
+
+
+
+
+

Installing from Source

+

Use this approach if you want the most up-to-date code. This creates an editable installation that can be synced with +the latest GitHub commit.

+

We recommend using virtual environments when installing python packages from source to avoid dependency conflicts. We prefer conda as the package manager over pip, because conda does a good job of handling binary distributions of packages across multiple platforms. Also, we recommend the miniconda or miniforge installers (the latter uses mamba for faster dependency handling), because they are lightweight and only include necessary packages like pip and zlib.

+
+

Using Conda

+

If you don’t have conda on your machine, download and install Miniconda or Miniforge

+
git clone https://github.com/SCECcode/pycsep
+cd pycsep
+conda env create -f requirements.yml
+conda activate csep-dev
+# Installs in editor mode with all dependencies
+pip install -e .
+
+
+

Note: If you want to go back to your default environment use the command conda deactivate.

+
+
+

Using Pip / Virtualenv

+

We highly recommend using Conda, because this tool helps manage binary dependencies of Python packages. If you must use Virtualenv, follow these instructions:

+
git clone https://github.com/SCECcode/pycsep
+cd pycsep
+python -m virtualenv venv
+source venv/bin/activate
+# Installs in editor mode with all dependencies
+pip install -e .[all]
+
+
+

Note: If you want to go back to your default environment use the command deactivate.

+
+
+
+

Developers Installation

+

This shows you how to install a copy of the repository that you can use to create Pull Requests and sync with the upstream +repository. First, fork the repo on GitHub. It will now live at https://github.com/<YOUR_GITHUB_USERNAME>/pycsep. +We recommend using conda to install the development environment.

+
git clone https://github.com/<YOUR_GITHUB_USERNAME>/pycsep.git
+cd pycsep
+conda env create -f requirements.yml
+conda activate csep-dev
+pip install -e .[all]
+# Allow sync with default repository
+git remote add upstream https://github.com/SCECCode/pycsep.git
+
+
+

This ensures a clean installation of pyCSEP along with the required developer dependencies (e.g., pytest, sphinx). Now you can pull from upstream using git pull upstream master to keep your copy of the repository in sync with the latest commits.

+
+
diff --git a/getting_started/theory.html b/getting_started/theory.html
new file mode 100644
index 00000000..b5d9c979

Theory of CSEP Tests

+

This page describes the theory of each of the forecast tests +included in pyCSEP along with working code examples. You will find +information on the goals of each test, the theory behind the tests, how +the tests are applied in practice, and how forecasts are ‘scored’ given +the test results. Also, we include the code required to run in each test +and a description of how to interpret the test results.

+
import csep
+from csep.core import (
+    regions,
+    catalog_evaluations,
+    poisson_evaluations as poisson
+)
+from csep.utils import (
+    datasets,
+    time_utils,
+    comcat,
+    plots,
+    readers
+)
+
+# Filters matplotlib warnings
+import warnings
+warnings.filterwarnings('ignore')
+
+
+
+

Grid-based Forecast Tests

+

These tests are designed for grid-based forecasts (e.g., Schorlemmer et al., 2007), where expected rates are provided in discrete Poisson space-magnitude cells covering the region of interest. The region \(\boldsymbol{R}\) is then the product of the magnitude bins \(\boldsymbol{M}\) and the spatial bins \(\boldsymbol{S}\),

+
+\[\boldsymbol{R} = \boldsymbol{M} \times \boldsymbol{S}.\]
+

A forecast \(\boldsymbol{\Lambda}\) can be fully specified as the +expected number of events (or rate) in each space-magnitude bin +(\(m_i, s_j\)) covering the region \(\boldsymbol{R}\) and +therefore can be written as

+
+\[\boldsymbol{\Lambda} = \{ \lambda_{m_i, s_j}| m_i \in \boldsymbol{M}, s_j \in \boldsymbol{S} \},\]
+

where \(\lambda_{m_i, s_j}\) is the expected rate of events in +magnitude bin \(m_i\) and spatial bin \(s_j\). The observed +catalogue of events \(\boldsymbol{\Omega}\) we use to evaluate the +forecast is similarly discretised into the same space-magnitude bins, +and can be described as

+
+\[\boldsymbol{\Omega} = \{ \omega_{m_i, s_j}| m_i \in \boldsymbol{M}, s_j \in \boldsymbol{S} \},\]
+

where \(\omega_{m_i, s_j}\) is the observed number of +events in spatial cell \(s_j\) and magnitude bin \(m_i\). The +magnitude bins are specified in the forecast: typically these are in 0.1 +increments and this is the case in the examples we use here. These +examples use the Helmstetter et al (2007) smoothed seismicity forecast +(including aftershocks), testing over a 5 year period between 2010 and +2015.

+
# Set up experiment parameters
+start_date = time_utils.strptime_to_utc_datetime('2010-01-01 00:00:00.0')
+end_date = time_utils.strptime_to_utc_datetime('2015-01-01 00:00:00.0')
+
+# Loads from the PyCSEP package
+helmstetter = csep.load_gridded_forecast(
+    datasets.helmstetter_aftershock_fname,
+    start_date=start_date,
+    end_date=end_date,
+    name='helmstetter_aftershock'
+)
+
+# Set up evaluation catalog
+catalog = csep.query_comcat(helmstetter.start_time, helmstetter.end_time,
+                            min_magnitude=helmstetter.min_magnitude)
+
+# Filter evaluation catalog
+catalog = catalog.filter_spatial(helmstetter.region)
+
+# Add seed for reproducibility in simulations
+seed = 123456
+
+# Number of simulations for Poisson consistency tests
+nsim = 100000
+
+
+
Fetched ComCat catalog in 5.9399449825286865 seconds.
+
+Downloaded catalog from ComCat with following parameters
+Start Date: 2010-01-10 00:27:39.320000+00:00
+End Date: 2014-08-24 10:20:44.070000+00:00
+Min Latitude: 31.9788333 and Max Latitude: 41.1431667
+Min Longitude: -125.3308333 and Max Longitude: -115.0481667
+Min Magnitude: 4.96
+Found 24 events in the ComCat catalog.
+
+
+
+

Consistency tests

+

The consistency tests evaluate the consistency of a forecast against +observed earthquakes. These tests were developed across a range of +experiments and publications (Schorlemmer et al, 2007; Zechar et al +2010; Werner et al, 2011a). The consistency tests are based on the +likelihood of observing the catalogue (actual recorded events) given the +forecast. Since the space-magnitude bins are assumed to be independent, +the joint-likelihood of observing the events in each individual bin +given the specified forecast can be written as

+
+\[Pr(\omega_1 | \lambda_1) Pr(\omega_2 | \lambda_2)...Pr(\omega_n | \lambda_n) = \prod_{m_i , s_j \in \boldsymbol{R}} f_{m_i, s_j}(\omega(m_i, s_j)),\]
+

where \(f_{m_i, s_j}\) specifies the probability distribution in +each space-magnitude bin. We prefer to use the joint log-likelihood in +order to sum log-likelihoods rather than multiply the likelihoods. The +joint log-likelihood can be written as:

+
+\[L(\boldsymbol{\Omega} | \boldsymbol{\Lambda}) = \sum_{m_i , s_j \in \boldsymbol{R}} \log(f_{m_i, s_j}(\omega(m_i, s_j))).\]
+

The likelihood of the observations, \(\boldsymbol{\Omega}\), given +the forecast \(\boldsymbol{\Lambda}\) is the sum over all +space-magnitude bins of the log probabilities in individual cells of the +forecast. Grid-based forecasts are specified by the expected number of +events in a discrete space-magnitude bin. From the maximum entropy +principle, we assign a Poisson distribution in each bin. In this case, +the probability of an event occurring is independent of the time since +the last event, and events occur at a rate \(\lambda\). The +Poissonian joint log-likelihood can be written as

+
+\[L(\boldsymbol{\Omega} | \boldsymbol{\Lambda}) = \sum_{m_i , s_j \in \boldsymbol{R}} -\lambda(m_i, s_j) + \omega(m_i, s_j)\log(\lambda(m_i, s_j)) - \log(\omega(m_i, s_j)!),\]
+

where \(\lambda(m_i, s_j)\) and \(\omega(m_i, s_j)\) are the +expected counts from the forecast and observed counts in cell +\(m_i, s_j\) respectively. We can calculate the likelihood directly +given the forecast and discretised observations.

+

Forecast uncertainty

+

A simulation-based approach is used to account for uncertainty in the forecast. We simulate realizations of catalogs that are consistent with the forecast to obtain distributions of scores. In the pyCSEP package, as in the original CSEP tests, simulation is carried out using the cumulative probability density of the forecast obtained by ordering the rates in each bin. We shall call \(F_{m_i, s_j}\) the cumulative probability density in cell \((m_i, s_j)\). The simulation approach then works as follows:

+
  • For each simulated event, draw a random number \(z\) from a uniform distribution between 0 and 1
  • Assign this event to a space-magnitude bin through the inverse cumulative density distribution at this point, \(F^{-1}_{m_i, s_j}(z)\)
  • Iterate over all simulated events to generate a catalog containing \(N_{sim}\) events consistent with the forecast
+

For each of these tests, we can plot the distribution of likelihoods computed from these simulated catalogs relative to the observations using the plots.plot_poisson_consistency_test function. We also calculate a quantile score to diagnose how a particular forecast performs with respect to the observations. The number of simulations can be supplied to the Poisson consistency test functions using the num_simulations argument; for best results we suggest 100,000 simulations to ensure convergence.
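The core of this simulation step can be sketched with plain numpy; the rates below are toy values:

import numpy
+
+rates = numpy.array([0.5, 1.5, 1.0, 2.0])        # expected counts per bin
+cdf = numpy.cumsum(rates) / rates.sum()          # cumulative probability F
+n_sim = 4                                        # toy number of simulated events
+z = numpy.random.uniform(size=n_sim)             # one uniform draw per event
+bin_index = numpy.searchsorted(cdf, z)           # inverse CDF: F^{-1}(z)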

+

Scoring the tests

+

Through simulation (as described above), we obtain a set of simulated +catalogs \(\{\hat{\boldsymbol{\Omega}}\}\). Each catalogue can be +written as

+
+\[\hat{\boldsymbol{\Omega}}_x =\{ \hat{\lambda}_x(m_i, s_j)|(m_i, s_j) \in \boldsymbol{R}\},\]
+

where \(\hat{\lambda}_x(m_i, s_j)\) is the number of +simulated earthquakes in cell \((m_i, s_j)\) of (simulated) catalog +\(x\) that is consistent with the forecast \(\Lambda\). We then +compute the joint log-likelihood for each simulated catalogue +\(\hat{L}_x = L(\hat{\Omega}_x|\Lambda)\). The joint log-likelihood +for each simulated catalogue given the forecast gives us a set of +log-likelihoods \(\{\hat{\boldsymbol{L}}\}\) that represents the +range of log-likelihoods consistent with the forecast. We then compare +our simulated log-likelihoods with the observed log-likelihood +\(L_{obs} = L(\boldsymbol{\Omega}|\boldsymbol{\Lambda})\) using a +quantile score.

+

The quantile score is defined by the fraction of simulated joint +log-likelihoods less than or equal to the observed likelihood.

+
+\[\gamma = \frac{ |\{ \hat{L}_x | \hat{L}_x \le L_{obs}\} |}{|\{ \hat{\boldsymbol{L}} \}|}\]
+

Whether a forecast can be said to pass an evaluation depends on the significance level chosen for the testing process. The quantile score explicitly tells us something about the significance of the result: the observation is consistent with the forecast with \(100(1-\gamma)\%\) confidence (Zechar, 2011). Low \(\gamma\) values demonstrate that the observed likelihood score is less than that of most of the simulated catalogs. The consistency tests, excluding the N-test, are considered one-sided tests: values which are too small are ruled inconsistent with the forecast, but very large values may not necessarily be inconsistent with the forecast, and additional testing should be used to clarify this (Schorlemmer et al., 2007).

+

Different CSEP experiments have used different sensitivity values. Schorlemmer et al. (2010b) consider \(\gamma < 0.05\) while the implementation in the Italian CSEP testing experiment uses \(\gamma < 0.01\) (Taroni et al., 2018). However, the consistency tests are most useful as diagnostic tools, where the quantile score assesses the level of consistency between the observations and the model. Temporal variations in seismicity make it difficult to formally reject a model from a consistency test over a single evaluation period.
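As a minimal sketch, the quantile score is simply an empirical CDF evaluated at the observed score (hypothetical values shown):

import numpy as np

ll_sim = np.array([-21.3, -18.9, -25.4, -19.7, -22.8])  # simulated joint log-likelihoods
ll_obs = -24.1                                          # observed joint log-likelihood
gamma = np.mean(ll_sim <= ll_obs)                       # fraction of simulations <= observed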

+
+

Likelihood-test (L-test)

+

Aim: Evaluate the likelihood of observed events given the provided +forecast - this includes the rate, spatial distribution and magnitude +components of the forecast.

+

Method: The L-test is one of the original forecast tests described in Schorlemmer et al. (2007). The likelihood of the observation given the model is described by a Poisson likelihood function in each cell, and the total joint likelihood is described by the product over all bins, or the sum of the log-likelihoods (see above, or Zechar (2011) for more details).

+

Note: The likelihood scores are dominated by the rate component of the forecast. This causes issues in scoring forecasts where the expected number of events is different from the observed number of events. We suggest using the N-test (below) and CL-test (below) independently to score the rate component and the spatial-magnitude components of the forecast. This behavior can be observed by comparing the CL-test and N-test results with the L-test results in this notebook. Since the forecast overpredicts the rate of events during this testing period, the L-test provides a passing score even though the space-magnitude and rate components perform poorly during this evaluation period.

+

pyCSEP implementation

+

pyCSEP uses the forecast and catalog and returns the test distribution, +observed statistic and quantile score, which can be accessed from the +likelihood_test_result object. We can pass this directly to the +plotting function, specifying that the test should be one-sided.

+
likelihood_test_result = poisson.likelihood_test(
+    helmstetter,
+    catalog,
+    seed=seed,
+    num_simulations=nsim
+)
+ax = plots.plot_poisson_consistency_test(
+    likelihood_test_result,
+    one_sided_lower=True,
+    plot_args={'title': r'$\mathcal{L}-\mathrm{test}$', 'xlabel': 'Log-likelihood'}
+)
+
+
+../_images/output_6_0.png +

pyCSEP plots the resulting \(95\%\) range of likelihoods returned by +the simulation with the black bar by default. The observed likelihood +score is shown by a green square where the forecast passes the test and +a red circle where the observed likelihood is outside the likelihood +distribution.

+
+
+

CL-test

+

Aim: The original likelihood test described above gives a result that combines the spatial, magnitude and number components of a forecast. The conditional likelihood or CL-test was developed to test the spatial and magnitude performance of a forecast without the influence of the number of events (Werner et al., 2011a, 2011b). By conditioning the test distribution on the observed number of events we eliminate the dependency on the forecasted number of events as described above.

+
+
Method
+
The CL-test is computed in the same way as the L-test, but with the +number of events normalised to the observed catalog \(N_{obs}\) +during the simulation stage. The quantile score is then calculated +similarly such that
+
+
+\[\gamma_{CL} = \frac{ |\{ \hat{CL}_x | \hat{CL}_x \le CL_{obs}\} |}{|\{ \hat{\boldsymbol{CL}} \}|}.\]
+

Implementation in pyCSEP

+
cond_likelihood_test_result = poisson.conditional_likelihood_test(
+    helmstetter,
+    catalog,
+    seed=seed,
+    num_simulations=nsim
+)
+ax = plots.plot_poisson_consistency_test(
+    cond_likelihood_test_result,
+    one_sided_lower=True,
+    plot_args = {'title': r'$CL-\mathrm{test}$', 'xlabel': 'conditional log-likelihood'}
+)
+
+
+../_images/output_9_0.png +

Again, the \(95\%\) confidence range of likelihoods is shown by the +black bar, and the symbol reflects the observed conditional-likelihood +score. In this case, the observed conditional-likelihood is shown with +the red circle, which falls outside the range of likelihoods simulated +from the forecast. To understand why the L- and CL-tests give different +results, consider the results of the N-test and S-test in the following +sections.

+
+
+

N-test

+

Aim: The number or N-test is the most conceptually simple test of a forecast: to test whether the number of observed events is consistent with that of the forecast.

+

Method: The original N-test was introduced by Schorlemmer et al. (2007) and modified by Zechar et al. (2010). The observed number of events is given by,

+
+\[N_{obs} = \sum_{m_i, s_j \in R} \omega(m_i, s_j).\]
+

Using the simulations described above, the expected number of events is +calculated by summing the simulated number of events over all grid cells

+
+\[\hat{N_x} = \sum_{m_i, s_j \in R} \hat{\omega}_x(m_i, s_j),\]
+

where \(\hat{\omega}_x(m_i, s_j)\) is the simulated number of events in catalog \(x\) in spatial cell \(s_j\) and magnitude cell \(m_i\), generating a set of simulated event counts \(\{ \hat{N} \}\). We can then calculate the probability of i) observing at least \(N_{obs}\) events and ii) observing at most \(N_{obs}\) events. These probabilities can be written as:

+
+\[\delta_1 = \frac{ |\{ \hat{N_x} | \hat{N_x} \ge N_{obs}\} |}{|\{ \hat{N} \}|}\]
+

and

+
+\[\delta_2 = \frac{ |\{ \hat{N_x} | \hat{N_x} \le N_{obs}\} |}{|\{ \hat{N} \}|}\]
+

If a forecast is Poisson, the number of events in the forecast follows a Poisson distribution with expectation \(N_{fore} = \sum_{m_i, s_j \in R} \lambda(m_i, s_j)\). The cumulative distribution is then a Poisson cumulative distribution:

+
+\[F(x|N_{fore}) = \exp(-N_{fore}) \sum^{x}_{i=0} \frac{(N_{fore})^i}{i!}\]
+

which can be used directly without the need for simulations. The N-test +quantile score is then

+
+\[\delta_1 = 1 - F((N_{obs}-1)|N_{fore}),\]
+

and

+
+\[\delta_2 = F(N_{obs}|N_{fore}).\]
+

The original N-test considered only \(\delta_2\) and its complement \(1-\delta_2\), which effectively tested the probability of at most \(N_{obs}\) events and more than \(N_{obs}\) events. Very small or very large values (< 0.025 or > 0.975 respectively) were considered inconsistent with the forecast in Schorlemmer et al. (2010). However, the approach above aims to test something subtly different, that is, at least \(N_{obs}\) events and at most \(N_{obs}\) events. Zechar et al. (2010a) recommend testing both \(\delta_1\) and \(\delta_2\) with an effective significance of half the required significance level, so for a required significance level of 0.05, a forecast is consistent if both \(\delta_1\) and \(\delta_2\) are greater than 0.025. A very small \(\delta_1\) suggests the rate is too low, while a very low \(\delta_2\) suggests a rate which is too high to be consistent with observations.
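Under the Poisson assumption both quantiles follow directly from the Poisson CDF; a sketch with hypothetical totals (pyCSEP's number_test does the equivalent internally):

from scipy.stats import poisson as poisson_dist

n_fore, n_obs = 28.5, 24                            # hypothetical forecast total and observed count
delta_1 = 1 - poisson_dist.cdf(n_obs - 1, n_fore)   # P(at least n_obs events)
delta_2 = poisson_dist.cdf(n_obs, n_fore)           # P(at most n_obs events)
consistent = (delta_1 > 0.025) and (delta_2 > 0.025)  # effective 0.05 significance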

+

Implementation in pyCSEP

+

pyCSEP uses the Zechar et al (2010) version of the N-test and the +cumulative Poisson approach to estimate the range of expected events +from the forecasts, so does not implement a simulation in this case. The +upper and lower bounds for the test are determined from the cumulative +Poisson distribution. number_test_result.quantile will return both +\(\delta_1\) and \(\delta_2\) values.

+
number_test_result = poisson.number_test(helmstetter, catalog)
+ax = plots.plot_poisson_consistency_test(
+    number_test_result,
+    plot_args={'xlabel':'Number of events'}
+)
+
+
+../_images/output_13_0.png +

In this case, the black bar shows the \(95\%\) interval for the number of events in the forecast. The actual observed number of events is shown by the green box, which just passes the N-test in this case: the forecast generally expects more events than are observed in practice, but the observed number falls just within the lower limits of what is expected, so the forecast (just!) passes the N-test.

+
+
+

M-test

+

Aim: Establish consistency (or lack thereof) of observed event +magnitudes with forecast magnitudes.

+

Method: The M-test is first described in Zechar et al. (2010) and aims +to isolate the magnitude component of a forecast. To do this, we sum +over the spatial bins and normalise so that the sum of events matches +the observations.

+
+\[\boldsymbol{\Omega}^m = \big\{\omega^{m}(m_i)\ \big|\ m_i \in \boldsymbol{M}\big\},\]
+

where

+
+\[\omega^m(m_i) = \sum_{s_j \in \boldsymbol{S}} \omega(m_i, s_j),\]
+

and

+
+\[\boldsymbol{\Lambda}^m = \big\{ \lambda^m(m_i)\ \big|\ m_i \in \boldsymbol{M} \big\},\]
+

where

+
+\[\lambda^m(m_i) = \frac{N_{obs}}{N_{fore}}\sum_{s_j \in \boldsymbol{S}} \lambda(m_i, s_j).\]
+

Then we compute the joint log-likelihood as we did for the L-test:

+
+\[M = L(\boldsymbol{\Omega}^m | \boldsymbol{\Lambda}^m)\]
+

We then wish to compare this with the distribution of simulated log-likelihoods, this time keeping the number of events fixed to \(N_{obs}\). Then, for each simulated catalog, \(\hat{M}_x = L(\hat{\boldsymbol{\Omega}}^m | \boldsymbol{\Lambda}^m)\).

+

Quantile score: The final test statistic is again the fraction of simulated log-likelihoods less than or equal to the observed log-likelihood:

+
+\[\kappa = \frac{ |\{ \hat{M_x} | \hat{M_x} \le M\} |}{|\{ \hat{M} \}|}\]
+

and the observed magnitudes are inconsistent with the forecast if +\(\kappa\) is less than the significance level.
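A rough sketch of the normalisation and observed statistic, assuming a hypothetical forecast array of shape (magnitude bins, spatial cells) and matching observed counts (not the pyCSEP implementation):

import numpy as np
from scipy.stats import poisson as poisson_dist

rates = np.array([[0.5, 0.2], [0.3, 0.1], [0.05, 0.02]])  # lambda(m_i, s_j)
counts = np.array([[1, 0], [0, 1], [0, 0]])               # omega(m_i, s_j)

omega_m = counts.sum(axis=1)                   # observed magnitude histogram
lam_m = rates.sum(axis=1)                      # forecast magnitude rates
lam_m = lam_m * counts.sum() / rates.sum()     # rescale so N_fore matches N_obs
M = poisson_dist.logpmf(omega_m, lam_m).sum()  # observed statistic M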

+

pyCSEP implementation

+
mag_test_result = poisson.magnitude_test(
+    helmstetter,
+    catalog,
+    seed=seed,
+    num_simulations=nsim
+)
+ax = plots.plot_poisson_consistency_test(
+    mag_test_result,
+    one_sided_lower=True,
+    plot_args={'xlabel':'Normalized likelihood'}
+)
+
+
+../_images/output_16_0.png +

In this example, the forecast passes the M-test, demonstrating that the +magnitude distribution in the forecast is consistent with observed +events. This is shown by the green square marking the joint +log-likelihood for the observed events.

+
+
+

S-test

+

Aim: The spatial or S-test aims to establish consistency (or lack +thereof) of observed event locations with a forecast. It is originally +defined in Zechar et al (2010).

+

Method: Similar to the M-test, but in this case we sum over all +magnitude bins.

+
+\[\boldsymbol{\Omega}^s = \big\{\omega^s(s_j)\ \big|\ s_j \in \boldsymbol{S}\big\},\]
+

where

+
+\[\omega^s(s_j) = \sum_{m_i \in \boldsymbol{M}} \omega(m_i, s_j),\]
+

and

+
+\[\boldsymbol{\Lambda}^s = \{ \lambda^s(s_j)| s_j \in \boldsymbol{S} \},\]
+

where

+
+\[\lambda^s(s_j) = \frac{N_{obs}}{N_{fore}}\sum_{m_i \in M} \lambda(m_i, s_j).\]
+

Then we compute the joint log-likelihood as we did for the L-test or the +M-test:

+
+\[S = L(\boldsymbol{\Omega}^s | \boldsymbol{\Lambda}^s)\]
+

We then wish to compare this with the distribution of simulated log-likelihoods, this time keeping the number of events fixed to \(N_{obs}\). Then, for each simulated catalog, \(\hat{S}_x = L(\hat{\boldsymbol{\Omega}}^s | \boldsymbol{\Lambda}^s)\).

+

The final test statistic is again the fraction of simulated log-likelihoods less than or equal to the observed log-likelihood:

+
+\[\zeta = \frac{ |\{ \hat{S_x} | \hat{S_x} \le S\} |}{|\{ \hat{S} \}|}\]
+

and again the distinction between a forecast passing or failing the test +depends on our significance level.

+

pyCSEP implementation

+

The S-test is again a one-sided test, so we specify this when plotting +the result.

+
spatial_test_result = poisson.spatial_test(
+    helmstetter,
+    catalog,
+    seed=seed,
+    num_simulations=nsim
+)
+ax = plots.plot_poisson_consistency_test(
+    spatial_test_result,
+    one_sided_lower=True,
+    plot_args = {'xlabel':'normalized spatial likelihood'}
+)
+
+
+../_images/output_19_0.png +

The Helmstetter model fails the S-test as the observed spatial +likelihood falls in the tail of the simulated likelihood distribution. +Again this is shown by a coloured symbol which highlights whether the +forecast model passes or fails the test.

+
+
+
+

Forecast comparison tests

+

The consistency tests above check whether a forecast is consistent with observations, but do not provide a straightforward way to compare two different forecasts. Several suggested approaches focus on the information gain of one forecast relative to another (Harte and Vere-Jones, 2005; Imoto and Hurukawa, 2006; Imoto and Rhoades, 2010; Rhoades et al., 2011). The T-test and W-test implementations for earthquake forecast comparison are first described in Rhoades et al. (2011).

+

The information gain per earthquake (IGPE) of model A compared to model B is defined by \(I_{N}(A, B) = R/N\), where \(R\) is the rate-corrected log-likelihood ratio of models A and B given by

+
+\[R = \sum_{k=1}^{N}\big{(}\log\lambda_A(i_k) - \log \lambda_B(i_k)\big{)} - \big{(}\hat{N}_A - \hat{N}_B\big{)}\]
+

If we set \(X_i=\log\lambda_A(k_i)\) and +\(Y_i=\log\lambda_B(k_i)\) then we can define the information gain +per earthquake (IGPE) as

+
+\[I_N(A, B) = \frac{1}{N}\sum^N_{i=1}\big{(}X_i - Y_i\big{)} - \frac{\hat{N}_A - \hat{N}_B}{N}\]
+

If \(I_N(A, B)\) differs significantly from 0, the model with the lower likelihood can be rejected in favour of the other.

+

t-test

+

If \(X_i - Y_i\) are independent and come from the same normal +population with mean \(\mu\) then we can use the classic paired +t-test to evaluate the null hypothesis that +\(\mu = (\hat{N}_A - \hat{N}_B)/N\) against the alternative +hypothesis \(\mu \ne (\hat{N}_A - \hat{N}_B)/N\). To implement this, +we let \(s\) denote the sample variance of \((X_i - Y_i)\) such +that

+
+\[s^2 = \frac{1}{N-1}\sum^N_{i=1}\big{(}X_i - Y_i\big{)}^2 - \frac{1}{N^2 - N}\bigg{(}\sum^N_{i=1}\big{(}X_i - Y_i\big{)}\bigg{)}^2\]
+

Under the null hypothesis +\(T = I_N(A, B)\big{/}\big{(}s/\sqrt{N}\big{)}\) has a +t-distribution with \(N-1\) degrees of freedom and the null +hypothesis can be rejected if \(|T|\) exceeds a critical value of +the \(t_{N-1}\) distribution. The confidence intervals for +\(\mu - (\hat{N}_A - \hat{N}_B)/N\) can then be constructed with the +form \(I_N(A,B) \pm ts/\sqrt{N}\) where t is the appropriate +quantile of the \(t_{N-1}\) distribution.
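A sketch of this computation from per-event log-rates; X, Y and the expected counts below are hypothetical:

import numpy as np
from scipy.stats import t as t_dist

X = np.log(np.array([0.8, 1.5, 0.3, 2.2, 0.9]))  # log rates of model A at observed events
Y = np.log(np.array([0.5, 1.1, 0.6, 1.8, 0.7]))  # log rates of model B at observed events
n_hat_a, n_hat_b = 5.2, 4.7                      # expected event counts of models A and B

d = X - Y
n = d.size
igpe = d.mean() - (n_hat_a - n_hat_b) / n        # information gain per earthquake
s = d.std(ddof=1)                                # sample standard deviation of differences
t_stat = igpe / (s / np.sqrt(n))
t_crit = t_dist.ppf(0.975, df=n - 1)             # two-sided 95% critical value
reject_null = abs(t_stat) > t_crit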

+

W-test

+

An alternative to the t-test is the Wilcoxon signed-rank test or W-test. This is a non-parametric alternative to the t-test which can be used if we do not feel the assumption of normally distributed differences \(X_i - Y_i\) is valid. This assumption might be particularly poor when we have small sample sizes. The W-test instead depends on the (weaker) assumption that \(X_i - Y_i\) is symmetric and tests whether the median of \(X_i - Y_i\) is equal to \((\hat{N}_A - \hat{N}_B)/N\). The W-test is less powerful than the t-test for normally distributed differences and cannot reject the null hypothesis (with \(95\%\) confidence) for very small sample sizes (\(N \leq 5\)).
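Continuing the hypothetical arrays above, the W-test can be sketched with scipy by shifting the differences so that the tested median is \((\hat{N}_A - \hat{N}_B)/N\):

from scipy.stats import wilcoxon

shift = (n_hat_a - n_hat_b) / n
w_stat, p_value = wilcoxon(d - shift)  # tests whether the shifted differences have zero median
# Note: with N <= 5 events the W-test cannot reject at 95% confidence anyway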

+

The t-test becomes more accurate as \(N \rightarrow \infty\) due to +the central limit theorem and therefore the t-test is considered +dependable for large \(N\). Where \(N\) is small, a model might +only be considered more informative if both the t- and W-test results +agree.

+

Implementation in pyCSEP

+

The t-test and W-tests are implemented in pyCSEP as below.

+
helmstetter_ms = csep.load_gridded_forecast(
+    datasets.helmstetter_mainshock_fname,
+    name = "Helmstetter Mainshock"
+)
+
+t_test = poisson.paired_t_test(helmstetter, helmstetter_ms, catalog)
+w_test = poisson.w_test(helmstetter, helmstetter_ms, catalog)
+comp_args = {'title': 'Paired T-test Result',
+             'ylabel': 'Information gain',
+             'xlabel': '',
+             'xticklabels_rotation': 0,
+             'figsize': (6,4)}
+
+ax = plots.plot_comparison_test([t_test], [w_test], plot_args=comp_args)
+
+
+../_images/output_22_0.png +

The first argument to the paired_t_test function is taken as model A and the second as our baseline model, or model B. When plotting the result, the horizontal dashed line indicates the performance of model B and the vertical bar shows the confidence bounds for the information gain \(I_N(A, B)\) associated with model A relative to model B. In this case, the model with aftershocks performs statistically worse than the benchmark model. We note that this comparison is used for demonstration purposes only.

+
+
+
+

Catalog-based forecast tests

+

Catalog-based forecast tests evaluate forecasts using simulated outputs in the form of synthetic earthquake catalogs, removing the need for the Poisson approximation and the simulation procedure used with grid-based forecasts. We know that observed seismicity is overdispersed with respect to a Poissonian model due to spatio-temporal clustering, and overdispersed models are more likely to be rejected by the original Poisson-based CSEP tests (Werner et al., 2011a). This modification of the testing framework therefore allows for a broader range of forecast models. The distribution of realizations is then compared with observations, much as in the grid-based case. These tests were developed by Savran et al. (2020), who applied them to test forecasts following the 2019 Ridgecrest earthquake in Southern California.

+

In the following text, we show how catalog-based forecasts are defined. +Again we begin by defining a region \(\boldsymbol{R}\) as a function +of some magnitude range \(\boldsymbol{M}\), spatial domain +\(\boldsymbol{S}\) and time period \(\boldsymbol{T}\)

+
+\[\boldsymbol{R} = \boldsymbol{M} \times \boldsymbol{S} \times \boldsymbol{T}.\]
+

An earthquake \(e\) can be described by a magnitude \(m_i\) at +some location \(s_j\) and time \(t_k\). A catalog is simply a +collection of earthquakes, thus the observed catalog can be written as

+
+\[\Omega = \big\{ e_n \ \big|\ n = 1...N_{obs};\ e_n \in \boldsymbol{R} \big\},\]
+

and a forecast is then specified as a collection of synthetic catalogs +containing events \(\hat{e}_{nj}\) in domain \(\boldsymbol{R}\), +as

+
+\[\boldsymbol{\Lambda} \equiv \Lambda_j = \{\hat{e}_{nj}\ |\ n = 1...N_j,\ j = 1...J;\ \hat{e}_{nj} \in \boldsymbol{R} \}.\]
+

That is, a forecast consists of \(J\) simulated catalogs, each containing \(N_j\) events described in time, space and magnitude, such that \(\hat{e}_{nj}\) describes the \(n\)th synthetic event in the \(j\)th synthetic catalog \(\Lambda_j\).

+

When using simulated forecasts in pyCSEP, we must first explicitly +specify the forecast region by specifying the spatial domain and +magnitude regions as below. In effect, these are filters applied to the +forecast and observations to retain only the events in +\(\boldsymbol{R}\). The examples in this section are catalog-based +forecast simulations for the Landers earthquake and aftershock sequence +generated using UCERF3-ETAS (Field et al, 2017).

+
# Imports assumed from earlier in the tutorial
import csep
from csep.core import regions, catalog_evaluations
from csep.utils import datasets, time_utils

# Define the start and end times of the forecasts
+start_time = time_utils.strptime_to_utc_datetime("1992-06-28 11:57:35.0")
+end_time = time_utils.strptime_to_utc_datetime("1992-07-28 11:57:35.0")
+
+# Magnitude bins properties
+min_mw = 4.95
+max_mw = 8.95
+dmw = 0.1
+
+# Create space and magnitude regions.
+magnitudes = regions.magnitude_bins(min_mw, max_mw, dmw)
+region = regions.california_relm_region()
+space_magnitude_region = regions.create_space_magnitude_region(
+    region,
+    magnitudes
+)
+
+# Load forecast
+forecast = csep.load_catalog_forecast(
+    datasets.ucerf3_ascii_format_landers_fname,
+    start_time = start_time,
+    end_time = end_time,
+    region = space_magnitude_region,
+    apply_filters = True
+)
+
+# Compute expected rates
+forecast.filters = [
+    f'origin_time >= {forecast.start_epoch}',
+    f'origin_time < {forecast.end_epoch}'
+]
+_ = forecast.get_expected_rates(verbose=False)
+
+# Obtain Comcat catalog and filter to region
+comcat_catalog = csep.query_comcat(
+    start_time,
+    end_time,
+    min_magnitude=forecast.min_magnitude
+)
+
+# Filter observed catalog using the same region as the forecast
+comcat_catalog = comcat_catalog.filter_spatial(forecast.region)
+
+
+
Fetched ComCat catalog in 0.31937098503112793 seconds.
+
+Downloaded catalog from ComCat with following parameters
+Start Date: 1992-06-28 12:00:45+00:00
+End Date: 1992-07-24 18:14:36.250000+00:00
+Min Latitude: 33.901 and Max Latitude: 36.705
+Min Longitude: -118.067 and Max Longitude: -116.285
+Min Magnitude: 4.95
+Found 19 events in the ComCat catalog.
+
+
+
+

Number Test

+

Aim: As above, the number test aims to evaluate if the number of +observed events is consistent with the forecast.

+

Method: The observed statistic in this case is given by +\(N_{obs} = |\Omega|\), which is simply the number of events in the +observed catalog. To build the test distribution from the forecast, we +simply count the number of events in each simulated catalog.

+
+\[N_{j} = |\Lambda_j|;\ j = 1...J\]
+

As in the gridded test above, we can then evaluate the probabilities of at least and at most \(N\) events, in this case using the empirical cumulative distribution function \(F_N\):

+
+\[\delta_1 = P(N_j \geq N_{obs}) = 1 - F_N(N_{obs}-1)\]
+

and

+
+\[\delta_2 = P(N_j \leq N_{obs}) = F_N(N_{obs})\]
+
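A sketch with hypothetical simulated counts, using the empirical distribution directly:

import numpy as np

n_sim = np.array([12, 19, 25, 17, 22, 31, 15, 20])  # event count of each simulated catalog
n_obs = 19                                          # events in the observed catalog
delta_1 = np.mean(n_sim >= n_obs)                   # empirical P(N_j >= N_obs)
delta_2 = np.mean(n_sim <= n_obs)                   # empirical P(N_j <= N_obs)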

Implementation in pyCSEP

+
number_test_result = catalog_evaluations.number_test(
+    forecast,
+    comcat_catalog,
+    verbose=False
+)
+ax = number_test_result.plot()
+
+
+../_images/output_27_0.png +

Plotting the number test result of a simulated catalog forecast displays +a histogram of the numbers of events \(\hat{N}_j\) in each simulated +catalog \(j\), which makes up the test distribution. The test +statistic is shown by the dashed line - in this case it is the number of +observed events in the catalog \(N_{obs}\).

+
+
+

Magnitude Test

+

Aim: The magnitude test aims to test the consistency of the observed +frequency-magnitude distribution with that in the simulated catalogs +that make up the forecast.

+

Method: The catalog-based magnitude test is implemented quite +differently to the grid-based equivalent. We first define the union +catalog \(\Lambda_U\) as the union of all simulated catalogs in the +forecast. Formally:

+
+\[\Lambda_U = \{ \lambda_1 \cup \lambda_2 \cup ... \cup \lambda_J \}\]
+
+
so that the union catalog contains all events across all simulated catalogs, for a total of \(N_U = \sum_{j=1}^{J} \big|\lambda_j\big|\) events.

We then compute the following histograms, discretised to the magnitude range and magnitude step size specified earlier for pyCSEP:

  1. the histogram of the union catalog magnitudes, \(\Lambda_U^{(m)}\)
  2. histograms of the magnitudes in each individual simulated catalog, \(\lambda_j^{(m)}\)
  3. the histogram of the observed catalog magnitudes, \(\Omega^{(m)}\)
+

The histograms are normalized so that the total number of events across all bins is equal to the observed number. The observed statistic is then calculated as the sum of squared logarithmic residuals between the normalised observed magnitude histogram and the union histogram. This statistic is related to the Cramér-von Mises statistic.

+
+\[d_{obs}= \sum_{k}\Bigg(\log\Bigg[\frac{N_{obs}}{N_U} \Lambda_U^{(m)}(k) + 1\Bigg]- \log\Big[\Omega^{(m)}(k) + 1\Big]\Bigg)^2\]
+

where \(\Lambda_U^{(m)}(k)\) and \(\Omega^{(m)}(k)\) +represent the count in the \(k\)th bin of the magnitude-frequency +distribution in the union and observed catalogs respectively. We add +unity to each bin to avoid \(\log(0)\). We then build the test +distribution from the catalogs in \(\boldsymbol{\Lambda}\):

+
+\[D_j = \sum_{k}\Bigg(\log\Bigg[\frac{N_{obs}}{N_U} \Lambda_U^{(m)}(k) + 1\Bigg]- \log\Bigg[\frac{N_{obs}}{N_j}\lambda_j^{(m)}(k) + 1\Bigg]\Bigg)^2;\ j= 1...J\]
+

where \(\lambda_j^{(m)}(k)\) represents the count in the +\(k\)th bin of the magnitude-frequency distribution of the +\(j\)th catalog.

+

The quantile score can then be calculated using the empirical CDF such +that

+
+\[\gamma_m = F_D(d_{obs})= P(D_j \leq d_{obs})\]
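A sketch of the observed statistic using hypothetical, already-discretised magnitude histograms:

import numpy as np

union_hist = np.array([400, 250, 120, 60, 20])  # union catalog magnitude histogram
obs_hist = np.array([9, 5, 3, 1, 1])            # observed catalog magnitude histogram
scale = obs_hist.sum() / union_hist.sum()       # N_obs / N_U
d_obs = np.sum((np.log(scale * union_hist + 1) - np.log(obs_hist + 1)) ** 2)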
+
+
+
Implementation in pyCSEP
+
+
Hopefully you now see why it was necessary to specify our magnitude range explicitly when we set up the catalog-type testing - we need to make sure the magnitudes are properly discretised for the model we want to test.
+
+
magnitude_test_result = catalog_evaluations.magnitude_test(
+    forecast,
+    comcat_catalog, verbose=False
+)
+ax = magnitude_test_result.plot(plot_args={'xy': (0.6,0.7)})
+
+
+../_images/output_30_0.png +

The histogram shows the resulting test distribution, with \(D_j\) calculated for each simulated catalog as described in the method above. The test statistic \(\omega = d_{obs}\) is shown with the dashed vertical line. The quantile score for this forecast is \(\gamma = 0.66\).

+
+
+

Pseudo-likelihood test

+

Aim: The pseudo-likelihood test aims to evaluate the likelihood of a forecast given an observed catalog.

+

Method: The pseudo-likelihood test has similar aims to the grid-based likelihood test above, but its implementation differs in a few significant ways. Firstly, it does not compute an actual likelihood (hence the name pseudo-likelihood), and instead of aggregating over cells as in the grid-based case, it aggregates over target-event likelihood scores (a likelihood score per target event rather than per grid cell). The most important difference, however, is that the pseudo-likelihood tests do not use a Poisson likelihood.

+

The pseudo-likelihood approach is based on the continuous point process +likelihood function. A continuous marked space-time point process can be +specified by a conditional intensity function +\(\lambda(\boldsymbol{e}|H_t)\), in which \(H_t\) describes the +history of the process in time. The log-likelihood function for any +point process in \(\boldsymbol{R}\) is given by

+
+\[L = \sum_{i=1}^{N} \log \lambda(e_i|H_t) - \int_{\boldsymbol{R}}\lambda(\boldsymbol{e}|H_t)d\boldsymbol{R}\]
+

Not all models will have an explicit likelihood function, so instead we +approximate the expectation of \(\lambda(e|H_t)\) using the forecast +catalogs. The approximate rate density is defined as the conditional +expectation given a discretised region \(R_d\) of the continuous +rate

+
+\[\hat{\lambda}(\boldsymbol{e}|H_t) = E\big[\lambda(\boldsymbol{e}|H_t)|R_d\big]\]
+

We still regard the model as continuous, but the rate density is +approximated within a single cell. This is analogous to the gridded +approach where we count the number of events in discrete cells. The +pseudo-loglikelihood is then

+
+\[\hat{L} = \sum_{i=1}^N \log \hat{\lambda}(e_i|H_t) - \int_R \hat{\lambda}(\boldsymbol{e}|H_t) dR\]
+

and we can write the approximate rate density as

+
+\[\hat{\lambda}_s(\boldsymbol{e}|H_t) = \sum_M \hat{\lambda}(\boldsymbol{e}|H_t),\]
+

where we take the sum over all magnitude bins \(M\). We can calculate the observed pseudo-likelihood as

+
+\[\hat{L}_{obs} = \sum_{i=1}^{N_{obs}} \log \hat{\lambda}_s(k_i) - \bar{N},\]
+

where \(\hat{\lambda}_s(k_i)\) is the approximate rate density in spatial cell \(k_i\), the cell in which the \(i\)th event occurs, and \(\bar{N}\) is the expected number of events in \(R_d\). Similarly, we calculate the test distribution as

+
+\[\hat{L}_{j} = \Bigg[\sum_{i=1}^{N_{j}} \log\hat{\lambda}_s(k_{ij}) - \bar{N}\Bigg]; j = 1....J,\]
+

where \(\hat{\lambda}_s(k_{ij})\) describes the approximate rate +density of the \(i\)th event in the \(j\)th catalog. We can +then calculate the quantile score as

+
+\[\gamma_L = F_L(\hat{L}_{obs})= P(\hat{L}_j \leq \hat{L}_{obs}).\]
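A minimal sketch of the observed pseudo-likelihood, with a hypothetical approximate rate density per spatial cell and the cell index of each observed event:

import numpy as np

lam_s = np.array([0.8, 2.1, 0.4, 1.2])  # approximate rate density per spatial cell
cells = np.array([1, 1, 3])             # spatial cell k_i of each observed event
n_bar = lam_s.sum()                     # expected number of events in the region
L_obs = np.log(lam_s[cells]).sum() - n_bar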
+

Implementation in pyCSEP

+
pseudolikelihood_test_result = catalog_evaluations.pseudolikelihood_test(
+    forecast,
+    comcat_catalog,
+    verbose=False
+)
+ax = pseudolikelihood_test_result.plot()
+
+
+../_images/output_33_0.png +

The histogram shows the test distribution of pseudo-likelihood as calculated above for each catalog \(j\). The dashed vertical line shows the observed statistic \(\hat{L}_{obs} = \omega\). It is clear that the observed statistic falls within the critical region of the test distribution, as reflected in the quantile score of \(\gamma_L = 0.02\).

+
+
+

Spatial test

+

Aim: The spatial test again aims to isolate the spatial component of the +forecast and test the consistency of spatial rates with observed events.

+

Method: We perform the spatial test in the catalog-based approach in a similar way to the grid-based spatial test: by normalising the approximate rate density. In this case, we use the normalisation \(\hat{\lambda}_s^* = \hat{\lambda}_s \big/ \sum_{R} \hat{\lambda}_s\). Then the observed spatial test statistic is calculated as

+
+\[S_{obs} = \Bigg[\sum_{i=1}^{N_{obs}} \log \hat{\lambda}_s^*(k_i)\Bigg]N_{obs}^{-1}\]
+

in which \(\hat{\lambda}_s^*(k_i)\) is the normalised approximate rate density in spatial cell \(k_i\), the cell corresponding to the \(i\)th event in the observed catalog \(\Omega\). Similarly, we define the test distribution using

+
+\[S_{j} = \bigg[\sum_{i=1}^{N_{j}} \log \hat{\lambda}_s^*(k_{ij})\bigg]N_{j}^{-1};\ j= 1...J\]
+

for each catalog j. Finally, the quantile score for the spatial test is +determined by once again comparing the observed and test distribution +statistics:

+
+\[\gamma_s = F_S(S_{obs}) = P(S_j \leq S_{obs})\]
+

Implementation in pyCSEP

+
spatial_test_result = catalog_evaluations.spatial_test(
+    forecast,
+    comcat_catalog,
+    verbose=False
+)
+ax = spatial_test_result.plot()
+
+
+../_images/output_36_0.png +

The histogram shows the test distribution of normalised pseudo-likelihood computed for each simulated catalog \(j\). The dashed vertical line shows the observed test statistic \(S_{obs} = \omega = -5.88\), which is clearly within the test distribution. The quantile score \(\gamma_s = 0.36\) is also printed on the figure by default.

+
+
+
+

References

+

Field, E. H., K. R. Milner, J. L. Hardebeck, M. T. Page, N. J. van der +Elst, T. H. Jordan, A. J. Michael, B. E. Shaw, and M. J. Werner (2017). +A spatiotemporal clustering model for the third Uniform California +Earthquake Rupture Forecast (UCERF3-ETAS): Toward an operational +earthquake forecast, Bull. Seismol. Soc. Am. 107, 1049–1081.

+

Harte, D., and D. Vere-Jones (2005), The entropy score and its uses in earthquake forecasting, Pure Appl. Geophys. 162, 6-7, 1229-1253, doi:10.1007/s00024-004-2667-2.

+

Helmstetter, A., Y. Y. Kagan, and D. D. Jackson (2006). Comparison of +short-term and time-independent earthquake forecast models for southern +California, Bulletin of the Seismological Society of America 96 90-106.

+

Imoto, M., and N. Hurukawa (2006), Assessing potential seismic activity +in Vrancea, Romania, using a stress-release model, Earth Planets Space +58 , 1511-1514.

+

Imoto, M., and D.A. Rhoades (2010), Seismicity models of moderate +earthquakes in Kanto, Japan utilizing multiple predictive parameters, +Pure Appl. Geophys. 167, 6-7, 831-843, DOI: 10.1007/s00024-010-0066-4.

+

Rhoades, D.A., D. Schorlemmer, M.C. Gerstenberger, A. Christophersen, J.D. Zechar, and M. Imoto (2011), Efficient testing of earthquake forecasting models, Acta Geophysica 59.

+

Savran, W., M. J. Werner, W. Marzocchi, D. Rhoades, D. D. Jackson, K. R. +Milner, E. H. Field, and A. J. Michael (2020). Pseudoprospective +evaluation of UCERF3-ETAS forecasts during the 2019 Ridgecrest Sequence, +Bulletin of the Seismological Society of America.

+

Schorlemmer, D., and M.C. Gerstenberger (2007), RELM testing center, +Seismol. Res. Lett. 78, 30–36.

+

Schorlemmer, D., M.C. Gerstenberger, S. Wiemer, D.D. Jackson, and D.A. +Rhoades (2007), Earthquake likelihood model testing, Seismol. Res. Lett. +78, 17–29.

+

Schorlemmer, D., A. Christophersen, A. Rovida, F. Mele, M. Stucchi, and W. Marzocchi (2010a), Setting up an earthquake forecast experiment in Italy, Annals of Geophysics, 53, no. 3.

+

Schorlemmer, D., J.D. Zechar, M.J. Werner, E.H. Field, D.D. Jackson, and +T.H. Jordan (2010b), First results of the Regional Earthquake Likelihood +Models experiment, Pure Appl. Geophys., 167, 8/9, +doi:10.1007/s00024-010-0081-5.

+

Taroni, M., W. Marzocchi, D. Schorlemmer, M.J. Werner, S. Wiemer, J.D. Zechar, L. Heiniger, and F. Euchner (2018), Prospective CSEP evaluation of 1-day, 3-month, and 5-yr earthquake forecasts for Italy, Seismological Research Letters 89(4), 1251-1261, doi:10.1785/0220180031.

+

Werner, M. J., A. Helmstetter, D. D. Jackson, and Y. Y. Kagan (2011a). +High-Resolution Long-Term and Short-Term Earthquake Forecasts for +California, Bulletin of the Seismological Society of America 101 +1630-1648

+

Werner, M.J., J.D. Zechar, W. Marzocchi, and S. Wiemer (2011b), Retrospective evaluation of the five-year and ten-year CSEP-Italy earthquake forecasts, Annals of Geophysics 53, no. 3, 11-30, doi:10.4401/ag-4840.

+

Zechar, J.D. (2011), Evaluating earthquake predictions and earthquake forecasts: a guide for students and new researchers, CORSSA (http://www.corssa.org/en/articles/theme_6/).

+

Zechar, J.D., M.C. Gerstenberger, and D.A. Rhoades (2010a), +Likelihood-based tests for evaluating space-rate-magnitude forecasts, +Bull. Seis. Soc. Am., 100(3), 1184—1195, doi:10.1785/0120090192.

+

Zechar, J.D., D. Schorlemmer, M. Liukis, J. Yu, F. Euchner, P.J. +Maechling, and T.H. Jordan (2010b), The Collaboratory for the Study of +Earthquake Predictability perspective on computational earthquake +science, Concurr. Comp-Pract. E., doi:10.1002/cpe.1519.

+
+
\ No newline at end of file
diff --git a/index.html b/index.html
new file mode 100644
index 00000000..1644c879
--- /dev/null
+++ b/index.html
@@ -0,0 +1,236 @@
+pyCSEP: Tools for Earthquake Forecast Developers — pyCSEP v0.6.3 documentation

pyCSEP: Tools for Earthquake Forecast Developers

+
+
+
+
+
+
+
+
+

PyCSEP tools help earthquake forecast model developers evaluate their forecasts and provide the machinery to implement +experiments within CSEP testing centers.

+
+

About

+

The Collaboratory for the Study of Earthquake Predictability (CSEP) supports an international effort to conduct earthquake +forecasting experiments. CSEP supports these activities by developing the cyberinfrastructure necessary to run earthquake +forecasting experiments including the statistical framework required to evaluate probabilistic earthquake forecasts.

+

PyCSEP is a Python library that provides tools for (1) evaluating probabilistic earthquake forecasts, (2) working with earthquake catalogs in this context, and (3) creating visualizations. Official experiments that run in CSEP testing centers will be implemented using the code provided by this package.

+
+
+

Project Goals

+
  1. Help modelers become familiar with formats, procedures, and evaluations used in CSEP Testing Centers.
  2. Provide vetted software for model developers to use in their research.
  3. Provide quantitative and visual tools to assess earthquake forecast quality.
  4. Promote open-science ideas by ensuring transparency and availability of scientific code and results.
  5. Curate benchmark models and data sets for modelers to conduct retrospective experiments of their forecasts.
+
+

Contributing

+

We highly encourage users of this package to get involved in the development process. Any contribution is helpful, even suggestions on how to improve the package or additions to the documentation (those are particularly welcome!). Check out the Contribution guidelines for a step-by-step guide on how to contribute to the project. If there are any questions, please contact us!

+
+
+

Contacting Us

+
  • For general discussion and bug reports please post issues on the pyCSEP GitHub.
  • This project adheres to a Code of Conduct. By participating you agree to follow its terms.
+
+
+

List of Contributors

+
  • Fabio Silva, Southern California Earthquake Center
  • Philip Maechling, Southern California Earthquake Center
  • William Savran, University of Nevada, Reno
  • Pablo Iturrieta, GFZ Potsdam
  • Khawaja Asim, GFZ Potsdam
  • Han Bao, University of California, Los Angeles
  • Kirsty Bayliss, University of Edinburgh
  • Jose Bayona, University of Bristol
  • Thomas Beutin, GFZ Potsdam
  • Marcus Hermann, University of Naples ‘Federico II’
  • Edric Pauk, Southern California Earthquake Center
  • Max Werner, University of Bristol
  • Danijel Schorlemmer, GFZ Potsdam
+
+
\ No newline at end of file
diff --git a/objects.inv b/objects.inv
new file mode 100644
index 00000000..ba22b261
Binary files /dev/null and b/objects.inv differ
diff --git a/py-modindex.html b/py-modindex.html
new file mode 100644
index 00000000..f0f65e30
--- /dev/null
+++ b/py-modindex.html
@@ -0,0 +1,242 @@
+Python Module Index — pyCSEP v0.6.3 documentation

Python Module Index

+ +
+ c +
 
+ c
+ csep +
    + csep.core.catalog_evaluations +
    + csep.core.catalogs +
    + csep.core.forecasts +
    + csep.core.poisson_evaluations +
    + csep.core.regions +
    + csep.utils.basic_types +
    + csep.utils.calc +
    + csep.utils.comcat +
    + csep.utils.plots +
    + csep.utils.stats +
    + csep.utils.time_utils +
+ + +
\ No newline at end of file
diff --git a/reference/api_reference.html b/reference/api_reference.html
new file mode 100644
index 00000000..aa8f94ab
--- /dev/null
+++ b/reference/api_reference.html
@@ -0,0 +1,917 @@
+API Reference — pyCSEP v0.6.3 documentation

API Reference

+

This page is a reference document for the PyCSEP API.

+
+

Loading catalogs and forecasts


load_stochastic_event_sets(filename[, type, ...])

General function to load stochastic event sets

load_catalog(filename[, type, format, ...])

General function to load single catalog

query_comcat(start_time, end_time[, ...])

Access Comcat catalog through web service

query_bsi(start_time, end_time[, ...])

Access BSI catalog through web service

load_gridded_forecast(fname[, loader])

Loads grid based forecast from hard-disk.

load_catalog_forecast(fname[, ...])

General function to handle loading catalog forecasts.

+
+
+

Catalogs

+

Catalog operations are defined using AbstractBaseCatalog class.


AbstractBaseCatalog([filename, data, ...])

Abstract catalog base class for PyCSEP catalogs.

CSEPCatalog(**kwargs)

Standard catalog class for PyCSEP catalog operations.

UCERF3Catalog(**kwargs)

Catalog written from UCERF3-ETAS binary format

+
+
+

Catalog operations

+

Input and output operations for catalogs:


CSEPCatalog.to_dict()

Serializes class to json dictionary.

CSEPCatalog.from_dict(adict, **kwargs)

Creates a class from the dictionary representation of the class state.

CSEPCatalog.to_dataframe([with_datetime])

Returns pandas Dataframe describing the catalog.

CSEPCatalog.from_dataframe(df, **kwargs)

Creates catalog from dataframe.

CSEPCatalog.write_json(filename)

Writes catalog to json file

CSEPCatalog.load_json(filename, **kwargs)

Loads catalog from JSON file

CSEPCatalog.load_catalog(filename[, loader])

Loads catalog stored in CSEP1 ascii format

CSEPCatalog.write_ascii(filename[, ...])

Write catalog in csep2 ascii format.

CSEPCatalog.load_ascii_catalogs(filename, ...)

Loads multiple catalogs in csep-ascii format.

CSEPCatalog.get_csep_format()

Returns CSEP format for a catalog

CSEPCatalog.plot([ax, show, extent, ...])

Plot catalog according to plate-carree projection

+

Accessing event information:


CSEPCatalog.event_count

Number of events in catalog

CSEPCatalog.get_magnitudes()

Returns magnitudes of all events in catalog

CSEPCatalog.get_longitudes()

Returns longitudes of all events in catalog

CSEPCatalog.get_latitudes()

Returns latitudes of all events in catalog

CSEPCatalog.get_depths()

Returns depths of all events in catalog

CSEPCatalog.get_epoch_times()

Returns the datetime of the event as the UTC epoch time (aka unix timestamp)

CSEPCatalog.get_datetimes()

Returns datetime object from timestamp representation in catalog

CSEPCatalog.get_cumulative_number_of_events()

Returns the cumulative number of events in the catalog.

+

Filtering and binning:


CSEPCatalog.filter([statements, in_place])

Filters the catalog based on statements.

CSEPCatalog.filter_spatial([region, ...])

Removes events outside of the region.

CSEPCatalog.apply_mct(m_main, event_epoch[, mc])

Applies time-dependent magnitude of completeness following a mainshock.

CSEPCatalog.spatial_counts()

Returns counts of events within discrete spatial region

CSEPCatalog.magnitude_counts([mag_bins, ...])

Computes the count of events within mag_bins

CSEPCatalog.spatial_magnitude_counts([...])

Return counts of events in space-magnitude region.

+

Other utilities:


CSEPCatalog.update_catalog_stats()

Compute summary statistics of events in catalog

CSEPCatalog.length_in_seconds()

Returns catalog length in seconds assuming that the catalog is sorted by time.

CSEPCatalog.get_bvalue([mag_bins, return_error])

Estimates the b-value of a catalog from Marzocchi and Sandri (2003).

+
+
+

Forecasts

+

PyCSEP provides classes to interact with catalog and grid based Forecasts


GriddedForecast([start_time, end_time])

Class to represent grid-based forecasts

CatalogForecast([filename, catalogs, name, ...])

Catalog based forecast defined as a family of stochastic event sets.

+

Gridded forecast methods:


GriddedForecast.data

Contains the spatio-magnitude forecast as 2d numpy.ndarray.

GriddedForecast.event_count

Returns a sum of the forecast data

GriddedForecast.sum()

Sums over all of the forecast data

GriddedForecast.magnitudes

GriddedForecast.min_magnitude

Returns the lowest magnitude bin edge

GriddedForecast.magnitude_counts()

Returns counts of events in magnitude bins

GriddedForecast.spatial_counts([cartesian])

Integrates over magnitudes to return the spatial version of the forecast.

GriddedForecast.get_latitudes()

Returns the latitude of the lower left node of the spatial grid

GriddedForecast.get_longitudes()

Returns the longitude of the lower left node of the spatial grid

GriddedForecast.get_magnitudes()

Returns the left edge of the magnitude bins.

GriddedForecast.get_index_of(lons, lats)

Returns the index of lons, lats in spatial region

GriddedForecast.get_magnitude_index(mags[, tol])

Returns the indices into the magnitude bins of selected magnitudes

GriddedForecast.load_ascii(ascii_fname[, ...])

Reads Forecast file from CSEP1 ascii format.

GriddedForecast.from_custom(func[, func_args])

Creates MarkedGriddedDataSet class from custom parsing function.

GriddedForecast.get_rates(lons, lats, mags)

Returns the rate associated with a longitude, latitude, and magnitude.

GriddedForecast.target_event_rates(...[, scale])

Generates data set of target event rates given a target data.

GriddedForecast.scale_to_test_date(test_datetime)

Scales forecast data by the fraction of the date.

GriddedForecast.plot([ax, show, log, ...])

Plot gridded forecast according to plate-carree projection

+

Catalog forecast methods:


CatalogForecast.magnitudes

Returns left bin-edges of magnitude bins

CatalogForecast.min_magnitude

Returns smallest magnitude bin edge of forecast

CatalogForecast.spatial_counts([cartesian])

Returns the expected spatial counts from forecast

CatalogForecast.magnitude_counts()

Returns expected magnitude counts from forecast

CatalogForecast.get_expected_rates([verbose])

Compute the expected rates in space-magnitude bins

CatalogForecast.get_dataframe()

Return a single dataframe with all of the events from all of the catalogs.

CatalogForecast.write_ascii(fname[, header, ...])

Writes data forecast to ASCII format

CatalogForecast.load_ascii(fname, **kwargs)

Loads ASCII format for data forecast.

+
+
+

Evaluations

+

PyCSEP provides implementations of evaluations for both catalog-based forecasts and grid-based forecasts.

+

Catalog-based forecast evaluations:


number_test(forecast, observed_catalog[, ...])

Performs the number test on a catalog-based forecast.

spatial_test(forecast, observed_catalog[, ...])

Performs spatial test for catalog-based forecasts.

magnitude_test(forecast, observed_catalog[, ...])

Performs magnitude test for catalog-based forecasts

pseudolikelihood_test(forecast, observed_catalog)

Performs the spatial pseudolikelihood test for catalog forecasts.

calibration_test(evaluation_results[, delta_1])

Perform the calibration test by computing a Kolmogorov-Smirnov test of the observed quantiles against a uniform distribution.

+

Grid-based forecast evaluations:


number_test(gridded_forecast, observed_catalog)

Computes "N-Test" on a gridded forecast.

magnitude_test(gridded_forecast, ...[, ...])

Performs the Magnitude Test on a Gridded Forecast using an observed catalog.

spatial_test(gridded_forecast, observed_catalog)

Performs the Spatial Test on the Forecast using the Observed Catalogs.

likelihood_test(gridded_forecast, ...[, ...])

Performs the likelihood test on Gridded Forecast using an Observed Catalog.

conditional_likelihood_test(...[, ...])

Performs the conditional likelihood test on Gridded Forecast using an Observed Catalog.

paired_t_test(forecast, benchmark_forecast, ...)

Computes the t-test for gridded earthquake forecasts.

w_test(gridded_forecast1, gridded_forecast2, ...)

Calculate the Single Sample Wilcoxon signed-rank test between two gridded forecasts.

+
+
+

Regions

+

PyCSEP includes commonly used CSEP testing regions and classes that facilitate working with gridded data sets. This +module is early in development and help is welcome here!

+

Region class(es):


CartesianGrid2D(polygons, dh[, name, mask])

Represents a 2D cartesian gridded region.

+

Testing regions:


california_relm_region([dh_scale, ...])

Returns class representing California testing region.

italy_csep_region([dh_scale, magnitudes, ...])

Returns class representing Italian testing region.

global_region([dh, name, magnitudes])

Creates a global region used for evaluating gridded forecasts on the global scale.

+

Region utilities:


magnitude_bins(start_magnitude, ...)

Returns array holding magnitude bin edges.

create_space_magnitude_region(region, magnitudes)

Simple wrapper to create space-magnitude region

parse_csep_template(xml_filename)

Reads CSEP XML template file and returns the lat/lon values for the forecast.

increase_grid_resolution(points, dh, factor)

Takes a set of origin points and returns a new set with higher grid resolution.

masked_region(region, polygon)

Build a new region based off the coordinates in the polygon.

generate_aftershock_region(mainshock_mw, ...)

Creates a spatial region around a given epicenter

california_relm_region([dh_scale, ...])

Returns class representing California testing region.

+
+
+

Plotting

+

General plotting:


plot_histogram(simulated, observation[, ...])

Plots histogram of single statistic for stochastic event sets and observations.

plot_ecdf(x, ecdf[, axes, xv, show, plot_args])

Plots empirical cumulative distribution function.

plot_basemap(basemap, extent[, ax, figsize, ...])

Wrapper function for multiple cartopy base plots, including access to standard raster webservices

plot_spatial_dataset(gridded, region[, ax, ...])

Plot spatial dataset such as data from a gridded forecast

add_labels_for_publication(figure[, style, ...])

Adds publication labels too the outside of a figure.

+

Plotting from catalogs:


plot_magnitude_versus_time(catalog[, ...])

Plots magnitude versus linear time for an earthquake data.

plot_catalog(catalog[, ax, show, extent, ...])

Plot catalog in a region

+

Plotting stochastic event sets and evaluations:


plot_cumulative_events_versus_time(...[, ...])

Same as below but performs the statistics on numpy arrays without using pandas data frames.

plot_magnitude_histogram(catalogs, comcat[, ...])

Generates a magnitude histogram from a catalog-based forecast

plot_number_test(evaluation_result[, axes, ...])

Takes result from evaluation and generates a specific histogram plot to show the results of the statistical evaluation for the n-test.

plot_magnitude_test(evaluation_result[, ...])

Takes result from evaluation and generates a specific histogram plot to show the results of the statistical evaluation for the M-test.

plot_distribution_test(evaluation_result[, ...])

Takes result from evaluation and generates a specific histogram plot to show the results of the statistical evaluation for the M-test.

plot_likelihood_test(evaluation_result[, ...])

Takes result from evaluation and generates a specific histogram plot to show the results of the statistical evaluation for the L-test.

plot_spatial_test(evaluation_result[, axes, ...])

Plot spatial test result from catalog based forecast

plot_calibration_test(evaluation_result[, ...])

+

Plotting gridded forecasts and evaluations:


plot_spatial_dataset(gridded, region[, ax, ...])

Plot spatial dataset such as data from a gridded forecast

plot_comparison_test(results_t[, results_w, ...])

Plots list of T-Test (and W-Test) Results

plot_poisson_consistency_test(eval_results)

Plots results from CSEP1 tests following the CSEP1 convention.

+
+
+

Time Utilities


epoch_time_to_utc_datetime(epoch_time_milli)

Accepts an epoch_time in milliseconds the UTC timezone and returns a python datetime object.

datetime_to_utc_epoch(dt)

Converts python datetime.datetime into epoch_time in milliseconds.

millis_to_days(millis)

Converts time in millis to days

days_to_millis(days)

Converts days to millis

strptime_to_utc_epoch(time_string[, format])

Returns epoch time from formatted time string

timedelta_from_years(time_in_years)

Returns python datetime.timedelta object based on the astronomical year in seconds.

strptime_to_utc_datetime(time_string[, format])

Converts time_string with format into time-zone aware datetime object in the UTC timezone.

utc_now_datetime()

Returns current datetime

utc_now_epoch()

Returns current epoch time

create_utc_datetime(datetime)

Creates TZAware UTC datetime object from unaware object.

decimal_year(test_date)

Convert given test date to the decimal year representation.

+
+
+

Comcat Access

+

We integrated the code developed by Mike Hearne and others at the USGS to reduce the dependencies of this package. We plan +to move this to an external and optional dependency in the future.


search([starttime, endtime, updatedafter, ...])

Search the ComCat database for events matching input criteria. This search function is a wrapper around the ComCat Web API described here: https://earthquake.usgs.gov/fdsnws/event/1/. Some of the search parameters described there are NOT implemented here, usually because they do not apply to GeoJSON search results, which we are getting here and parsing into Python data structures. This function returns a list of SummaryEvent objects, described elsewhere in this package.

Parameters:

• starttime (datetime) – limit to events on or after the specified start time.
• endtime (datetime) – limit to events on or before the specified end time.
• updatedafter (datetime) – limit to events updated after the specified time.
• minlatitude (float) – limit to events with a latitude larger than the specified minimum.
• maxlatitude (float) – limit to events with a latitude smaller than the specified maximum.
• minlongitude (float) – limit to events with a longitude larger than the specified minimum.
• maxlongitude (float) – limit to events with a longitude smaller than the specified maximum.
• latitude (float) – the latitude to be used for a radius search.
• longitude (float) – the longitude to be used for a radius search.
• maxradiuskm (float) – limit to events within the specified maximum number of kilometers from the geographic point defined by the latitude and longitude parameters.
• maxradius (float) – limit to events within the specified maximum number of degrees from the geographic point defined by the latitude and longitude parameters.
• catalog (str) – limit to events from a specified catalog.
• contributor (str) – limit to events contributed by a specified contributor.
• limit (int) – limit the results to the specified number of events. NOTE: this will be throttled by this Python API to the supported Web API limit of 20,000.
• maxdepth (float) – limit to events with depth less than the specified maximum.
• maxmagnitude (float) – limit to events with a magnitude smaller than the specified maximum.
• mindepth (float) – limit to events with depth more than the specified minimum.
• minmagnitude (float) – limit to events with a magnitude larger than the specified minimum.
• offset (int) – return results starting at the event count specified, starting at 1.
• orderby (str) – order the results. Allowed values: time (origin time, descending), time-asc (origin time, ascending), magnitude (descending), magnitude-asc (ascending).
• alertlevel (str) – limit to events with a specific PAGER alert level: green, yellow, orange, or red.
• eventtype (str) – limit to events of a specific type. NOTE: "earthquake" will filter out non-earthquake events.
• maxcdi (float) – maximum value for Maximum Community Determined Intensity reported by DYFI.
• maxgap (float) – limit to events with no more than this azimuthal gap.
• maxmmi (float) – maximum value for Maximum Modified Mercalli Intensity reported by ShakeMap.
• maxsig (float) – limit to events with no more than this significance.
• mincdi (float) – minimum value for Maximum Community Determined Intensity reported by DYFI.
• minfelt (int) – limit to events with at least this many DYFI responses.
• mingap (float) – limit to events with no less than this azimuthal gap.
• minsig (float) – limit to events with no less than this significance.
• producttype (str) – limit to events that have this type of product associated. Example producttypes: moment-tensor, focal-mechanism, shakemap, losspager, dyfi.
• productcode (str) – return the event that is associated with the productcode, even if the productcode is not the preferred code for the event. Example productcodes: nn00458749, at00ndf1fr.
• reviewstatus (str) – limit to events with a specific review status: automatic or reviewed.
• host (str) – replace the default ComCat host (earthquake.usgs.gov) with a custom host.
• enable_limit (bool) – enable the 20,000-event search limit. This turns off searching in segments, which is meant to safely avoid that limit. Use only when you are certain your search will be small.
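As a sketch of typical usage (assuming the wrapper is imported from csep.utils.comcat; the dates, coordinates, and magnitude threshold below are arbitrary):

from datetime import datetime
from csep.utils.comcat import search

# Hypothetical radius search: M >= 4 events within 100 km of Los Angeles during 2019
events = search(starttime=datetime(2019, 1, 1), endtime=datetime(2019, 12, 31),
                latitude=34.05, longitude=-118.25, maxradiuskm=100.0,
                minmagnitude=4.0)
for event in events:
    print(event.id, event.time, event.magnitude)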

get_event_by_id(eventid[, catalog, ...])

Search the ComCat database for an event matching the input event id.


Calculation Utilities


nearest_index(array, value)

Returns the index of the element in array that is nearest to the value specified.

find_nearest(array, value)

Returns the value from array that is nearest to the value specified.

func_inverse(x, y, val[, kind])

Returns the value of a function based on interpolation.

discretize(data, bin_edges[, right_continuous])

Returns an array of discretized values, mapping each element of data onto the edge of the bin that contains it.

bin1d_vec(p, bins[, tol, right_continuous])

Efficient implementation of binning routine on 1D Cartesian Grid.
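For example, a minimal sketch of the binning routine (the values are arbitrary):

import numpy
from csep.utils.calc import bin1d_vec

mws = numpy.array([4.1, 4.6, 5.3])
bin_edges = numpy.array([4.0, 4.5, 5.0, 5.5])
# bin1d_vec returns, for each value, the index of the bin that contains it
idx = bin1d_vec(mws, bin_edges)
print(idx)  # expected: [0 1 2]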


Statistics Utilities


sup_dist(cdf1, cdf2)

Given two cumulative distribution functions, compute the supremum of the set of absolute distances.

sup_dist_na(data1, data2)

Computes the KS statistic for two ECDFs that are not necessarily aligned on the same values.

cumulative_square_diff(cdf1, cdf2)

Given two cumulative distribution functions, compute the cumulative squared difference.

binned_ecdf(x, vals)

Returns P(X ≤ val) for each val in vals.

ecdf(x)

Compute the ecdf of vector x.

greater_equal_ecdf(x, val[, cdf])

Given val return P(x ≥ val).

less_equal_ecdf(x, val[, cdf])

Given val return P(x ≤ val).

min_or_none(x)

Given an array x, returns the min value, or None if x is empty.

max_or_none(x)

Given an array x, returns the max value, or None if x is empty.

get_quantiles(sim_counts, obs_count)

Computes delta1 and delta2 quantile scores from empirical distribution and observation

poisson_log_likelihood(observation, forecast)

Wrapper around scipy to compute the Poisson log-likelihood

poisson_joint_log_likelihood_ndarray(...)

Efficient calculation of joint log-likelihood of grid-based forecast.

poisson_inverse_cdf(random_matrix, lam)

Wrapper around scipy inverse poisson cdf function
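As a sketch of how the quantile scores are computed from an empirical distribution (the simulated counts here are synthetic):

import numpy
from csep.utils.stats import get_quantiles

# Empirical distribution from 1000 simulated event counts, observation of 8 events
sim_counts = numpy.random.poisson(10, size=1000)
delta_1, delta_2 = get_quantiles(sim_counts, 8)
print(delta_1, delta_2)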


Basic types


AdaptiveHistogram([dh, anchor])

Allows us to work with data that need to be discretized and aggregated even though the global min/max values are not known beforehand.

diff --git a/reference/developer_notes.html b/reference/developer_notes.html
new file mode 100644
index 00000000..af8eabbb

Developer Notes


Last updated: 25 January 2022


Creating a new release of pyCSEP


These are the steps required to create a new release of pyCSEP. This requires a combination of updates to the repository and GitHub. You will need to build the wheels for distribution on PyPI and upload them to GitHub to issue a release. The final step involves uploading the tar-ball of the release to PyPI. CI tools provided by conda-forge will automatically bump the version on conda-forge. Note: permissions are required to push new versions to PyPI.


1. Code changes

  1. Bump the version number in _version.py.
  2. Update codemeta.json.
  3. Update CHANGELOG.md. Include links to GitHub pull requests if possible.
  4. Update CREDITS.md if required.
  5. Update the version in conf.py.
  6. Issue a pull request that contains these changes.
  7. Merge the pull request when all changes are merged into master and the versions are correct.

2. Creating source distribution


Issue these commands from the top-level directory of the project:

python setup.py check

If that executes with no warnings or failures, build the source distribution using the command:

python setup.py sdist

This creates a folder called dist that contains a file called pycsep-X.Y.Z.tar.gz. This is the distribution that will be uploaded to PyPI, conda-forge, and GitHub.


Upload to PyPI using twine. This requires permissions to push to the PyPI account:

twine upload dist/pycsep-X.Y.Z.tar.gz

3. Create release on GitHub

  1. Create a new release on GitHub. This can be saved as a draft until it’s ready.
  2. Copy the new updates information from CHANGELOG.md.
  3. Upload the tar-ball created from setup.py.
  4. Publish the release.
diff --git a/reference/generated/csep.core.catalog_evaluations.calibration_test.html b/reference/generated/csep.core.catalog_evaluations.calibration_test.html
new file mode 100644
index 00000000..11e996a6

csep.core.catalog_evaluations.calibration_test

csep.core.catalog_evaluations.calibration_test(evaluation_results, delta_1=False)[source]

Perform the calibration test by computing a Kolmogorov–Smirnov test of the observed quantiles against a uniform distribution.

Args:

evaluation_results: iterable of evaluation result objects
delta_1 (bool): use delta_1 for quantiles; default false -> use the delta_2 quantile score for the calibration test
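A minimal usage sketch, assuming a list of EvaluationResult objects named results has already been collected from repeated tests:

from csep.core.catalog_evaluations import calibration_test

# results: iterable of EvaluationResult objects from repeated evaluations
calibration_result = calibration_test(results)
print(calibration_result)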
diff --git a/reference/generated/csep.core.catalog_evaluations.magnitude_test.html b/reference/generated/csep.core.catalog_evaluations.magnitude_test.html
new file mode 100644
index 00000000..1f8c7b9a

csep.core.catalog_evaluations.magnitude_test

csep.core.catalog_evaluations.magnitude_test(forecast, observed_catalog, verbose=True)[source]

Performs the magnitude test for catalog-based forecasts.

diff --git a/reference/generated/csep.core.catalog_evaluations.number_test.html b/reference/generated/csep.core.catalog_evaluations.number_test.html
new file mode 100644
index 00000000..d3ec1d60

csep.core.catalog_evaluations.number_test

csep.core.catalog_evaluations.number_test(forecast, observed_catalog, verbose=True)[source]

Performs the number test on a catalog-based forecast.


The number test builds an empirical distribution of the event counts from each catalog in the forecast. By default, this function does not perform any filtering on the catalogs in the forecast or observation. These should be handled outside of the function.

Returns:

evaluation result

Return type:

evaluation result (csep.models.EvaluationResult)
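A sketch of typical usage; the forecast file path is hypothetical, and the evaluation catalog is assumed to be downloaded from ComCat using the forecast metadata:

import csep
from csep.core import catalog_evaluations

forecast = csep.load_catalog_forecast('forecast_catalogs.csv')  # hypothetical path
catalog = csep.query_comcat(forecast.start_time, forecast.end_time,
                            min_magnitude=forecast.min_magnitude)
catalog = catalog.filter_spatial(forecast.region)
number_test_result = catalog_evaluations.number_test(forecast, catalog)
print(number_test_result.quantile)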
diff --git a/reference/generated/csep.core.catalog_evaluations.pseudolikelihood_test.html b/reference/generated/csep.core.catalog_evaluations.pseudolikelihood_test.html
new file mode 100644
index 00000000..b8bc0072

csep.core.catalog_evaluations.pseudolikelihood_test

csep.core.catalog_evaluations.pseudolikelihood_test(forecast, observed_catalog, verbose=True)[source]

Performs the spatial pseudolikelihood test for catalog forecasts.


Performs the spatial pseudolikelihood test as described by Savran et al., 2020. The test uses a pseudolikelihood statistic computed from the expected rates in spatial cells. A pseudolikelihood test based on space-magnitude bins is under development and does not currently exist.

diff --git a/reference/generated/csep.core.catalog_evaluations.spatial_test.html b/reference/generated/csep.core.catalog_evaluations.spatial_test.html
new file mode 100644
index 00000000..85e4fff9

csep.core.catalog_evaluations.spatial_test

csep.core.catalog_evaluations.spatial_test(forecast, observed_catalog, verbose=True)[source]

Performs spatial test for catalog-based forecasts.

Parameters:

  • forecast – CatalogForecast
  • observed_catalog – CSEPCatalog filtered to be consistent with the forecast

Returns:

CatalogSpatialTestResult
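A minimal sketch, reusing the forecast and filtered catalog from the number test example above:

from csep.core import catalog_evaluations

spatial_test_result = catalog_evaluations.spatial_test(forecast, catalog)
print(spatial_test_result)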
diff --git a/reference/generated/csep.core.catalogs.AbstractBaseCatalog.html b/reference/generated/csep.core.catalogs.AbstractBaseCatalog.html
new file mode 100644
index 00000000..2ded34ce

csep.core.catalogs.AbstractBaseCatalog

class csep.core.catalogs.AbstractBaseCatalog(filename=None, data=None, catalog_id=None, format=None, name=None, region=None, compute_stats=True, filters=None, metadata=None, date_accessed=None)[source]

Abstract catalog base class for PyCSEP catalogs. This class should not and cannot be used on its own. It just provides the interface for implementing custom catalog classes.

__init__(filename=None, data=None, catalog_id=None, format=None, name=None, region=None, compute_stats=True, filters=None, metadata=None, date_accessed=None)[source]

Standard catalog format for CSEP catalogs. Primary event data are stored in a structured numpy array. Additional metadata are available by the event_id in the catalog metadata information.
Parameters:

  • filename – location of catalog
  • catalog (numpy.ndarray or eventlist) – catalog data
  • catalog_id – catalog id number (used for stochastic event set forecasts)
  • format – identification used for serialization
  • name – human readable name of catalog
  • region – spatial and magnitude region
  • compute_stats – whether statistics should be computed for the catalog
  • filters (str or list) – filtering operations to apply to the catalog
  • metadata (dict) – additional information for events
  • date_accessed (str) – time string when catalog was accessed

Methods


__init__([filename, data, catalog_id, ...])

Standard catalog format for CSEP catalogs. Primary event data are stored in a structured numpy array.

apply_mct(m_main, event_epoch[, mc])

Applies time-dependent magnitude of completeness following a mainshock.

b_positive()

Implements the b-positive indicator from Nicholas van der Elst

filter([statements, in_place])

Filters the catalog based on statements.

filter_spatial([region, update_stats, in_place])

Removes events outside of the region.

from_dataframe(df, **kwargs)

Creates catalog from dataframe.

from_dict(adict, **kwargs)

Creates a class from the dictionary representation of the class state.

get_bbox()

Returns bounding box of all events in the catalog

get_bvalue([mag_bins, return_error])

Estimates the b-value of a catalog from Marzocchi and Sandri (2003).

get_csep_format()

This method should be overwritten for catalog formats that do not adhere to the CSEP ZMAP catalog format.

get_cumulative_number_of_events()

Returns the cumulative number of events in the catalog.

get_datetimes()

Returns datetime object from timestamp representation in catalog

get_depths()

Returns depths of all events in catalog

get_epoch_times()

Returns the datetime of the event as the UTC epoch time (aka unix timestamp)

get_event_ids()

get_latitudes()

Returns latitudes of all events in catalog

get_longitudes()

Returns longitudes of all events in catalog

get_mag_idx()

Return magnitude index from region magnitudes

get_magnitudes()

Returns magnitudes of all events in catalog

get_number_of_events()

Computes the number of events from a catalog by checking its length.

get_spatial_idx()

Return spatial index of region for a longitudes and latitudes in catalog.

length_in_seconds()

Returns catalog length in seconds assuming that the catalog is sorted by time.

load_catalog(filename[, loader])

load_json(filename, **kwargs)

Loads catalog from JSON file

magnitude_counts([mag_bins, tol, retbins])

Computes the count of events within mag_bins

plot([ax, show, extent, set_global, plot_args])

Plot catalog according to plate-carree projection

spatial_counts()

Returns counts of events within discrete spatial region

spatial_event_probability()

spatial_magnitude_counts([mag_bins, tol])

Return counts of events in space-magnitude region.

to_dataframe([with_datetime])

Returns pandas Dataframe describing the catalog.

to_dict()

Serializes class to json dictionary.

update_catalog_stats()

Compute summary statistics of events in catalog

write_ascii(filename[, write_header, ...])

Write catalog in csep2 ascii format.

write_json(filename)

Writes catalog to json file


Attributes


catalog

data

dtype

event_count

Number of events in catalog

log

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.apply_mct.html b/reference/generated/csep.core.catalogs.CSEPCatalog.apply_mct.html
new file mode 100644
index 00000000..f718f803

csep.core.catalogs.CSEPCatalog.apply_mct

CSEPCatalog.apply_mct(m_main, event_epoch, mc=2.5)

Applies time-dependent magnitude of completeness following a mainshock. Taken from Eq. (15) of Helmstetter et al., 2006.

Parameters:

  • m_main (float) – mainshock magnitude
  • event_epoch – epoch time of the event, in milliseconds
  • mc (float) – magnitude of completeness
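For example, a sketch assuming catalog is an existing CSEPCatalog and the mainshock values are hypothetical:

from csep.utils.time_utils import strptime_to_utc_epoch

mainshock_mw = 7.1
mainshock_epoch = strptime_to_utc_epoch('2019-07-06 03:19:53.0')
catalog = catalog.apply_mct(mainshock_mw, mainshock_epoch)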
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.event_count.html b/reference/generated/csep.core.catalogs.CSEPCatalog.event_count.html
new file mode 100644
index 00000000..2aec5cbe

csep.core.catalogs.CSEPCatalog.event_count

property CSEPCatalog.event_count

Number of events in catalog

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.filter.html b/reference/generated/csep.core.catalogs.CSEPCatalog.filter.html
new file mode 100644
index 00000000..5db72414

csep.core.catalogs.CSEPCatalog.filter

CSEPCatalog.filter(statements=None, in_place=True)

Filters the catalog based on statements. This function takes about 60% of the run-time for processing UCERF3-ETAS simulations, and likely all other simulations. Implementations should try to limit how often this function is called.

+
Parameters:

  • statements (str, iter) – logical statements to evaluate, e.g., [‘magnitude > 4.0’, ‘year >= 1995’]
  • in_place (bool) – if true, modify the catalog in place; if false, return a new instance of the catalog

Returns:

instance of AbstractBaseCatalog, so that this function can be chained.

Return type:

self
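For example, assuming catalog is an existing CSEPCatalog, filters can be combined in one call:

# Keep only M >= 4.0 events occurring in 1995 or later
catalog = catalog.filter(['magnitude >= 4.0', 'year >= 1995'])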
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.filter_spatial.html b/reference/generated/csep.core.catalogs.CSEPCatalog.filter_spatial.html
new file mode 100644
index 00000000..c73a4d3e

csep.core.catalogs.CSEPCatalog.filter_spatial

CSEPCatalog.filter_spatial(region=None, update_stats=False, in_place=True)

Removes events outside of the region. This takes some time and should be used sparingly, typically for isolating a region near the mainshock or inside a testing region. This should not be used to create gridded-style data sets.

+
Parameters:

  • region – csep.utils.spatial.Region
  • update_stats (bool) – if true, will update catalog statistics
  • in_place (bool) – if false, will create a new instance of the catalog preserving state

Returns:

self
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.from_dataframe.html b/reference/generated/csep.core.catalogs.CSEPCatalog.from_dataframe.html
new file mode 100644
index 00000000..3b04fd5d

csep.core.catalogs.CSEPCatalog.from_dataframe

classmethod CSEPCatalog.from_dataframe(df, **kwargs)

Creates catalog from dataframe. The dataframe must have columns that are equivalent to whatever fields the catalog expects in the catalog dtype.

+

For example:

+
+

cat = CSEPCatalog() +df = cat.get_dataframe() +new_cat = CSEPCatalog.from_dataframe(df)

+
+
+
Parameters:
+
+
+
Returns:
+

Catalog

+
+
+
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.from_dict.html b/reference/generated/csep.core.catalogs.CSEPCatalog.from_dict.html
new file mode 100644
index 00000000..21027050

csep.core.catalogs.CSEPCatalog.from_dict

classmethod CSEPCatalog.from_dict(adict, **kwargs)

Creates a class from the dictionary representation of the class state. The catalog is serialized into a list of tuples that contain the event information in the order defined by the dtype.


This needs to handle reading in region information at some point.

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.get_bvalue.html b/reference/generated/csep.core.catalogs.CSEPCatalog.get_bvalue.html
new file mode 100644
index 00000000..43d707b5

csep.core.catalogs.CSEPCatalog.get_bvalue

CSEPCatalog.get_bvalue(mag_bins=None, return_error=True)

Estimates the b-value of a catalog following Marzocchi and Sandri (2003). First, it tries to use the magnitude bins provided to the function. If those are not provided, it tries the magnitude bins associated with the region. If that fails, it uses the default magnitude bins provided in constants.

+
Parameters:

  • mag_bins (list or array_like) – monotonically increasing set of magnitude bin edges
  • return_error (bool) – if true, also returns the standard error

Returns:

bval (float): b-value
err (float): standard error
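A short sketch, assuming catalog is an existing CSEPCatalog:

# Returns (b-value, standard error) when return_error is true
bval, err = catalog.get_bvalue(return_error=True)
print(bval, err)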
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.get_csep_format.html b/reference/generated/csep.core.catalogs.CSEPCatalog.get_csep_format.html
new file mode 100644
index 00000000..ca4ef519

csep.core.catalogs.CSEPCatalog.get_csep_format

CSEPCatalog.get_csep_format()[source]

Returns CSEP format for a catalog

This catalog is already in CSEP format, so it will return self.

Returns:

self
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events.html b/reference/generated/csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events.html
new file mode 100644
index 00000000..5066e918

csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events

CSEPCatalog.get_cumulative_number_of_events()

Returns the cumulative number of events in the catalog.

Primarily used for plotting purposes.

Returns:

numpy array of the cumulative number of events; an empty array if the catalog is empty.

Return type:

numpy.array
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.get_datetimes.html b/reference/generated/csep.core.catalogs.CSEPCatalog.get_datetimes.html
new file mode 100644
index 00000000..fc284d17

csep.core.catalogs.CSEPCatalog.get_datetimes

CSEPCatalog.get_datetimes()

Returns datetime object from timestamp representation in catalog

Returns:

list of timestamps from events in catalog.
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.get_depths.html b/reference/generated/csep.core.catalogs.CSEPCatalog.get_depths.html
new file mode 100644
index 00000000..3302f45e

csep.core.catalogs.CSEPCatalog.get_depths

CSEPCatalog.get_depths()

Returns depths of all events in catalog

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.get_epoch_times.html b/reference/generated/csep.core.catalogs.CSEPCatalog.get_epoch_times.html
new file mode 100644
index 00000000..067d708b

csep.core.catalogs.CSEPCatalog.get_epoch_times

CSEPCatalog.get_epoch_times()

Returns the datetime of the event as the UTC epoch time (aka unix timestamp)

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.get_latitudes.html b/reference/generated/csep.core.catalogs.CSEPCatalog.get_latitudes.html
new file mode 100644
index 00000000..dbb79f59

csep.core.catalogs.CSEPCatalog.get_latitudes

CSEPCatalog.get_latitudes()

Returns latitudes of all events in catalog

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.get_longitudes.html b/reference/generated/csep.core.catalogs.CSEPCatalog.get_longitudes.html
new file mode 100644
index 00000000..363ff2bc

csep.core.catalogs.CSEPCatalog.get_longitudes

CSEPCatalog.get_longitudes()

Returns longitudes of all events in catalog

Returns:

longitudes

Return type:

numpy.array
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.get_magnitudes.html b/reference/generated/csep.core.catalogs.CSEPCatalog.get_magnitudes.html
new file mode 100644
index 00000000..282e0dfe

csep.core.catalogs.CSEPCatalog.get_magnitudes

CSEPCatalog.get_magnitudes()

Returns magnitudes of all events in catalog

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.html b/reference/generated/csep.core.catalogs.CSEPCatalog.html
new file mode 100644
index 00000000..4f0fafdb

csep.core.catalogs.CSEPCatalog

class csep.core.catalogs.CSEPCatalog(**kwargs)[source]

Standard catalog class for PyCSEP catalog operations.

__init__(**kwargs)[source]

Standard catalog format for CSEP catalogs. Primary event data are stored in a structured numpy array. Additional metadata are available by the event_id in the catalog metadata information.
Parameters:

  • filename – location of catalog
  • catalog (numpy.ndarray or eventlist) – catalog data
  • catalog_id – catalog id number (used for stochastic event set forecasts)
  • format – identification used for serialization
  • name – human readable name of catalog
  • region – spatial and magnitude region
  • compute_stats – whether statistics should be computed for the catalog
  • filters (str or list) – filtering operations to apply to the catalog
  • metadata (dict) – additional information for events
  • date_accessed (str) – time string when catalog was accessed
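A minimal construction sketch; the events are invented, and the tuple order is assumed to follow the default catalog dtype (id, origin time in epoch milliseconds, latitude, longitude, depth, magnitude):

from csep.core.catalogs import CSEPCatalog

eventlist = [
    ('ev1', 1262304000000, 34.00, -118.00, 10.0, 4.5),
    ('ev2', 1262390400000, 34.10, -118.10, 8.0, 5.0),
]
catalog = CSEPCatalog(data=eventlist, name='toy catalog')
print(catalog.event_count)  # 2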

Methods


__init__(**kwargs)

Standard catalog format for CSEP catalogs. Primary event data are stored in a structured numpy array.

apply_mct(m_main, event_epoch[, mc])

Applies time-dependent magnitude of completeness following a mainshock.

b_positive()

Implements the b-positive indicator from Nicholas van der Elst

filter([statements, in_place])

Filters the catalog based on statements.

filter_spatial([region, update_stats, in_place])

Removes events outside of the region.

from_dataframe(df, **kwargs)

Creates catalog from dataframe.

from_dict(adict, **kwargs)

Creates a class from the dictionary representation of the class state.

get_bbox()

Returns bounding box of all events in the catalog

get_bvalue([mag_bins, return_error])

Estimates the b-value of a catalog from Marzocchi and Sandri (2003).

get_csep_format()

Returns CSEP format for a catalog

get_cumulative_number_of_events()

Returns the cumulative number of events in the catalog.

get_datetimes()

Returns datetime object from timestamp representation in catalog

get_depths()

Returns depths of all events in catalog

get_epoch_times()

Returns the datetime of the event as the UTC epoch time (aka unix timestamp)

get_event_ids()

get_latitudes()

Returns latitudes of all events in catalog

get_longitudes()

Returns longitudes of all events in catalog

get_mag_idx()

Return magnitude index from region magnitudes

get_magnitudes()

Returns magnitudes of all events in catalog

get_number_of_events()

Computes the number of events from a catalog by checking its length.

get_spatial_idx()

Return spatial index of region for a longitudes and latitudes in catalog.

length_in_seconds()

Returns catalog length in seconds assuming that the catalog is sorted by time.

load_ascii_catalogs(filename, **kwargs)

Loads multiple catalogs in csep-ascii format.

load_catalog(filename[, loader])

Loads catalog stored in CSEP1 ascii format

load_json(filename, **kwargs)

Loads catalog from JSON file

magnitude_counts([mag_bins, tol, retbins])

Computes the count of events within mag_bins

plot([ax, show, extent, set_global, plot_args])

Plot catalog according to plate-carree projection

spatial_counts()

Returns counts of events within discrete spatial region

spatial_event_probability()

spatial_magnitude_counts([mag_bins, tol])

Return counts of events in space-magnitude region.

to_dataframe([with_datetime])

Returns pandas Dataframe describing the catalog.

to_dict()

Serializes class to json dictionary.

update_catalog_stats()

Compute summary statistics of events in catalog

write_ascii(filename[, write_header, ...])

Write catalog in csep2 ascii format.

write_json(filename)

Writes catalog to json file


Attributes


catalog

data

dtype

event_count

Number of events in catalog

log

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.length_in_seconds.html b/reference/generated/csep.core.catalogs.CSEPCatalog.length_in_seconds.html
new file mode 100644
index 00000000..5318174a

csep.core.catalogs.CSEPCatalog.length_in_seconds

CSEPCatalog.length_in_seconds()

Returns catalog length in seconds assuming that the catalog is sorted by time.

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.load_ascii_catalogs.html b/reference/generated/csep.core.catalogs.CSEPCatalog.load_ascii_catalogs.html
new file mode 100644
index 00000000..aaf829b8

csep.core.catalogs.CSEPCatalog.load_ascii_catalogs

classmethod CSEPCatalog.load_ascii_catalogs(filename, **kwargs)[source]

Loads multiple catalogs in csep-ascii format.

+

This function can load multiple catalogs stored in a single file. It is typically called to load a catalog-based forecast, but it can also load a collection of catalogs stored in the same file.

Parameters:

  • filename (str) – filepath or directory of catalog files
  • **kwargs (dict) – passed to class constructor

Returns:

yields CSEPCatalog class
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.load_catalog.html b/reference/generated/csep.core.catalogs.CSEPCatalog.load_catalog.html
new file mode 100644
index 00000000..ecec2266

csep.core.catalogs.CSEPCatalog.load_catalog

classmethod CSEPCatalog.load_catalog(filename, loader=<function csep_ascii>, **kwargs)[source]

Loads catalog stored in CSEP1 ascii format

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.load_json.html b/reference/generated/csep.core.catalogs.CSEPCatalog.load_json.html
new file mode 100644
index 00000000..2c050665

csep.core.catalogs.CSEPCatalog.load_json

classmethod CSEPCatalog.load_json(filename, **kwargs)

Loads catalog from JSON file

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.magnitude_counts.html b/reference/generated/csep.core.catalogs.CSEPCatalog.magnitude_counts.html
new file mode 100644
index 00000000..59b703a6

csep.core.catalogs.CSEPCatalog.magnitude_counts

CSEPCatalog.magnitude_counts(mag_bins=None, tol=1e-05, retbins=False)

Computes the count of events within mag_bins

Parameters:

  • mag_bins – magnitude bins; uses csep.utils.constants.CSEP_MW_BINS as the default
  • retbins (bool) – if true, also return the bins used

Returns:

counts of the events in each magnitude bin

Return type:

numpy.ndarray
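For example, a sketch assuming catalog is an existing CSEPCatalog:

# Counts per magnitude bin, using csep.utils.constants.CSEP_MW_BINS by default
counts = catalog.magnitude_counts()
print(counts)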
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.plot.html b/reference/generated/csep.core.catalogs.CSEPCatalog.plot.html
new file mode 100644
index 00000000..5b5efd23

csep.core.catalogs.CSEPCatalog.plot

CSEPCatalog.plot(ax=None, show=False, extent=None, set_global=False, plot_args=None)

Plot catalog according to plate-carree projection

Parameters:

  • ax (matplotlib.pyplot.axes) – previous axes onto which the catalog can be drawn
  • show (bool) – if true, show the figure; this call is blocking
  • extent (list) – force an extent [lon_min, lon_max, lat_min, lat_max]
  • plot_args (dict, optional) – dictionary containing plotting arguments for making figures

Returns:

matplotlib.Axes.axes

Return type:

axes
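A short plotting sketch, assuming catalog is an existing CSEPCatalog (the plot_args key shown is illustrative):

import matplotlib.pyplot as plt

ax = catalog.plot(show=False, plot_args={'title': 'Observed seismicity'})
plt.savefig('catalog_map.png')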
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.spatial_counts.html b/reference/generated/csep.core.catalogs.CSEPCatalog.spatial_counts.html
new file mode 100644
index 00000000..b7119422

csep.core.catalogs.CSEPCatalog.spatial_counts

CSEPCatalog.spatial_counts()

Returns counts of events within discrete spatial region

+

We figure out the index of the polygons and create a map that relates the spatial coordinate in the Cartesian grid with the polygon in the region.

Returns:

ndarray containing the event count in each spatial bin
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.spatial_magnitude_counts.html b/reference/generated/csep.core.catalogs.CSEPCatalog.spatial_magnitude_counts.html
new file mode 100644
index 00000000..d1c0e713

csep.core.catalogs.CSEPCatalog.spatial_magnitude_counts

CSEPCatalog.spatial_magnitude_counts(mag_bins=None, tol=1e-05)

Return counts of events in space-magnitude region.

+

We figure out the index of the polygons and create a map that relates the spatial coordinate in the Cartesian grid with the polygon in the region.

Parameters:

  • mag_bins (list, numpy.array) – magnitude bins (optional); if empty, tries to use the magnitude bins associated with the region
  • tol (float) – tolerance for comparisons within magnitude bins

Returns:

unnormalized event count in each bin, 1d ndarray where index corresponds to midpoints

Return type:

output
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.to_dataframe.html b/reference/generated/csep.core.catalogs.CSEPCatalog.to_dataframe.html
new file mode 100644
index 00000000..017aafe2

csep.core.catalogs.CSEPCatalog.to_dataframe

CSEPCatalog.to_dataframe(with_datetime=False)

Returns pandas Dataframe describing the catalog. Explicitly casts to pandas DataFrame.

Note

The dataframe will be in the format of the original catalog. If you require that the dataframe be in the CSEP ZMAP format, you must explicitly convert the catalog.

Returns:

This function must return a pandas DataFrame

Return type:

(pandas.DataFrame)

Raises:

ValueError – if self._catalog cannot be passed to the pandas.DataFrame constructor, this function must be overridden in the child class.
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.to_dict.html b/reference/generated/csep.core.catalogs.CSEPCatalog.to_dict.html
new file mode 100644
index 00000000..abbb8c50

csep.core.catalogs.CSEPCatalog.to_dict

CSEPCatalog.to_dict()

Serializes class to json dictionary.

Returns:

catalog as dict
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.update_catalog_stats.html b/reference/generated/csep.core.catalogs.CSEPCatalog.update_catalog_stats.html
new file mode 100644
index 00000000..37d9ac2b

csep.core.catalogs.CSEPCatalog.update_catalog_stats

CSEPCatalog.update_catalog_stats()

Compute summary statistics of events in catalog

diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.write_ascii.html b/reference/generated/csep.core.catalogs.CSEPCatalog.write_ascii.html
new file mode 100644
index 00000000..5686af40

csep.core.catalogs.CSEPCatalog.write_ascii

CSEPCatalog.write_ascii(filename, write_header=True, write_empty=True, append=False, id_col='id')

Write catalog in csep2 ascii format.

+

This format only uses the required variables from the catalog and should work by default. It can be overwritten if an event_id (or other column) should be used. By default, the routine will look for a column in the catalog array called ‘id’ and will populate the event_id column with these values. If the ‘id’ column is not found, then it will leave this column blank.

Short format description (comma separated values):

longitude, latitude, M, time_string format=”%Y-%m-%dT%H:%M:%S.%f”, depth, catalog_id, [event_id]

Parameters:

  • filename (str) – the file location to write the ascii catalog file
  • write_header (bool) – write the header string (default true)
  • write_empty (bool) – write the file even if there are no events in the catalog
  • append (bool) – if true, append to the file
  • id_col (str) – name of the event_id column (if included)

Returns:

NoneType
diff --git a/reference/generated/csep.core.catalogs.CSEPCatalog.write_json.html b/reference/generated/csep.core.catalogs.CSEPCatalog.write_json.html
new file mode 100644
index 00000000..8105b385

csep.core.catalogs.CSEPCatalog.write_json

CSEPCatalog.write_json(filename)

Writes catalog to json file

Parameters:

filename (str) – path to save file
diff --git a/reference/generated/csep.core.catalogs.UCERF3Catalog.html b/reference/generated/csep.core.catalogs.UCERF3Catalog.html
new file mode 100644
index 00000000..c5b68d89

csep.core.catalogs.UCERF3Catalog

class csep.core.catalogs.UCERF3Catalog(**kwargs)[source]

Catalog written from UCERF3-ETAS binary format

Variables:

  • header_dtype – numpy.dtype description of synthetic catalog header.
  • event_dtype – numpy.dtype description of ucerf3 catalog format
__init__(**kwargs)[source]

Standard catalog format for CSEP catalogs. Primary event data are stored in a structured numpy array. Additional metadata are available by the event_id in the catalog metadata information.
Parameters:

  • filename – location of catalog
  • catalog (numpy.ndarray or eventlist) – catalog data
  • catalog_id – catalog id number (used for stochastic event set forecasts)
  • format – identification used for serialization
  • name – human readable name of catalog
  • region – spatial and magnitude region
  • compute_stats – whether statistics should be computed for the catalog
  • filters (str or list) – filtering operations to apply to the catalog
  • metadata (dict) – additional information for events
  • date_accessed (str) – time string when catalog was accessed
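A loading sketch; the binary file name is hypothetical, and load_catalogs yields one catalog per stochastic event set:

from csep.core.catalogs import UCERF3Catalog

for catalog in UCERF3Catalog.load_catalogs('results_complete.bin'):
    print(catalog.catalog_id, catalog.event_count)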

Methods


__init__(**kwargs)

Standard catalog format for CSEP catalogs. Primary event data are stored in a structured numpy array.

apply_mct(m_main, event_epoch[, mc])

Applies time-dependent magnitude of completeness following a mainshock.

b_positive()

Implements the b-positive indicator from Nicholas van der Elst

filter([statements, in_place])

Filters the catalog based on statements.

filter_spatial([region, update_stats, in_place])

Removes events outside of the region.

from_dataframe(df, **kwargs)

Creates catalog from dataframe.

from_dict(adict, **kwargs)

Creates a class from the dictionary representation of the class state.

get_bbox()

Returns bounding box of all events in the catalog

get_bvalue([mag_bins, return_error])

Estimates the b-value of a catalog from Marzocchi and Sandri (2003).

get_csep_format()

This method should be overwritten for catalog formats that do not adhere to the CSEP ZMAP catalog format.

get_cumulative_number_of_events()

Returns the cumulative number of events in the catalog.

get_datetimes()

Returns datetime object from timestamp representation in catalog

get_depths()

Returns depths of all events in catalog

get_epoch_times()

Returns the datetime of the event as the UTC epoch time (aka unix timestamp)

get_event_ids()

get_latitudes()

Returns latitudes of all events in catalog

get_longitudes()

Returns longitudes of all events in catalog

get_mag_idx()

Return magnitude index from region magnitudes

get_magnitudes()

Returns magnitudes of all events in catalog

get_number_of_events()

Computes the number of events from a catalog by checking its length.

get_spatial_idx()

Return spatial index of region for a longitudes and latitudes in catalog.

length_in_seconds()

Returns catalog length in seconds assuming that the catalog is sorted by time.

load_catalog(filename[, loader])

load_catalogs(filename, **kwargs)

Loads catalogs based on the merged binary file format of UCERF3.

load_json(filename, **kwargs)

Loads catalog from JSON file

magnitude_counts([mag_bins, tol, retbins])

Computes the count of events within mag_bins

plot([ax, show, extent, set_global, plot_args])

Plot catalog according to plate-carree projection

spatial_counts()

Returns counts of events within discrete spatial region

spatial_event_probability()

spatial_magnitude_counts([mag_bins, tol])

Return counts of events in space-magnitude region.

to_dataframe([with_datetime])

Returns pandas Dataframe describing the catalog.

to_dict()

Serializes class to json dictionary.

update_catalog_stats()

Compute summary statistics of events in catalog

write_ascii(filename[, write_header, ...])

Write catalog in csep2 ascii format.

write_json(filename)

Writes catalog to json file


Attributes


catalog

data

dtype

event_count

Number of events in catalog

header_dtype

log

diff --git a/reference/generated/csep.core.forecasts.CatalogForecast.get_dataframe.html b/reference/generated/csep.core.forecasts.CatalogForecast.get_dataframe.html
new file mode 100644
index 00000000..102d08d9

csep.core.forecasts.CatalogForecast.get_dataframe

CatalogForecast.get_dataframe()[source]

Return a single dataframe with all of the events from all of the catalogs.

diff --git a/reference/generated/csep.core.forecasts.CatalogForecast.get_expected_rates.html b/reference/generated/csep.core.forecasts.CatalogForecast.get_expected_rates.html
new file mode 100644
index 00000000..6d67c94b

csep.core.forecasts.CatalogForecast.get_expected_rates

CatalogForecast.get_expected_rates(verbose=False)[source]

Compute the expected rates in space-magnitude bins

Parameters:

  • catalogs_iterable (iterable) – collection of catalogs; should be filtered outside the function
  • data (csep.core.AbstractBaseCatalog) – observation data

Returns:

csep.core.forecasts.GriddedForecast, plus a list of tuple(lon, lat, magnitude) events that were skipped in binning; if the data were filtered in space and magnitude beforehand, this list should be empty.
diff --git a/reference/generated/csep.core.forecasts.CatalogForecast.html b/reference/generated/csep.core.forecasts.CatalogForecast.html
new file mode 100644
index 00000000..46e3847a

csep.core.forecasts.CatalogForecast

class csep.core.forecasts.CatalogForecast(filename=None, catalogs=None, name=None, filter_spatial=False, filters=None, apply_mct=False, region=None, expected_rates=None, start_time=None, end_time=None, n_cat=None, event=None, loader=None, catalog_type='ascii', catalog_format='native', store=True, apply_filters=False)[source]

Catalog based forecast defined as a family of stochastic event sets.

__init__(filename=None, catalogs=None, name=None, filter_spatial=False, filters=None, apply_mct=False, region=None, expected_rates=None, start_time=None, end_time=None, n_cat=None, event=None, loader=None, catalog_type='ascii', catalog_format='native', store=True, apply_filters=False)[source]

The region information can be provided alongside the data if they are stored in one of the supported file formats. It is assumed that the region for each catalog is identical. If the regions are not provided with the data files, they must be provided explicitly. The California testing region can be loaded using csep.utils.spatial.california_relm_region().

There are a few different ways this class can be constructed.

The region is not required to load a forecast or to perform basic operations on a forecast, such as counting events. Any binning of events in space or magnitude will require a spatial region or magnitude bin definitions, respectively.

Parameters:

  • filename (str) – Path to the file or directory containing the forecast.
  • catalogs – iterable of csep.core.catalogs.AbstractBaseCatalog
  • filter_spatial (bool) – if true, will filter to the area defined in the spatial region
  • apply_mct (bool) – this should be provided if a time-dependent magnitude completeness model should be applied to the forecast
  • filters (iterable) – list of data filter strings; these override the filter_magnitude and filter_time arguments
  • region – csep.core.spatial.CartesianGrid2D including magnitude bins
  • start_time (datetime.datetime) – start time of the forecast
  • end_time (datetime.datetime) – end time of the forecast
  • name (str) – name of the forecast; will be used for defaults in plotting and other places
  • n_cat (int) – number of catalogs in the forecast
  • event (csep.models.Event) – if the forecast is associated with a particular event
  • store (bool) – if true, will store catalogs on the object in memory; this should only be made false if working with very large forecast files that cannot be stored in memory
  • apply_filters (bool) – if true, filters will be applied automatically to the catalogs as the forecast is iterated through
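A minimal sketch of constructing a forecast from file via the top-level loader (the file path and filter string are hypothetical):

import csep

forecast = csep.load_catalog_forecast('forecast_catalogs.csv',
                                      filters=['magnitude >= 4.95'],
                                      apply_filters=True)
for catalog in forecast:  # filters are applied lazily during iteration
    print(catalog.event_count)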

Methods


__init__([filename, catalogs, name, ...])

The region information can be provided alongside the data, if they are stored in one of the supported file formats.

get_dataframe()

Return a single dataframe with all of the events from all of the catalogs.

get_event_counts([verbose])

Returns a numpy array containing the number of event counts for each catalog.

get_expected_rates([verbose])

Compute the expected rates in space-magnitude bins

load_ascii(fname, **kwargs)

Loads ASCII format for data forecast.

magnitude_counts()

Returns expected magnitude counts from forecast

plot([plot_args, verbose])

spatial_counts([cartesian])

Returns the expected spatial counts from forecast

write_ascii(fname[, header, loader])

Writes data forecast to ASCII format


Attributes


end_epoch

log

magnitudes

Returns left bin-edges of magnitude bins

min_magnitude

Returns smallest magnitude bin edge of forecast

start_epoch
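A minimal construction sketch (the event-set directory, filter string, and magnitude range below are illustrative placeholders, not part of the API):

    import csep
    from csep.core.regions import california_relm_region, magnitude_bins

    # Any CartesianGrid2D with magnitude bins works as the space-magnitude region.
    region = california_relm_region(magnitudes=magnitude_bins(4.95, 8.95, 0.1))

    # 'eventset_dir/' is a placeholder for a directory of stochastic event sets.
    forecast = csep.load_catalog_forecast('eventset_dir/',
                                          region=region,
                                          apply_filters=True,
                                          filters=['magnitude >= 4.95'])

    # Filters are applied while the catalogs are iterated; counting events
    # does not require a region.
    print(forecast.get_event_counts())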

diff --git a/reference/generated/csep.core.forecasts.CatalogForecast.load_ascii.html b/reference/generated/csep.core.forecasts.CatalogForecast.load_ascii.html new file mode 100644 index 00000000..da70beb8

csep.core.forecasts.CatalogForecast.load_ascii

+classmethod CatalogForecast.load_ascii(fname, **kwargs)[source]

Loads ASCII format for a catalog forecast.

Parameters:

fname (str) – path to file or directory containing forecast files

Returns:

csep.core.forecasts.CatalogForecast

diff --git a/reference/generated/csep.core.forecasts.CatalogForecast.magnitude_counts.html b/reference/generated/csep.core.forecasts.CatalogForecast.magnitude_counts.html new file mode 100644 index 00000000..c4041c43

csep.core.forecasts.CatalogForecast.magnitude_counts

+
+
+CatalogForecast.magnitude_counts()[source]
+

Returns expected magnitude counts from forecast

diff --git a/reference/generated/csep.core.forecasts.CatalogForecast.magnitudes.html b/reference/generated/csep.core.forecasts.CatalogForecast.magnitudes.html new file mode 100644 index 00000000..dcfe43d7

csep.core.forecasts.CatalogForecast.magnitudes

+
+
+property CatalogForecast.magnitudes
+

Returns left bin-edges of magnitude bins

diff --git a/reference/generated/csep.core.forecasts.CatalogForecast.min_magnitude.html b/reference/generated/csep.core.forecasts.CatalogForecast.min_magnitude.html new file mode 100644 index 00000000..4b0dff16

csep.core.forecasts.CatalogForecast.min_magnitude

+
+
+property CatalogForecast.min_magnitude
+

Returns smallest magnitude bin edge of forecast

diff --git a/reference/generated/csep.core.forecasts.CatalogForecast.spatial_counts.html b/reference/generated/csep.core.forecasts.CatalogForecast.spatial_counts.html new file mode 100644 index 00000000..06817561

csep.core.forecasts.CatalogForecast.spatial_counts

+
+
+CatalogForecast.spatial_counts(cartesian=False)[source]
+

Returns the expected spatial counts from forecast

diff --git a/reference/generated/csep.core.forecasts.CatalogForecast.write_ascii.html b/reference/generated/csep.core.forecasts.CatalogForecast.write_ascii.html new file mode 100644 index 00000000..73661284

csep.core.forecasts.CatalogForecast.write_ascii

+
+
+CatalogForecast.write_ascii(fname, header=True, loader=None)[source]
+

Writes catalog forecast to ASCII format

+
+
Parameters:

  • fname (str) – Output filename of forecast

  • header (bool) – If true, write header information; else, do not write header.
+
Returns:
+

NoneType

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.data.html b/reference/generated/csep.core.forecasts.GriddedForecast.data.html new file mode 100644 index 00000000..0b565926

csep.core.forecasts.GriddedForecast.data

+
+
+property GriddedForecast.data
+

Contains the spatio-magnitude forecast as 2d numpy.ndarray.

+

The dimensions of this array are (num_spatial_bins, num_magnitude_bins). The spatial bins can be indexed through a look-up table as part of the region class. The magnitude bins used are stored directly as an attribute of the class.
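For example (assuming one of the example forecast files shipped with pyCSEP):

    import csep
    from csep.utils import datasets

    forecast = csep.load_gridded_forecast(datasets.helmstetter_mainshock_fname)
    print(forecast.data.shape)  # (num_spatial_bins, num_magnitude_bins)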

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.event_count.html b/reference/generated/csep.core.forecasts.GriddedForecast.event_count.html new file mode 100644 index 00000000..80d6ee39

csep.core.forecasts.GriddedForecast.event_count

+
+
+property GriddedForecast.event_count
+

Returns a sum of the forecast data

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.from_custom.html b/reference/generated/csep.core.forecasts.GriddedForecast.from_custom.html new file mode 100644 index 00000000..6a37c82f

csep.core.forecasts.GriddedForecast.from_custom

+
+
+classmethod GriddedForecast.from_custom(func, func_args=(), **kwargs)[source]
+

Creates MarkedGriddedDataSet class from custom parsing function.

+
+
Parameters:

  • func (callable) – function will be called as func(*func_args)

  • func_args (tuple) – arguments to pass to func

  • **kwargs – keyword arguments to pass to the GriddedForecast class constructor
+
Returns:
+

forecast object

+
+
Return type:
+

csep.core.forecasts.GriddedForecast

+
+
+
+

Note

+

The loader function func needs to return a tuple that contains (data, region, magnitudes). data is a numpy.ndarray, region is a CartesianGrid2D, and magnitudes are a numpy.ndarray consisting of the magnitude bin edges. See the function load_ascii for an example.
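A sketch of such a loader; the file layout parsed here is hypothetical, and only the returned (data, region, magnitudes) tuple matters:

    import numpy
    from csep.core.forecasts import GriddedForecast
    from csep.core.regions import CartesianGrid2D, magnitude_bins

    def my_loader(fname):
        # Hypothetical layout: lon and lat cell origins in the first two
        # columns, then one rate column per magnitude bin.
        raw = numpy.loadtxt(fname)
        region = CartesianGrid2D.from_origins(raw[:, :2], dh=0.1)
        magnitudes = magnitude_bins(4.95, 8.95, 0.1)
        data = raw[:, 2:]
        return data, region, magnitudes

    forecast = GriddedForecast.from_custom(my_loader, func_args=('my_rates.dat',))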

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.get_index_of.html b/reference/generated/csep.core.forecasts.GriddedForecast.get_index_of.html new file mode 100644 index 00000000..1b9b4c18

csep.core.forecasts.GriddedForecast.get_index_of

+
+
+GriddedForecast.get_index_of(lons, lats)
+

Returns the index of lons, lats in spatial region

+

See csep.utils.spatial.CartesianGrid2D for more details.

+
+
Parameters:

  • lons – ndarray-like

  • lats – ndarray-like
+
Returns:
+

ndarray-like

+
+
Return type:
+

idx

+
+
Raises:
+

ValueError – if lons or lats are outside of the region.

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.get_latitudes.html b/reference/generated/csep.core.forecasts.GriddedForecast.get_latitudes.html new file mode 100644 index 00000000..9f2c06e4

csep.core.forecasts.GriddedForecast.get_latitudes

+
+
+GriddedForecast.get_latitudes()
+

Returns the latitudes of the lower-left nodes of the spatial grid

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.get_longitudes.html b/reference/generated/csep.core.forecasts.GriddedForecast.get_longitudes.html new file mode 100644 index 00000000..6f1f86b5

csep.core.forecasts.GriddedForecast.get_longitudes

+
+
+GriddedForecast.get_longitudes()
+

Returns the longitudes of the lower-left nodes of the spatial grid

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.get_magnitude_index.html b/reference/generated/csep.core.forecasts.GriddedForecast.get_magnitude_index.html new file mode 100644 index 00000000..d6691f1d

csep.core.forecasts.GriddedForecast.get_magnitude_index

+
+
+GriddedForecast.get_magnitude_index(mags, tol=1e-05)
+

Returns the indices into the magnitude bins of selected magnitudes

+

Note: the right-most bin is treated as extending to infinity.

+
+
Parameters:
+

mags (array-like) – list of magnitudes

+
+
Returns:
+

indices corresponding to mags

+
+
Return type:
+

idm (array-like)

+
+
Raises:
+

ValueError

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.get_magnitudes.html b/reference/generated/csep.core.forecasts.GriddedForecast.get_magnitudes.html new file mode 100644 index 00000000..5fa6f68e

csep.core.forecasts.GriddedForecast.get_magnitudes

+
+
+GriddedForecast.get_magnitudes()
+

Returns the left edge of the magnitude bins.

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.get_rates.html b/reference/generated/csep.core.forecasts.GriddedForecast.get_rates.html new file mode 100644 index 00000000..22a2b5ba

csep.core.forecasts.GriddedForecast.get_rates

+
+
+GriddedForecast.get_rates(lons, lats, mags, data=None, ret_inds=False)[source]
+

Returns the rate associated with a longitude, latitude, and magnitude.

+
+
Parameters:

  • lons – longitudes of interest

  • lats – latitudes of interest

  • mags – magnitudes of interest

  • data – optional; if not None, use this data value provided with the forecast
+
Returns:
+

rates (float or ndarray)

+
+
Raises:
+

RuntimeError – lons, lats, and mags must be the same length
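A short usage sketch (the coordinates and magnitude are illustrative and must fall inside the forecast region and magnitude bins):

    # Expected rate in the space-magnitude bin containing (lon, lat, mag).
    rates = forecast.get_rates([-116.5], [34.5], [5.05])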

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.html b/reference/generated/csep.core.forecasts.GriddedForecast.html new file mode 100644 index 00000000..f112d09c

csep.core.forecasts.GriddedForecast

+
+
+class csep.core.forecasts.GriddedForecast(start_time=None, end_time=None, *args, **kwargs)[source]
+

Class to represent grid-based forecasts

+
+
+__init__(start_time=None, end_time=None, *args, **kwargs)[source]
+

Constructor for GriddedForecast class

+
+
Parameters:

  • start_time (datetime.datetime) – start time of the forecast

  • end_time (datetime.datetime) – end time of the forecast

Methods


__init__([start_time, end_time])

Constructor for GriddedForecast class

from_custom(func[, func_args])

Creates MarkedGriddedDataSet class from custom parsing function.

from_dict(adict)

get_index_of(lons, lats)

Returns the index of lons, lats in spatial region

get_latitudes()

Returns the latitudes of the lower-left nodes of the spatial grid

get_longitudes()

Returns the longitudes of the lower-left nodes of the spatial grid

get_magnitude_index(mags[, tol])

Returns the indices into the magnitude bins of selected magnitudes

get_magnitudes()

Returns the left edge of the magnitude bins.

get_rates(lons, lats, mags[, data, ret_inds])

Returns the rate associated with a longitude, latitude, and magnitude.

get_valid_midpoints()

Returns the midpoints of the valid testing region

load_ascii(ascii_fname[, start_date, ...])

Reads Forecast file from CSEP1 ascii format.

magnitude_counts()

Returns counts of events in magnitude bins

plot([ax, show, log, extent, set_global, ...])

Plot gridded forecast according to plate-carree projection

scale(val)

Scales forecast by floating point value.

scale_to_test_date(test_datetime)

Scales forecast data by the fraction of the date.

spatial_counts([cartesian])

Integrates over magnitudes to return the spatial version of the forecast.

sum()

Sums over all of the forecast data

target_event_rates(target_catalog[, scale])

Generates a data set of target event rates given a target catalog.

to_dict()

+

Attributes


data

Contains the spatio-magnitude forecast as 2d numpy.ndarray.

event_count

Returns a sum of the forecast data

log

magnitudes

min_magnitude

Returns the lowest magnitude bin edge

num_mag_bins

num_nodes

polygons

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.load_ascii.html b/reference/generated/csep.core.forecasts.GriddedForecast.load_ascii.html new file mode 100644 index 00000000..88736a2c

csep.core.forecasts.GriddedForecast.load_ascii

+
+
+classmethod GriddedForecast.load_ascii(ascii_fname, start_date=None, end_date=None, name=None, swap_latlon=False)[source]
+

Reads Forecast file from CSEP1 ascii format.

+

The ASCII format used by CSEP1 testing centers. The file does not contain headers. The column format is listed here:

+

Lon_0, Lon_1, Lat_0, Lat_1, z_0, z_1, Mag_0, Mag_1, Rate, Flag

+

For the purposes of defining region objects and magnitude bins, use the Lat_0 and Lon_0 values along with Mag_0. We can assume that the magnitude bins are regularly spaced, which allows us to compute the bin widths (deltas).

+

The file is row-ordered so that magnitude bins vary fastest, followed by the spatial cells.

+
+
Parameters:

  • ascii_fname – file name of a CSEP forecast in .dat format

  • swap_latlon (bool) – if true, read forecast spatial cells as lat_0, lat_1, lon_0, lon_1
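For example, reading one of the CSEP1-format example files shipped with pyCSEP (the dates are illustrative):

    from datetime import datetime, timezone
    from csep.core.forecasts import GriddedForecast
    from csep.utils import datasets

    forecast = GriddedForecast.load_ascii(
        datasets.helmstetter_mainshock_fname,
        start_date=datetime(2006, 11, 12, tzinfo=timezone.utc),
        end_date=datetime(2011, 11, 12, tzinfo=timezone.utc),
        name='helmstetter_mainshock')
    print(forecast.event_count)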
diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.magnitude_counts.html b/reference/generated/csep.core.forecasts.GriddedForecast.magnitude_counts.html new file mode 100644 index 00000000..51d81b7f

csep.core.forecasts.GriddedForecast.magnitude_counts

+
+
+GriddedForecast.magnitude_counts()
+

Returns counts of events in magnitude bins

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.magnitudes.html b/reference/generated/csep.core.forecasts.GriddedForecast.magnitudes.html new file mode 100644 index 00000000..2e43bdc8

csep.core.forecasts.GriddedForecast.magnitudes

+
+
+property GriddedForecast.magnitudes
+
diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.min_magnitude.html b/reference/generated/csep.core.forecasts.GriddedForecast.min_magnitude.html new file mode 100644 index 00000000..646605b0

csep.core.forecasts.GriddedForecast.min_magnitude

+
+
+property GriddedForecast.min_magnitude
+

Returns the lowest magnitude bin edge

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.plot.html b/reference/generated/csep.core.forecasts.GriddedForecast.plot.html new file mode 100644 index 00000000..e7e12cf8

csep.core.forecasts.GriddedForecast.plot

+
+
+GriddedForecast.plot(ax=None, show=False, log=True, extent=None, set_global=False, plot_args=None)[source]
+

Plot gridded forecast according to plate-carree projection

+
+
Parameters:

  • show (bool) – if true, show the figure; this call is blocking

  • plot_args (optional/dict) – dictionary containing plotting arguments for making figures
+
Returns:
+

matplotlib.axes.Axes

+
+
Return type:
+

axes
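A usage sketch, assuming forecast is a loaded GriddedForecast (the output filename is arbitrary):

    import matplotlib.pyplot as plt

    ax = forecast.plot(log=True, show=False)
    plt.savefig('forecast_rates.png', dpi=150)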

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.scale_to_test_date.html b/reference/generated/csep.core.forecasts.GriddedForecast.scale_to_test_date.html new file mode 100644 index 00000000..d41bcaaa

csep.core.forecasts.GriddedForecast.scale_to_test_date

+
+
+GriddedForecast.scale_to_test_date(test_datetime)[source]
+

Scales forecast data by the fraction of the date.

+

Uses the concept of decimal years to keep track of leap years. See csep.utils.time_utils.decimal_year for details on the implementation. If the datetime is before the start_date or after the end_date, the forecast is scaled by unity.

+

The datetime objects must either both be timezone-aware (in UTC) or both be naive. This function will raise a TypeError according to the specifications of the datetime module if these conditions are not met.

+
+
Parameters:

  • test_datetime (datetime.datetime) – date to scale the forecast to

  • in_place (bool) – if false, creates a deep copy of the object and scales that instead
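For example, rescaling a time-independent forecast to an evaluation date inside its horizon (the date is illustrative):

    from datetime import datetime, timezone

    forecast.scale_to_test_date(datetime(2008, 11, 12, tzinfo=timezone.utc))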
diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.spatial_counts.html b/reference/generated/csep.core.forecasts.GriddedForecast.spatial_counts.html new file mode 100644 index 00000000..baed8ed6

csep.core.forecasts.GriddedForecast.spatial_counts

+
+
+GriddedForecast.spatial_counts(cartesian=False)
+

Integrates over magnitudes to return the spatial version of the forecast.

+
+
Parameters:
+

cartesian (bool) – if true, will return a 2d grid representing the bounding box of the forecast

+
+
Returns:
+

ndarray containing the count in each bin

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.sum.html b/reference/generated/csep.core.forecasts.GriddedForecast.sum.html new file mode 100644 index 00000000..ed1f2d67

csep.core.forecasts.GriddedForecast.sum

+
+
+GriddedForecast.sum()
+

Sums over all of the forecast data

diff --git a/reference/generated/csep.core.forecasts.GriddedForecast.target_event_rates.html b/reference/generated/csep.core.forecasts.GriddedForecast.target_event_rates.html new file mode 100644 index 00000000..f68d3e65

csep.core.forecasts.GriddedForecast.target_event_rates

+
+
+GriddedForecast.target_event_rates(target_catalog, scale=False)[source]
+

Generates a data set of target event rates given a target catalog.

+

The catalog should already be scaled to the same length as the forecast time horizon. Explicit checks for these cases are not conducted in this function.

+

If scale=True then the target event rates will be scaled down to the rates for one day. This choice of time period can be made without loss of generality. Please see Rhoades, D. A., D. Schorlemmer, M. C. Gerstenberger, A. Christophersen, J. D. Zechar, and M. Imoto (2011). Efficient testing of earthquake forecasting models, Acta Geophys. 59, 728–747.

+
+
Parameters:

  • target_catalog (csep.core.catalogs.AbstractBaseCatalog) – catalog containing the target events

  • scale (bool) – if true, rates will be scaled to one day
+
Returns:
+

target_event_rates, n_fore. The target event rates are the forecast rates in the bins containing each target event.

+
+
Return type:
+

out (tuple)

diff --git a/reference/generated/csep.core.poisson_evaluations.conditional_likelihood_test.html b/reference/generated/csep.core.poisson_evaluations.conditional_likelihood_test.html new file mode 100644 index 00000000..61b00e39

csep.core.poisson_evaluations.conditional_likelihood_test

+
+
+csep.core.poisson_evaluations.conditional_likelihood_test(gridded_forecast, observed_catalog, num_simulations=1000, seed=None, random_numbers=None, verbose=False)[source]
+

Performs the conditional likelihood test on Gridded Forecast using an Observed Catalog.

+

This test normalizes the forecast so the forecasted rates are consistent with the observations. This modification eliminates the strong impact that differences in the number distribution have on the forecasted rates.

+

Note: The forecast and the observations should be scaled to the same time period before calling this function. This increases transparency as no assumptions are being made about the length of the forecasts. This is particularly important for gridded forecasts that supply their forecasts as rates.

+
+
Parameters:

  • gridded_forecast – csep.core.forecasts.GriddedForecast

  • observed_catalog – csep.core.catalogs.Catalog

  • num_simulations (int) – number of simulations used to compute the quantile score

  • seed (int) – used for reproducibility and testing

  • random_numbers (numpy.ndarray) – random numbers used to override the random number generation; injection point for testing
+
Returns:
+

csep.core.evaluations.EvaluationResult

+
+
Return type:
+

evaluation_result
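A usage sketch, assuming a gridded forecast and an observed catalog scaled to the same period (the seed is fixed only for reproducibility):

    from csep.core import poisson_evaluations as poisson

    cl_result = poisson.conditional_likelihood_test(forecast, catalog,
                                                    num_simulations=1000,
                                                    seed=123456)
    print(cl_result)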

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.core.poisson_evaluations.likelihood_test.html b/reference/generated/csep.core.poisson_evaluations.likelihood_test.html new file mode 100644 index 00000000..4ac0c2a4 --- /dev/null +++ b/reference/generated/csep.core.poisson_evaluations.likelihood_test.html @@ -0,0 +1,235 @@ + + + + + + + + + csep.core.poisson_evaluations.likelihood_test — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.core.poisson_evaluations.likelihood_test

+
+
+csep.core.poisson_evaluations.likelihood_test(gridded_forecast, observed_catalog, num_simulations=1000, seed=None, random_numbers=None, verbose=False)[source]
+

Performs the likelihood test on Gridded Forecast using an Observed Catalog.

+

Note: The forecast and the observations should be scaled to the same time period before calling this function. This increases transparency as no assumptions are being made about the length of the forecasts. This is particularly important for gridded forecasts that supply their forecasts as rates.

+
+
Parameters:

  • gridded_forecast – csep.core.forecasts.GriddedForecast

  • observed_catalog – csep.core.catalogs.Catalog

  • num_simulations (int) – number of simulations used to compute the quantile score

  • seed (int) – used for reproducibility and testing

  • random_numbers (numpy.ndarray) – random numbers used to override the random number generation; injection point for testing
+
Returns:
+

csep.core.evaluations.EvaluationResult

+
+
Return type:
+

evaluation_result

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.core.poisson_evaluations.magnitude_test.html b/reference/generated/csep.core.poisson_evaluations.magnitude_test.html new file mode 100644 index 00000000..676d11f0 --- /dev/null +++ b/reference/generated/csep.core.poisson_evaluations.magnitude_test.html @@ -0,0 +1,234 @@ + + + + + + + + + csep.core.poisson_evaluations.magnitude_test — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.core.poisson_evaluations.magnitude_test

+
+
+csep.core.poisson_evaluations.magnitude_test(gridded_forecast, observed_catalog, num_simulations=1000, seed=None, random_numbers=None, verbose=False)[source]
+

Performs the Magnitude Test on a Gridded Forecast using an observed catalog.

+

Note: The forecast and the observations should be scaled to the same time period before calling this function. This increases transparency as no assumptions are being made about the length of the forecasts. This is particularly important for gridded forecasts that supply their forecasts as rates.

+
+
Parameters:

  • gridded_forecast – csep.core.forecasts.GriddedForecast

  • observed_catalog – csep.core.catalogs.Catalog

  • num_simulations (int) – number of simulations used to compute the quantile score

  • seed (int) – used for reproducibility and testing

  • random_numbers (numpy.ndarray) – random numbers used to override the random number generation; injection point for testing
+
Returns:
+

csep.core.evaluations.EvaluationResult

+
+
Return type:
+

evaluation_result

diff --git a/reference/generated/csep.core.poisson_evaluations.number_test.html b/reference/generated/csep.core.poisson_evaluations.number_test.html new file mode 100644 index 00000000..b2840d60

csep.core.poisson_evaluations.number_test

+
+
+csep.core.poisson_evaluations.number_test(gridded_forecast, observed_catalog)[source]
+

Computes the “N-test” on a gridded forecast.

author: @asim

+

Computes the Number (N) test for observed and forecasted catalogs. Both data sets are expected to be in terms of event counts. We find the total number of events in the observed catalog and the forecast, which are then employed to compute the probabilities of (i) at least the observed number of events (delta_1) and (ii) at most the observed number of events (delta_2), assuming the Poisson distribution.

+
+
Parameters:

  • observation – observed (gridded) seismicity (numpy array): an observation has to be the number of events in each bin; it has to be either zero or a positive integer only (no floating point)

  • forecast – forecast of a model (gridded) (numpy array): a forecast has to be in terms of the average number of events in each bin; it can be anything greater than zero
+
Returns:
+

(delta_1, delta_2)

+
+
Return type:
+

out (tuple)
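A usage sketch (forecast and catalog assumed loaded and scaled to the same period):

    from csep.core import poisson_evaluations as poisson

    n_result = poisson.number_test(forecast, catalog)
    print(n_result)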

diff --git a/reference/generated/csep.core.poisson_evaluations.paired_t_test.html b/reference/generated/csep.core.poisson_evaluations.paired_t_test.html new file mode 100644 index 00000000..274d5969

csep.core.poisson_evaluations.paired_t_test

+
+
+csep.core.poisson_evaluations.paired_t_test(forecast, benchmark_forecast, observed_catalog, alpha=0.05, scale=False)[source]
+

Computes the t-test for gridded earthquake forecasts.

+

This score is positively oriented, meaning that positive values of the information gain indicate that the forecast is performing better than the benchmark forecast.

+
+
Parameters:

  • forecast – csep.core.forecasts.GriddedForecast

  • benchmark_forecast – csep.core.forecasts.GriddedForecast

  • observed_catalog – csep.core.catalogs.Catalog
+
Returns:
+

csep.core.evaluations.EvaluationResult

+
+
Return type:
+

evaluation_result
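A comparison sketch, assuming two gridded forecasts defined on the same region and magnitude bins and an observed catalog covering the same period:

    from csep.core import poisson_evaluations as poisson

    t_result = poisson.paired_t_test(forecast_a, forecast_b, catalog)
    print(t_result)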

diff --git a/reference/generated/csep.core.poisson_evaluations.spatial_test.html b/reference/generated/csep.core.poisson_evaluations.spatial_test.html new file mode 100644 index 00000000..60c21180

csep.core.poisson_evaluations.spatial_test

+
+
+csep.core.poisson_evaluations.spatial_test(gridded_forecast, observed_catalog, num_simulations=1000, seed=None, random_numbers=None, verbose=False)[source]
+

Performs the Spatial Test on the Forecast using the Observed Catalogs.

+

Note: The forecast and the observations should be scaled to the same time period before calling this function. This increases transparency as no assumptions are being made about the length of the forecasts. This is particularly important for gridded forecasts that supply their forecasts as rates.

+
+
Parameters:

  • gridded_forecast – csep.core.forecasts.GriddedForecast

  • observed_catalog – csep.core.catalogs.Catalog

  • num_simulations (int) – number of simulations used to compute the quantile score

  • seed (int) – used for reproducibility and testing

  • random_numbers (numpy.ndarray) – random numbers used to override the random number generation; injection point for testing
+
Returns:
+

csep.core.evaluations.EvaluationResult

+
+
Return type:
+

evaluation_result

diff --git a/reference/generated/csep.core.poisson_evaluations.w_test.html b/reference/generated/csep.core.poisson_evaluations.w_test.html new file mode 100644 index 00000000..c6bda42f

csep.core.poisson_evaluations.w_test

+
+
+csep.core.poisson_evaluations.w_test(gridded_forecast1, gridded_forecast2, observed_catalog, scale=False)[source]
+

Calculate the Single Sample Wilcoxon signed-rank test between two gridded forecasts.

+

This test evaluates the null hypothesis that the median of the sample X1(i) − X2(i) is equal to (N1 − N2) / N_obs, where N1 and N2 are the sums of the expected values of forecast 1 and forecast 2, respectively.

+

The Wilcoxon signed-rank test evaluates the null hypothesis that the differences of Xi and Yi come from the same distribution. In particular, it tests whether the distribution of the differences is symmetric around a given mean.

+


+
+
Parameters:

  • gridded_forecast1 – forecast of model 1 (gridded) (numpy array): a forecast has to be in terms of the average number of events in each bin; it can be anything greater than zero

  • gridded_forecast2 – forecast of model 2 (gridded) (numpy array): a forecast has to be in terms of the average number of events in each bin; it can be anything greater than zero

  • observation – observed (gridded) seismicity (numpy array): an observation has to be the observed seismicity in each bin; it has to be either zero or a positive integer only (no floating point)
+
+
+
Returns:

out: csep.core.evaluations.EvaluationResult

diff --git a/reference/generated/csep.core.regions.CartesianGrid2D.html b/reference/generated/csep.core.regions.CartesianGrid2D.html new file mode 100644 index 00000000..6f8415ff

csep.core.regions.CartesianGrid2D

+
+
+class csep.core.regions.CartesianGrid2D(polygons, dh, name='cartesian2d', mask=None)[source]
+

Represents a 2D cartesian gridded region.

+

The class provides functions to query the indexed 2D Cartesian grid, and maintains a mapping between spatial coordinates defined by polygons and the index into the polygon array.

+

Custom regions can be easily created by using the from_polygon classmethod. This function accepts an arbitrary closed polygon and returns a CartesianGrid2D in which only the points inside the polygon are valid.
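A construction sketch using the from_origins classmethod listed under Methods below (the grid extent is illustrative):

    import numpy
    from csep.core.regions import CartesianGrid2D

    lons = numpy.arange(-120.0, -118.0, 0.1)
    lats = numpy.arange(33.0, 35.0, 0.1)
    grid_lons, grid_lats = numpy.meshgrid(lons, lats)
    origins = numpy.column_stack([grid_lons.ravel(), grid_lats.ravel()])
    region = CartesianGrid2D.from_origins(origins, dh=0.1)
    print(region.num_nodes)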

+
+
+__init__(polygons, dh, name='cartesian2d', mask=None)[source]
+

Methods


__init__(polygons, dh[, name, mask])

from_dict(adict)

Creates a region object from a dictionary

from_origins(origins[, dh, magnitudes, name])

Creates instance of class from 2d numpy.array of lon/lat origins.

get_bbox()

Returns rectangular bounding box around region.

get_cartesian(data)

Returns 2d ndrray representation of the data set, corresponding to the bounding box.

get_cell_area()

Compute the area of each polygon in sq. km.

get_index_of(lons, lats)

Returns the index of lons, lats in self.polygons

get_location_of(indices)

Returns the polygon associated with the index idx.

get_masked(lons, lats)

Returns a bool array that is true where lons and lats are not included in the spatial region.

midpoints()

Returns midpoints of rectangular polygons in region

origins()

Returns origins of rectangular polygons in region

tight_bbox([precision])

to_dict()

+

Attributes


num_nodes

Number of polygons in region

diff --git a/reference/generated/csep.core.regions.california_relm_region.html b/reference/generated/csep.core.regions.california_relm_region.html new file mode 100644 index 00000000..bb154032

csep.core.regions.california_relm_region

+
+
+csep.core.regions.california_relm_region(dh_scale=1, magnitudes=None, name='relm-california', use_midpoint=True)[source]
+

Returns class representing California testing region.

+

This region can be used to create gridded datasets for earthquake forecasts. The XML file appears to use the midpoint, and the .dat file uses the origin in the “lower left” corner.

+
+
Parameters:
+

dh_scale – can resample this grid by factors of 2

+
+
Returns:
+

csep.core.spatial.CartesianGrid2D

+
+
Raises:
+

ValueError – dh_scale must be a factor of two

diff --git a/reference/generated/csep.core.regions.create_space_magnitude_region.html b/reference/generated/csep.core.regions.create_space_magnitude_region.html new file mode 100644 index 00000000..c13c73bd

csep.core.regions.create_space_magnitude_region

+
+
+csep.core.regions.create_space_magnitude_region(region, magnitudes)[source]
+

Simple wrapper to create space-magnitude region

diff --git a/reference/generated/csep.core.regions.generate_aftershock_region.html b/reference/generated/csep.core.regions.generate_aftershock_region.html new file mode 100644 index 00000000..efeaf540

csep.core.regions.generate_aftershock_region

+
+
+csep.core.regions.generate_aftershock_region(mainshock_mw, mainshock_lon, mainshock_lat, num_radii=3, region=<function california_relm_region>, **kwargs)[source]
+

Creates a spatial region around a given epicenter

+

The method uses the Wells and Coppersmith scaling relationship to determine the average fault length and creates a circular region centered at (mainshock_lon, mainshock_lat) with a radius of num_radii fault lengths.

+
+
Parameters:

  • mainshock_mw (float) – magnitude of mainshock

  • mainshock_lon (float) – epicentral longitude

  • mainshock_lat (float) – epicentral latitude

  • num_radii (float/int) – number of radii of the circular region

  • region (callable) – returns csep.utils.spatial.CartesianGrid2D

  • **kwargs (dict) – passed to region callable
+
Returns:
+

csep.utils.spatial.CartesianGrid2D
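A usage sketch (the mainshock parameters are illustrative):

    from csep.core.regions import generate_aftershock_region

    aftershock_region = generate_aftershock_region(mainshock_mw=7.1,
                                                   mainshock_lon=-117.599,
                                                   mainshock_lat=35.770)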

diff --git a/reference/generated/csep.core.regions.global_region.html b/reference/generated/csep.core.regions.global_region.html new file mode 100644 index 00000000..98e3f22f

csep.core.regions.global_region

+
+
+csep.core.regions.global_region(dh=0.1, name='global', magnitudes=None)[source]
+

Creates a global region used for evaluating gridded forecasts on the global scale.

+

The gridded region corresponds to a uniform global grid with spacing dh degrees.

+
+
Parameters:
+

dh – grid spacing (default 0.1)

+
+
Return type:
+

csep.utils.CartesianGrid2D

diff --git a/reference/generated/csep.core.regions.increase_grid_resolution.html b/reference/generated/csep.core.regions.increase_grid_resolution.html new file mode 100644 index 00000000..5ad092ed

csep.core.regions.increase_grid_resolution

+
+
+csep.core.regions.increase_grid_resolution(points, dh, factor)[source]
+

Takes a set of origin points and returns a new set with higher grid resolution. Assumes the origin point is in the lower-left corner. The new dh is dh / factor. This implementation requires that the decimation factor be a multiple of 2.

+
+
Parameters:

  • points – list of (lon, lat) tuples

  • dh – old grid spacing

  • factor – factor by which to reduce the grid spacing
+
Returns:
+

list of (lon, lat) tuples with spacing dh / factor

+
+
Return type:
+

points

diff --git a/reference/generated/csep.core.regions.italy_csep_region.html b/reference/generated/csep.core.regions.italy_csep_region.html new file mode 100644 index 00000000..fe0ae921

csep.core.regions.italy_csep_region

+
+
+csep.core.regions.italy_csep_region(dh_scale=1, magnitudes=None, name='csep-italy', use_midpoint=True)[source]
+

Returns class representing Italian testing region.

+

This region can be used to create gridded datasets for earthquake forecasts. The region is defined by the file ‘forecast.italy.M5.xml’ and contains a spatially gridded region with 0.1° x 0.1° cells.

+
+
Parameters:

  • dh_scale – can resample this grid by factors of 2

  • magnitudes (array-like) – bin edges for magnitudes; if provided, will be bound to the output region class. This argument provides a short-cut for creating space-magnitude regions.

  • name (str) – human-readable identifier given to the region

  • use_midpoint (bool) – if true, treat values in the file as midpoints. Default = true.
+
Returns:
+

csep.core.spatial.CartesianGrid2D

+
+
Raises:
+

ValueError – dh_scale must be a factor of two

diff --git a/reference/generated/csep.core.regions.magnitude_bins.html b/reference/generated/csep.core.regions.magnitude_bins.html new file mode 100644 index 00000000..9e5d4610

csep.core.regions.magnitude_bins

+
+
+csep.core.regions.magnitude_bins(start_magnitude, end_magnitude, dmw)[source]
+

Returns array holding magnitude bin edges.

+

The output from this function is monotonically increasing and equally spaced bin edges that can represent magnitude +bins.

+
+
+
Args:

start_magnitude (float)
end_magnitude (float)
dmw (float) – magnitude spacing

+
+
+
+
+
Returns:
+

bin_edges (numpy.ndarray)
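For example:

    from csep.core.regions import magnitude_bins

    mag_edges = magnitude_bins(4.95, 8.95, 0.1)  # edges 4.95, 5.05, ..., 8.95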

diff --git a/reference/generated/csep.core.regions.masked_region.html b/reference/generated/csep.core.regions.masked_region.html new file mode 100644 index 00000000..b32cdccb

csep.core.regions.masked_region

+
+
+csep.core.regions.masked_region(region, polygon)[source]
+

Build a new region based on the coordinates in the polygon.

+
+
Parameters:

  • region – CartesianGrid2D object

  • polygon – Polygon object
+
Returns:
+

CartesianGrid2D object

+
+
Return type:
+

new_region

diff --git a/reference/generated/csep.core.regions.parse_csep_template.html b/reference/generated/csep.core.regions.parse_csep_template.html new file mode 100644 index 00000000..addeea7f

csep.core.regions.parse_csep_template

+
+
+csep.core.regions.parse_csep_template(xml_filename)[source]
+

Reads a CSEP XML template file and returns the lat/lon values for the forecast.

+
+
Returns:
+

list of tuples where tuple is (lon, lat)

diff --git a/reference/generated/csep.load_catalog.html b/reference/generated/csep.load_catalog.html new file mode 100644 index 00000000..3587c00e

csep.load_catalog

+
+
+csep.load_catalog(filename, type='csep-csv', format='native', loader=None, apply_filters=False, **kwargs)[source]
+

General function to load single catalog

+

See corresponding class documentation for additional parameters.

+
+
Parameters:

  • type (str) – (‘ucerf3’, ‘csep-csv’, ‘zmap’, ‘jma-csv’, ‘ndk’); default is ‘csep-csv’

  • format (str) – (‘native’, ‘csep’) determines whether the catalog should be converted into the csep formatted catalog or kept as native

  • apply_filters (bool) – if true, will apply filters and the spatial filter to the catalog; time-varying magnitude completeness will still need to be applied separately. The filters kwarg should be included; see the catalog documentation for more details.
+
+

Returns (AbstractBaseCatalog)
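A usage sketch (the file path and filter string are illustrative):

    import csep

    catalog = csep.load_catalog('my_catalog.csv',
                                type='csep-csv',
                                apply_filters=True,
                                filters=['magnitude >= 3.95'])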

diff --git a/reference/generated/csep.load_catalog_forecast.html b/reference/generated/csep.load_catalog_forecast.html new file mode 100644 index 00000000..f428932a

csep.load_catalog_forecast

+
+
+csep.load_catalog_forecast(fname, catalog_loader=None, format='native', type='ascii', **kwargs)[source]
+

General function to handle loading catalog forecasts.

+

Currently, just a simple wrapper, but can contain more complex logic in the future.

+
+
Parameters:

  • fname (str) – pathname to the forecast file or directory containing the forecast files

  • catalog_loader (func) – callable that can load catalogs; see load_stochastic_event_sets

  • format (str) – either ‘native’ or ‘csep’; if ‘csep’, catalogs will be converted into the csep catalog format

  • type (str) – either ‘ucerf3’ or ‘csep’; determines the catalog format of the forecast. If a loader is provided, this parameter is ignored.

  • **kwargs – other keyword arguments passed to csep.core.forecasts.CatalogForecast
+
Returns:
+

csep.core.forecasts.CatalogForecast

diff --git a/reference/generated/csep.load_gridded_forecast.html b/reference/generated/csep.load_gridded_forecast.html new file mode 100644 index 00000000..c8076ed5

csep.load_gridded_forecast

+
+
+csep.load_gridded_forecast(fname, loader=None, **kwargs)[source]
+

Loads grid based forecast from hard-disk.

+

The function loads the forecast at the filepath defined by fname. The function attempts to infer the file format from the extension of the filepath. Optionally, if a loader function is provided, that function will be used to load the forecast. The loader function should return a csep.core.forecasts.GriddedForecast class with the region and magnitude members correctly assigned.

+
+
File extensions:

.dat -> CSEP ascii format
.xml -> CSEP xml format (not yet implemented)
.h5 -> CSEP hdf5 format (not yet implemented)
.bin -> CSEP binary format (not yet implemented)

+
+
+
+
Parameters:

  • fname (str) – path of the grid-based forecast

  • loader (func) – function to load a forecast in a bespoke format; it needs to return csep.core.forecasts.GriddedForecast, its first argument is required and is the filename of the forecast to load, and it is called as loader(fname, **kwargs)

  • **kwargs – passed into the loader function
+
+
+
Throws:

FileNotFoundError: when the file extension is not known and a loader is not provided.
AttributeError: if loader is provided and is not callable.

+
+
+
+
Returns:
+

csep.core.forecasts.GriddedForecast
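A usage sketch (the .dat path and dates are placeholders; the keyword arguments shown are forwarded to the format reader):

    from datetime import datetime, timezone
    import csep

    forecast = csep.load_gridded_forecast(
        'my_forecast.dat',
        start_date=datetime(2010, 1, 1, tzinfo=timezone.utc),
        end_date=datetime(2015, 1, 1, tzinfo=timezone.utc),
        name='my-model')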

diff --git a/reference/generated/csep.load_stochastic_event_sets.html b/reference/generated/csep.load_stochastic_event_sets.html new file mode 100644 index 00000000..eed6b7a9

csep.load_stochastic_event_sets

+
+
+csep.load_stochastic_event_sets(filename, type='csv', format='native', **kwargs)[source]
+

General function to load stochastic event sets

+

This function returns a generator to iterate through a collection of catalogs. To load a forecast and include metadata, use csep.load_catalog_forecast().

+
+
Parameters:

  • filename (str) – name of the file or directory where the stochastic event sets live

  • type (str) – either ‘ucerf3’ or ‘csep’, depending on the type of catalog to load

  • format (str) – (‘csep’ or ‘native’); if ‘native’, catalogs are not converted to csep format

  • kwargs (dict) – see the documentation of the class corresponding to the type you selected for the kwargs options
+
Returns:
+

AbstractBaseCatalog

+
+
Return type:
+

(generator)

diff --git a/reference/generated/csep.query_bsi.html b/reference/generated/csep.query_bsi.html new file mode 100644 index 00000000..9806bdfd

csep.query_bsi

+
+
+csep.query_bsi(start_time, end_time, min_magnitude=2.5, min_latitude=32.0, max_latitude=50.0, min_longitude=2.0, max_longitude=21.0, max_depth=1000, verbose=True, apply_filters=False, **kwargs)[source]
+

Access BSI catalog through web service

+
+
Parameters:

  • start_time – datetime object for the start of the catalog

  • end_time – datetime object for the end of the catalog

  • min_magnitude – minimum magnitude to query

  • min_latitude – min latitude of bounding box

  • max_latitude – max latitude of bounding box

  • min_longitude – min longitude of bounding box

  • max_longitude – max longitude of bounding box

  • max_depth – maximum depth of the bounding box

  • verbose (bool) – print catalog summary statistics
+
Returns:
+

csep.core.catalogs.CSEPCatalog

diff --git a/reference/generated/csep.query_comcat.html b/reference/generated/csep.query_comcat.html new file mode 100644 index 00000000..3ff324b0

csep.query_comcat

+
+
+csep.query_comcat(start_time, end_time, min_magnitude=2.5, min_latitude=31.5, max_latitude=43.0, min_longitude=-125.4, max_longitude=-113.1, max_depth=1000, verbose=True, apply_filters=False, **kwargs)[source]
+

Access Comcat catalog through web service

+
+
Parameters:

  • start_time – datetime object for the start of the catalog

  • end_time – datetime object for the end of the catalog

  • min_magnitude – minimum magnitude to query

  • min_latitude – min latitude of bounding box

  • max_latitude – max latitude of bounding box

  • min_longitude – min longitude of bounding box

  • max_longitude – max longitude of bounding box

  • max_depth – maximum depth of the bounding box

  • verbose (bool) – print catalog summary statistics
+
Returns:
+

csep.core.catalogs.CSEPCatalog
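A usage sketch (requires an internet connection; the time window and magnitude threshold are illustrative):

    from datetime import datetime, timezone
    import csep

    catalog = csep.query_comcat(datetime(2019, 7, 1, tzinfo=timezone.utc),
                                datetime(2019, 8, 1, tzinfo=timezone.utc),
                                min_magnitude=4.95)
    print(catalog)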

diff --git a/reference/generated/csep.utils.basic_types.AdaptiveHistogram.html b/reference/generated/csep.utils.basic_types.AdaptiveHistogram.html new file mode 100644 index 00000000..8a5153ca

csep.utils.basic_types.AdaptiveHistogram

+
+
+class csep.utils.basic_types.AdaptiveHistogram(dh=0.1, anchor=0.0)[source]
+

Allows us to work with data that need to be discretized and aggregated even though the global min/max values are not known beforehand. Data are discretized according to the dh and anchor positions and their extreme values. If necessary, the range of the bin edges is expanded to accommodate new data.

+

Using this function incurs some additional overhead compared to simply binning and combining.

+
+
+__init__(dh=0.1, anchor=0.0)[source]
+
+ +

Methods

+ + + + + + + + + +

__init__([dh, anchor])

add(data)

+

Attributes

+ + + + + + +

rec_dh

+
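A minimal sketch of how the class is intended to be used via the add() method listed above (the data values are illustrative):

from csep.utils.basic_types import AdaptiveHistogram

hist = AdaptiveHistogram(dh=0.1, anchor=0.0)
hist.add([0.05, 0.2, 1.3])   # bin edges are created to span these values
hist.add([-0.4, 2.7])        # new extremes expand the bin_edges as needed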
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.calc.bin1d_vec.html b/reference/generated/csep.utils.calc.bin1d_vec.html new file mode 100644 index 00000000..3e604063 --- /dev/null +++ b/reference/generated/csep.utils.calc.bin1d_vec.html @@ -0,0 +1,228 @@ + + + + + + + + + csep.utils.calc.bin1d_vec — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.calc.bin1d_vec

+
+
+csep.utils.calc.bin1d_vec(p, bins, tol=None, right_continuous=False)[source]
+

Efficient implementation of a binning routine on a 1D Cartesian grid.

+

Returns the indices of the points into bins. Bins are inclusive on the lower bound and exclusive on the upper bound. In the case where a point does not fall within the bins, a -1 will be returned. The last bin extends to infinity when right_continuous is set to true.

+
+
Parameters:
+
    +
  • p (array-like) – Point(s) to be placed into bins

  • +
  • bins (array-like) – bins to consider for binning; must be monotonically increasing

  • +
  • right_continuous (bool) – if true, consider last bin extending to infinity

  • +
+
+
Returns:
+

indices of the points within the grid

+
+
Return type:
+

idx (array-like)

+
+
Raises:
+

ValueError

+
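A sketch of the binning behavior described above (the exact outputs depend on the implementation):

import numpy
from csep.utils.calc import bin1d_vec

bins = numpy.arange(0.0, 1.0, 0.1)            # monotonically increasing bin edges
idx = bin1d_vec(numpy.array([0.05, 0.51, 1.5]), bins)
print(idx)                                    # expected: [0, 5, -1]; 1.5 lies outside the bins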
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.calc.discretize.html b/reference/generated/csep.utils.calc.discretize.html new file mode 100644 index 00000000..782a1f93 --- /dev/null +++ b/reference/generated/csep.utils.calc.discretize.html @@ -0,0 +1,210 @@ + + + + + + + + + csep.utils.calc.discretize — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.calc.discretize

+
+
+csep.utils.calc.discretize(data, bin_edges, right_continuous=False)[source]
+

Returns an array of len(bin_edges) consisting of the discretized values from each bin. Instead of returning the counts of each bin, this will return an array with values modified such that any value within bin_edges[0] <= x_new < bin_edges[1] ==> bin_edges[0].

+

This implementation forces you to define bin_edges that contain the data.
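A sketch of the mapping described above, assuming bin_edges that cover the data:

import numpy
from csep.utils.calc import discretize

data = numpy.array([0.12, 0.48, 0.97])
bin_edges = numpy.arange(0.0, 1.1, 0.1)
print(discretize(data, bin_edges))            # values snap down to their bin edge, e.g. [0.1 0.4 0.9]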

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.calc.find_nearest.html b/reference/generated/csep.utils.calc.find_nearest.html new file mode 100644 index 00000000..96e45a6e --- /dev/null +++ b/reference/generated/csep.utils.calc.find_nearest.html @@ -0,0 +1,207 @@ + + + + + + + + + csep.utils.calc.find_nearest — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.calc.find_nearest

+
+
+csep.utils.calc.find_nearest(array, value)[source]
+

Returns the value from array that is less than the value specified.

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.calc.func_inverse.html b/reference/generated/csep.utils.calc.func_inverse.html new file mode 100644 index 00000000..6c4236a9 --- /dev/null +++ b/reference/generated/csep.utils.calc.func_inverse.html @@ -0,0 +1,207 @@ + + + + + + + + + csep.utils.calc.func_inverse — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.calc.func_inverse

+
+
+csep.utils.calc.func_inverse(x, y, val, kind='nearest', **kwargs)[source]
+

Returns the value of a function based on interpolation.

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.calc.nearest_index.html b/reference/generated/csep.utils.calc.nearest_index.html new file mode 100644 index 00000000..792b2b57 --- /dev/null +++ b/reference/generated/csep.utils.calc.nearest_index.html @@ -0,0 +1,207 @@ + + + + + + + + + csep.utils.calc.nearest_index — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.calc.nearest_index

+
+
+csep.utils.calc.nearest_index(array, value)[source]
+

Returns the index from array that is less than the value specified.

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.comcat.get_event_by_id.html b/reference/generated/csep.utils.comcat.get_event_by_id.html new file mode 100644 index 00000000..62932f6c --- /dev/null +++ b/reference/generated/csep.utils.comcat.get_event_by_id.html @@ -0,0 +1,222 @@ + + + + + + + + + csep.utils.comcat.get_event_by_id — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.comcat.get_event_by_id

+
+
+csep.utils.comcat.get_event_by_id(eventid, catalog=None, includedeleted=False, includesuperseded=False, host=None)[source]
+

Search the ComCat database for an event matching the input event id. This search function is a wrapper around the ComCat Web API described here: https://earthquake.usgs.gov/fdsnws/event/1/ Some of the search parameters described there are NOT implemented here, usually because they do not apply to GeoJSON search results, which we are getting here and parsing into Python data structures. This function returns a DetailEvent object, described elsewhere in this package.

+
+
Parameters:
+
    +
  • eventid (str) – Select a specific event by ID; event identifiers are data center specific.

  • +
  • includesuperseded (bool) – Specify if superseded products should be included. This also includes all +deleted products, and is mutually exclusive to the includedeleted parameter.

  • +
  • includedeleted (bool) – Specify if deleted products should be included.

  • +
  • host (str) – Replace default ComCat host (earthquake.usgs.gov) with a custom host.

  • +
+
+
+

Returns: DetailEvent object.
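A usage sketch (the event id below is illustrative; any valid ComCat event id works):

from csep.utils.comcat import get_event_by_id

detail = get_event_by_id('ci38457511')        # illustrative ComCat event id
print(detail)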

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.comcat.search.html b/reference/generated/csep.utils.comcat.search.html new file mode 100644 index 00000000..f0950203 --- /dev/null +++ b/reference/generated/csep.utils.comcat.search.html @@ -0,0 +1,307 @@ + + + + + + + + + csep.utils.comcat.search — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + + + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.add_labels_for_publication.html b/reference/generated/csep.utils.plots.add_labels_for_publication.html new file mode 100644 index 00000000..a39b00c1 --- /dev/null +++ b/reference/generated/csep.utils.plots.add_labels_for_publication.html @@ -0,0 +1,220 @@ + + + + + + + + + csep.utils.plots.add_labels_for_publication — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.add_labels_for_publication

+
+
+csep.utils.plots.add_labels_for_publication(figure, style='bssa', labelsize=16)[source]
+

Adds publication labels to the outside of a figure.

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_basemap.html b/reference/generated/csep.utils.plots.plot_basemap.html new file mode 100644 index 00000000..47d28417 --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_basemap.html @@ -0,0 +1,246 @@ + + + + + + + + + csep.utils.plots.plot_basemap — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_basemap

+
+
+csep.utils.plots.plot_basemap(basemap, extent, ax=None, figsize=None, coastline=True, borders=False, tile_scaling='auto', set_global=False, projection=<Projected CRS: Equidistant Cylindrical (WGS 84)>, apprx=False, central_latitude=0.0, linecolor='black', linewidth=True, grid=False, grid_labels=False, grid_fontsize=None, show=False)[source]
+

Wrapper function for multiple cartopy base plots, including access to standard raster webservices

+
+
Parameters:
+
    +
  • basemap (str) – Possible values are: stock_img, stamen_terrain, stamen_terrain-background, google-satellite, ESRI_terrain, ESRI_imagery, ESRI_relief, ESRI_topo, or a webservice link (see examples in csep.utils.plots._get_basemap()). Default is None

  • +
  • extent (list) – [lon_min, lon_max, lat_min, lat_max]

  • +
  • ax (matplotlib.pyplot.ax) – Previously defined ax object

  • +
  • figsize (tuple) – If no ax is provided, a tuple of floats can be provided to define figure size

  • +
  • coastline (bool) – Flag to plot coastline. default True,

  • +
  • borders (bool) – Flag to plot country borders. default False,

  • +
  • tile_scaling (str/int) – Zoom level (1-12) of the basemap tiles. If ‘auto’, is automatically derived from extent

  • +
  • set_global (bool) – Display the complete globe as basemap

  • +
  • projection (cartopy.crs.Projection) – Projection to be used in the basemap

  • +
  • apprx (bool) – If true, approximates transformation by setting aspect ratio of axes based on middle latitude

  • +
  • central_latitude (float) – average latitude from plotting region

  • +
  • linecolor (str) – Color of borders and coast lines. default ‘black’,

  • +
  • linewidth (float) – Line width of borders and coast lines. default 1.5,

  • +
  • grid (bool) – Draws a grid in the basemap

  • +
  • grid_labels (bool) – Annotate grid values

  • +
  • grid_fontsize (float) – Font size of the grid x and y labels

  • +
  • show (bool) – Flag if the figure is displayed

  • +
+
+
Returns:
+

matplotlib.pyplot.ax object
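A minimal sketch using a handful of the parameters above (the extent values are illustrative):

import matplotlib.pyplot as plt
from csep.utils.plots import plot_basemap

extent = [-125.4, -113.1, 31.5, 43.0]         # [lon_min, lon_max, lat_min, lat_max]
ax = plot_basemap('stock_img', extent, coastline=True, borders=True, grid=True)
plt.show()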

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_calibration_test.html b/reference/generated/csep.utils.plots.plot_calibration_test.html new file mode 100644 index 00000000..98ead129 --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_calibration_test.html @@ -0,0 +1,219 @@ + + + + + + + + + csep.utils.plots.plot_calibration_test — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_calibration_test

+
+
+csep.utils.plots.plot_calibration_test(evaluation_result, axes=None, plot_args=None, show=False)[source]
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_catalog.html b/reference/generated/csep.utils.plots.plot_catalog.html new file mode 100644 index 00000000..3d28141e --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_catalog.html @@ -0,0 +1,383 @@ + + + + + + + + + csep.utils.plots.plot_catalog — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_catalog

+
+
+csep.utils.plots.plot_catalog(catalog, ax=None, show=False, extent=None, set_global=False, plot_args=None)[source]
+

Plot catalog in a region

+
+
Parameters:
+
    +
  • catalog (CSEPCatalog) – Catalog object to be plotted

  • +
  • ax (matplotlib.pyplot.ax) – Previously defined ax object (e.g from plot_spatial_dataset)

  • +
  • show (bool) – Flag if the figure is displayed

  • +
  • extent (list) – default 1.05 * catalog.region.get_bbox()

  • +
  • set_global (bool) – Display the complete globe as basemap

  • +
  • plot_args (dict) –

    matplotlib and cartopy plot arguments. The dictionary keys are str, whose items can be:

    +
      +
    • +
      figsize:
      +

      tuple/list - default [6.4, 4.8]

      +
      +
      +
    • +
    • +
      title:
      +

      str - default catalog.name

      +
      +
      +
    • +
    • +
      title_size:
      +

      int - default 10

      +
      +
      +
    • +
    • +
      filename:
      +

      str - File to save figure. default None

      +
      +
      +
    • +
    • +
      projection:
      +

      cartopy.crs.Projection - default cartopy.crs.PlateCarree. Note: this can be +‘fast’ to apply an approximate transformation of axes.

      +
      +
      +
    • +
    • +
      basemap:
      +

      str/None. Possible values are: stock_img, stamen_terrain, stamen_terrain-background, google-satellite, ESRI_terrain, ESRI_imagery, ESRI_relief, ESRI_topo, or a webservice link. Default is None

      +
      +
      +
    • +
    • +
      coastline:
      +

      bool - Flag to plot coastline. default True,

      +
      +
      +
    • +
    • +
      grid:
      +

      bool - default True

      +
      +
      +
    • +
    • +
      grid_labels:
      +

      bool - default True

      +
      +
      +
    • +
    • +
      grid_fontsize:
      +

      float - default 10.0

      +
      +
      +
    • +
    • +
      marker:
      +

      str - Marker type

      +
      +
      +
    • +
    • +
      markersize:
      +

      float - Constant size for all earthquakes

      +
      +
      +
    • +
    • +
      markercolor:
      +

      str - Color for all earthquakes

      +
      +
      +
    • +
    • +
      borders:
      +

      bool - Flag to plot country borders. default False,

      +
      +
      +
    • +
    • +
      region_border:
      +

      bool - Flag to plot the catalog region border. default True,

      +
      +
      +
    • +
    • +
      alpha:
      +

      float - Transparency for the earthquakes scatter

      +
      +
      +
    • +
    • +
      mag_scale:
      +

      float - Scaling of the scatter

      +
      +
      +
    • +
    • +
      legend:
      +

      bool - Flag to display the legend box

      +
      +
      +
    • +
    • +
      legend_loc:
      +

      int/str - Position of the legend

      +
      +
      +
    • +
    • +
      mag_ticks:
      +

      list - Ticks to display in the legend

      +
      +
      +
    • +
    • +
      labelspacing:
      +

      int - Separation between legend ticks

      +
      +
      +
    • +
    • +
      tile_scaling:
      +

      str/int. Zoom level (1-12) of the basemap tiles. If ‘auto’, is automatically derived from extent

      +
      +
      +
    • +
    • +
      linewidth:
      +

      float - Line width of borders and coast lines. default 1.5,

      +
      +
      +
    • +
    • +
      linecolor:
      +

      str - Color of borders and coast lines. default ‘black’,

      +
      +
      +
    • +
    +

  • +
+
+
Returns:
+

matplotlib.pyplot.ax object
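A minimal sketch, assuming a catalog object obtained earlier (e.g., from csep.query_comcat); the plot_args keys come from the list above:

from csep.utils.plots import plot_catalog

plot_args = {'title': 'Observed seismicity', 'markercolor': 'blue', 'legend': True}
ax = plot_catalog(catalog, show=True, plot_args=plot_args)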

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_comparison_test.html b/reference/generated/csep.utils.plots.plot_comparison_test.html new file mode 100644 index 00000000..8d48ddaf --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_comparison_test.html @@ -0,0 +1,220 @@ + + + + + + + + + csep.utils.plots.plot_comparison_test — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_comparison_test

+
+
+csep.utils.plots.plot_comparison_test(results_t, results_w=None, axes=None, plot_args=None)[source]
+

Plots list of T-Test (and W-Test) Results

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_cumulative_events_versus_time.html b/reference/generated/csep.utils.plots.plot_cumulative_events_versus_time.html new file mode 100644 index 00000000..66a2d63e --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_cumulative_events_versus_time.html @@ -0,0 +1,236 @@ + + + + + + + + + csep.utils.plots.plot_cumulative_events_versus_time — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_cumulative_events_versus_time

+
+
+csep.utils.plots.plot_cumulative_events_versus_time(stochastic_event_sets, observation, show=False, plot_args=None)[source]
+

Plots cumulative event counts versus time, performing the statistics on numpy arrays without using pandas data frames.

+
+
Parameters:
+
    +
  • stochastic_event_sets

  • +
  • observation

  • +
  • show

  • +
  • plot_args

  • +
+
+
Returns:
+

matplotlib.Axes

+
+
Return type:
+

ax

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_distribution_test.html b/reference/generated/csep.utils.plots.plot_distribution_test.html new file mode 100644 index 00000000..bdaa2b91 --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_distribution_test.html @@ -0,0 +1,232 @@ + + + + + + + + + csep.utils.plots.plot_distribution_test — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_distribution_test

+
+
+csep.utils.plots.plot_distribution_test(evaluation_result, axes=None, show=True, plot_args=None)[source]
+

Takes result from evaluation and generates a specific histogram plot to show the results of the statistical evaluation for distribution-based tests (e.g., the M-test).

+
+
Parameters:
+

evaluation_result – object-like var that implements the interface of the above EvaluationResult

+
+
Returns:
+

can be used to modify the figure

+
+
Return type:
+

ax (matplotlib.axes.Axes)

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_ecdf.html b/reference/generated/csep.utils.plots.plot_ecdf.html new file mode 100644 index 00000000..e6918140 --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_ecdf.html @@ -0,0 +1,220 @@ + + + + + + + + + csep.utils.plots.plot_ecdf — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_ecdf

+
+
+csep.utils.plots.plot_ecdf(x, ecdf, axes=None, xv=None, show=False, plot_args=None)[source]
+

Plots empirical cumulative distribution function.

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_histogram.html b/reference/generated/csep.utils.plots.plot_histogram.html new file mode 100644 index 00000000..6f87e21b --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_histogram.html @@ -0,0 +1,247 @@ + + + + + + + + + csep.utils.plots.plot_histogram — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_histogram

+
+
+csep.utils.plots.plot_histogram(simulated, observation, bins='fd', percentile=None, show=False, axes=None, catalog=None, plot_args=None)[source]
+

Plots histogram of single statistic for stochastic event sets and observations. The function will behave differently depending on the inputs.

+

Simulated should always be either a list or numpy.array with one value per catalog in the stochastic event set. Observation can be either a scalar or a numpy.array/list. If observation is a scalar, a vertical line will be plotted; if observation is iterable, a second histogram will be plotted.

+

This allows for comparisons to be made against catalogs where there are multiple values e.g., magnitude, and single values +e.g., event count.

+

If an axis handle is included, additional function calls will only add extra simulations; observations will not be plotted. Since this function returns an axes handle, any extra modifications to the figure can be made using that.

+
+
Parameters:
+
    +
  • simulated (numpy.arrays) – numpy.array like representation of statistics computed from catalogs.

  • +
  • observation (numpy.array or scalar) – observation to plot against stochastic event set

  • +
  • filename (str) – filename to save figure

  • +
  • show (bool) – show interactive version of the figure

  • +
  • ax (axis object) – axis object with interface defined by matplotlib

  • +
  • catalog (csep.AbstractBaseCatalog) – used for annotating the figures

  • +
  • plot_args (dict) – additional plotting commands. TODO: Documentation

  • +
+
+
Returns:
+

matplotlib axes handle

+
+
Return type:
+

axis
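A sketch comparing a simulated statistic against a scalar observation (the numbers are illustrative):

import numpy
from csep.utils.plots import plot_histogram

simulated = numpy.random.poisson(10, size=1000)   # e.g., event counts from a stochastic event set
ax = plot_histogram(simulated, observation=12, percentile=95, show=True)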

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_likelihood_test.html b/reference/generated/csep.utils.plots.plot_likelihood_test.html new file mode 100644 index 00000000..b9cbc39b --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_likelihood_test.html @@ -0,0 +1,256 @@ + + + + + + + + + csep.utils.plots.plot_likelihood_test — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_likelihood_test

+
+
+csep.utils.plots.plot_likelihood_test(evaluation_result, axes=None, show=True, plot_args=None)[source]
+

Takes result from evaluation and generates a specific histogram plot to show the results of the statistical evaluation +for the L-test.

+
+
Parameters:
+
    +
  • evaluation_result – object-like var that implements the interface of the above EvaluationResult

  • +
  • axes (matplotlib.Axes) – axes object used to chain this plot

  • +
  • show (bool) – if true, call pyplot.show()

  • +
  • plot_args (dict) – optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below

  • +
+
+
+
+
Optional plotting arguments:
    +
  • figsize: (list/tuple) - default: [6.4, 4.8]

  • +
  • title: (str) - default: name of the first evaluation result type

  • +
  • title_fontsize: (float) Fontsize of the plot title - default: 10

  • +
  • xlabel: (str) - default: ‘X’

  • +
  • xlabel_fontsize: (float) - default: 10

  • +
  • xticks_fontsize: (float) - default: 10

  • +
  • ylabel_fontsize: (float) - default: 10

  • +
  • text_fontsize: (float) - default: 14

  • +
  • tight_layout: (bool) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True

  • +
  • percentile (float) Critical region to shade on histogram - default: 95

  • +
  • bins: (str) - Set binning type. see matplotlib.hist for more info - default: ‘auto’

  • +
  • xy: (list/tuple) - default: (0.55, 0.3)

  • +
+
+
+
+
Returns:
+

can be used to modify the figure

+
+
Return type:
+

ax (matplotlib.axes.Axes)

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_magnitude_histogram.html b/reference/generated/csep.utils.plots.plot_magnitude_histogram.html new file mode 100644 index 00000000..ac8c84f5 --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_magnitude_histogram.html @@ -0,0 +1,220 @@ + + + + + + + + + csep.utils.plots.plot_magnitude_histogram — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_magnitude_histogram

+
+
+csep.utils.plots.plot_magnitude_histogram(catalogs, comcat, show=True, plot_args=None)[source]
+

Generates a magnitude histogram from a catalog-based forecast

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_magnitude_test.html b/reference/generated/csep.utils.plots.plot_magnitude_test.html new file mode 100644 index 00000000..0b496f1f --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_magnitude_test.html @@ -0,0 +1,255 @@ + + + + + + + + + csep.utils.plots.plot_magnitude_test — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_magnitude_test

+
+
+csep.utils.plots.plot_magnitude_test(evaluation_result, axes=None, show=True, plot_args=None)[source]
+

Takes result from evaluation and generates a specific histogram plot to show the results of the statistical evaluation +for the M-test.

+
+
Parameters:
+
    +
  • evaluation_result – object-like var that implements the interface of the above EvaluationResult

  • +
  • axes (matplotlib.Axes) – axes object used to chain this plot

  • +
  • show (bool) – if true, call pyplot.show()

  • +
  • plot_args (dict) – optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below

  • +
+
+
+
+
Optional plotting arguments:
    +
  • figsize: (list/tuple) - default: [6.4, 4.8]

  • +
  • title: (str) - default: name of the first evaluation result type

  • +
  • title_fontsize: (float) Fontsize of the plot title - default: 10

  • +
  • xlabel: (str) - default: ‘X’

  • +
  • xlabel_fontsize: (float) - default: 10

  • +
  • xticks_fontsize: (float) - default: 10

  • +
  • ylabel_fontsize: (float) - default: 10

  • +
  • tight_layout: (bool) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True

  • +
  • percentile (float) Critical region to shade on histogram - default: 95

  • +
  • bins: (str) - Set binning type. see matplotlib.hist for more info - default: ‘auto’

  • +
  • xy: (list/tuple) - default: (0.55, 0.6)

  • +
+
+
+
+
Returns:
+

containing the new plot

+
+
Return type:
+

ax (matplotlib.Axes)

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_magnitude_versus_time.html b/reference/generated/csep.utils.plots.plot_magnitude_versus_time.html new file mode 100644 index 00000000..a2a2b20f --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_magnitude_versus_time.html @@ -0,0 +1,232 @@ + + + + + + + + + csep.utils.plots.plot_magnitude_versus_time — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_magnitude_versus_time

+
+
+csep.utils.plots.plot_magnitude_versus_time(catalog, filename=None, show=False, reset_times=False, plot_args=None, **kwargs)[source]
+

Plots magnitude versus linear time for earthquake data.

+

Catalog class must implement get_magnitudes() and get_datetimes() in order for this function to work correctly.

+
+
Parameters:
+

catalog (AbstractBaseCatalog) – data to visualize

+
+
Returns:
+

fig and axes handle

+
+
Return type:
+

(tuple)

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_number_test.html b/reference/generated/csep.utils.plots.plot_number_test.html new file mode 100644 index 00000000..9aa883b6 --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_number_test.html @@ -0,0 +1,256 @@ + + + + + + + + + csep.utils.plots.plot_number_test — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_number_test

+
+
+csep.utils.plots.plot_number_test(evaluation_result, axes=None, show=True, plot_args=None)[source]
+

Takes result from evaluation and generates a specific histogram plot to show the results of the statistical evaluation +for the n-test.

+
+
Parameters:
+
    +
  • evaluation_result – object-like var that implements the interface of the above EvaluationResult

  • +
  • axes (matplotlib.Axes) – axes object used to chain this plot

  • +
  • show (bool) – if true, call pyplot.show()

  • +
  • plot_args (dict) – optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below

  • +
+
+
+
+
Optional plotting arguments:
    +
  • figsize: (list/tuple) - default: [6.4, 4.8]

  • +
  • title: (str) - default: name of the first evaluation result type

  • +
  • title_fontsize: (float) Fontsize of the plot title - default: 10

  • +
  • xlabel: (str) - default: ‘X’

  • +
  • xlabel_fontsize: (float) - default: 10

  • +
  • xticks_fontsize: (float) - default: 10

  • +
  • ylabel_fontsize: (float) - default: 10

  • +
  • text_fontsize: (float) - default: 14

  • +
  • tight_layout: (bool) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True

  • +
  • percentile (float) Critical region to shade on histogram - default: 95

  • +
  • bins: (str) - Set binning type. see matplotlib.hist for more info - default: ‘auto’

  • +
  • xy: (list/tuple) - default: (0.55, 0.3)

  • +
+
+
+
+
Returns:
+

can be used to modify the figure

+
+
Return type:
+

ax (matplotlib.axes.Axes)
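A minimal sketch, assuming number_test_result was computed earlier (e.g., by a number test evaluation):

from csep.utils import plots

ax = plots.plot_number_test(number_test_result,
                            plot_args={'xlabel': 'Event count', 'percentile': 95})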

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_poisson_consistency_test.html b/reference/generated/csep.utils.plots.plot_poisson_consistency_test.html new file mode 100644 index 00000000..9c7e9005 --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_poisson_consistency_test.html @@ -0,0 +1,257 @@ + + + + + + + + + csep.utils.plots.plot_poisson_consistency_test — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_poisson_consistency_test

+
+
+csep.utils.plots.plot_poisson_consistency_test(eval_results, normalize=False, one_sided_lower=False, axes=None, plot_args=None, show=False)[source]
+

Plots results from CSEP1 tests following the CSEP1 convention.

+
+
Note: All of the evaluations should be from the same type of evaluation, otherwise the results will not be comparable on the same figure.

+
+
+
+
Parameters:
+
    +
  • results (list) – Contains the test results csep.core.evaluations.EvaluationResult (see note above)

  • +
  • normalize (bool) – select this if the forecast likelihood should be normalized by the observed likelihood. Useful for plotting simulation-based tests.

  • +
  • one_sided_lower (bool) – select this if the plot should be for a one sided test

  • +
  • plot_args (dict) – optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below

  • +
+
+
+
+
Optional plotting arguments:
    +
  • figsize: (list/tuple) - default: [6.4, 4.8]

  • +
  • title: (str) - default: name of the first evaluation result type

  • +
  • title_fontsize: (float) Fontsize of the plot title - default: 10

  • +
  • xlabel: (str) - default: ‘X’

  • +
  • xlabel_fontsize: (float) - default: 10

  • +
  • xticks_fontsize: (float) - default: 10

  • +
  • ylabel_fontsize: (float) - default: 10

  • +
  • color: (float/None) If None, sets it to red/green according to _get_marker_style() - default: ‘black’

  • +
  • linewidth: (float) - default: 1.5

  • +
  • capsize: (float) - default: 4

  • +
  • hbars: (bool) Flag to draw horizontal bars for each model - default: True

  • +
  • tight_layout: (bool) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True

  • +
+
+
+
+
Returns:
+

ax (matplotlib.pyplot.axes object)
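A minimal sketch, assuming spatial_test_result holds a Poisson S-test result computed earlier:

from csep.utils import plots

ax = plots.plot_poisson_consistency_test([spatial_test_result],
                                         one_sided_lower=True,
                                         plot_args={'title': 'Poisson spatial test'})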

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_spatial_dataset.html b/reference/generated/csep.utils.plots.plot_spatial_dataset.html new file mode 100644 index 00000000..c20f3c37 --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_spatial_dataset.html @@ -0,0 +1,373 @@ + + + + + + + + + csep.utils.plots.plot_spatial_dataset — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_spatial_dataset

+
+
+csep.utils.plots.plot_spatial_dataset(gridded, region, ax=None, show=False, extent=None, set_global=False, plot_args=None)[source]
+

Plot spatial dataset such as data from a gridded forecast

+
+
Parameters:
+
    +
  • gridded (2D numpy.array) – Values according to region,

  • +
  • region (CartesianGrid2D) – Region in which gridded values are contained

  • +
  • show (bool) – Flag if the figure is displayed

  • +
  • extent (list) – default forecast.region.get_bbox()

  • +
  • set_global (bool) – Display the complete globe as basemap

  • +
  • plot_args (dict) –

    matplotlib and cartopy plot arguments. Dict keys are str, whose values can be:

    +
      +
    • +
      figsize:
      +

      tuple/list - default [6.4, 4.8]

      +
      +
      +
    • +
    • +
      title:
      +

      str - default None

      +
      +
      +
    • +
    • +
      title_size:
      +

      int - default 10

      +
      +
      +
    • +
    • +
      filename:
      +

      str - default None

      +
      +
      +
    • +
    • +
      projection:
      +

      cartopy.crs.Projection - default cartopy.crs.PlateCarree

      +
      +
      +
    • +
    • +
      grid:
      +

      bool - default True

      +
      +
      +
    • +
    • +
      grid_labels:
      +

      bool - default True

      +
      +
      +
    • +
    • +
      grid_fontsize:
      +

      float - default 10.0

      +
      +
      +
    • +
    • +
      basemap:
      +

      str. Possible values are: stock_img, stamen_terrain, stamen_terrain-background, google-satellite, ESRI_terrain, ESRI_imagery, ESRI_relief, ESRI_topo, or a webservice link. Default is None

      +
      +
      +
    • +
    • +
      coastline:
      +

      bool - Flag to plot coastline. default True,

      +
      +
      +
    • +
    • +
      borders:
      +

      bool - Flag to plot country borders. default False,

      +
      +
      +
    • +
    • +
      region_border:
      +

      bool - Flag to plot the dataset region border. default True,

      +
      +
      +
    • +
    • +
      tile_scaling:
      +

      str/int. Zoom level (1-12) of the basemap tiles. If ‘auto’, is automatically derived from extent

      +
      +
      +
    • +
    • +
      linewidth:
      +

      float - Line width of borders and coast lines. default 1.5,

      +
      +
      +
    • +
    • +
      linecolor:
      +

      str - Color of borders and coast lines. default ‘black’,

      +
      +
      +
    • +
    • +
      cmap:
      +

      str/pyplot.colors.Colormap - default ‘viridis’

      +
      +
      +
    • +
    • +
      clim:
      +

      list - Range of the colorbar. default None

      +
      +
      +
    • +
    • +
      clabel:
      +

      str - Label of the colorbar. default None

      +
      +
      +
    • +
    • +
      clabel_fontsize:
      +

      float - default None

      +
      +
      +
    • +
    • +
      cticks_fontsize:
      +

      float - default None

      +
      +
      +
    • +
    • +
      alpha:
      +

      float - default 1

      +
      +
      +
    • +
    • +
      alpha_exp:
      +

      float - Exponent for the alpha func (recommended between 0.4 and 1). default 0

      +
      +
      +
    • +
    +

  • +
+
+
Returns:
+

matplotlib.pyplot.ax object
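A minimal sketch, assuming gridded_rates (a 2D numpy array aligned with the region) and region were prepared earlier:

from csep.utils.plots import plot_spatial_dataset

ax = plot_spatial_dataset(gridded_rates, region,
                          plot_args={'clabel': 'Expected rate', 'cmap': 'viridis'})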

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.plots.plot_spatial_test.html b/reference/generated/csep.utils.plots.plot_spatial_test.html new file mode 100644 index 00000000..871309e1 --- /dev/null +++ b/reference/generated/csep.utils.plots.plot_spatial_test.html @@ -0,0 +1,255 @@ + + + + + + + + + csep.utils.plots.plot_spatial_test — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.plots.plot_spatial_test

+
+
+csep.utils.plots.plot_spatial_test(evaluation_result, axes=None, plot_args=None, show=True)[source]
+

Plot spatial test result from catalog based forecast

+
+
Parameters:
+
    +
  • evaluation_result – object-like var that implements the interface of the above EvaluationResult

  • +
  • axes (matplotlib.Axes) – axes object used to chain this plot

  • +
  • show (bool) – if true, call pyplot.show()

  • +
  • plot_args (dict) – optional argument containing a dictionary of plotting arguments, with keys as strings and items as described below

  • +
+
+
+
+
Optional plotting arguments:
    +
  • figsize: (list/tuple) - default: [6.4, 4.8]

  • +
  • title: (str) - default: name of the first evaluation result type

  • +
  • title_fontsize: (float) Fontsize of the plot title - default: 10

  • +
  • xlabel: (str) - default: ‘X’

  • +
  • xlabel_fontsize: (float) - default: 10

  • +
  • xticks_fontsize: (float) - default: 10

  • +
  • ylabel_fontsize: (float) - default: 10

  • +
  • text_fontsize: (float) - default: 14

  • +
  • tight_layout: (bool) Set matplotlib.figure.tight_layout to remove excess blank space in the plot - default: True

  • +
  • percentile (float) Critical region to shade on histogram - default: 95

  • +
  • bins: (str) - Set binning type. see matplotlib.hist for more info - default: ‘auto’

  • +
  • xy: (list/tuple) - default: (0.2, 0.6)

  • +
+
+
+
+
Returns:
+

can be used to modify the figure

+
+
Return type:
+

ax (matplotlib.axes.Axes)

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.binned_ecdf.html b/reference/generated/csep.utils.stats.binned_ecdf.html new file mode 100644 index 00000000..1fb2ce75 --- /dev/null +++ b/reference/generated/csep.utils.stats.binned_ecdf.html @@ -0,0 +1,224 @@ + + + + + + + + + csep.utils.stats.binned_ecdf — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.binned_ecdf

+
+
+csep.utils.stats.binned_ecdf(x, vals)[source]
+

returns the statement P(X ≤ x) for each val in vals. vals must be monotonically increasing and unique.

+
+
Returns:
+

sorted vals, and ecdf computed at vals

+
+
Return type:
+

tuple

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.cumulative_square_diff.html b/reference/generated/csep.utils.stats.cumulative_square_diff.html new file mode 100644 index 00000000..5067c5fc --- /dev/null +++ b/reference/generated/csep.utils.stats.cumulative_square_diff.html @@ -0,0 +1,233 @@ + + + + + + + + + csep.utils.stats.cumulative_square_diff — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.cumulative_square_diff

+
+
+csep.utils.stats.cumulative_square_diff(cdf1, cdf2)[source]
+

given two cumulative distribution functions, compute the cumulative sq. diff of the set of distances.

+
+

Note

+

this function does not check that the ecdfs are ordered or balanced. beware!

+
+
+
Parameters:
+
    +
  • cdf1 – ndarray

  • +
  • cdf2 – ndarray

  • +
+
+
Returns:
+

scalar distance metric for the histograms

+
+
Return type:
+

cum_dist

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.ecdf.html b/reference/generated/csep.utils.stats.ecdf.html new file mode 100644 index 00000000..e5b0199e --- /dev/null +++ b/reference/generated/csep.utils.stats.ecdf.html @@ -0,0 +1,224 @@ + + + + + + + + + csep.utils.stats.ecdf — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.ecdf

+
+
+csep.utils.stats.ecdf(x)[source]
+

Compute the ecdf of vector x. The ecdf does not contain zero and should equal 1 at the last value, so that F(x) == P(X ≤ x) is satisfied.

+
+
Parameters:
+

x (numpy.array) – vector of values

+
+
Returns:
+

xs (numpy.array), ys (numpy.array)
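A small sketch of the returned pair:

import numpy
from csep.utils.stats import ecdf

xs, ys = ecdf(numpy.array([3.0, 1.0, 2.0]))
print(xs)   # sorted values: [1. 2. 3.]
print(ys)   # cumulative probabilities ending at 1.0, e.g. [0.333 0.667 1.0]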

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.get_quantiles.html b/reference/generated/csep.utils.stats.get_quantiles.html new file mode 100644 index 00000000..a8a7b7d6 --- /dev/null +++ b/reference/generated/csep.utils.stats.get_quantiles.html @@ -0,0 +1,215 @@ + + + + + + + + + csep.utils.stats.get_quantiles — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.get_quantiles

+
+
+csep.utils.stats.get_quantiles(sim_counts, obs_count)[source]
+

Computes delta1 and delta2 quantile scores from empirical distribution and observation
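A sketch with simulated counts (the values are illustrative):

import numpy
from csep.utils.stats import get_quantiles

sim_counts = numpy.random.poisson(10, size=1000)
delta_1, delta_2 = get_quantiles(sim_counts, obs_count=12)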

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.greater_equal_ecdf.html b/reference/generated/csep.utils.stats.greater_equal_ecdf.html new file mode 100644 index 00000000..dd00c63b --- /dev/null +++ b/reference/generated/csep.utils.stats.greater_equal_ecdf.html @@ -0,0 +1,230 @@ + + + + + + + + + csep.utils.stats.greater_equal_ecdf — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.greater_equal_ecdf

+
+
+csep.utils.stats.greater_equal_ecdf(x, val, cdf=())[source]
+

Given val return P(x ≥ val).

+
+
Parameters:
+
    +
  • x (numpy.array) – set of values

  • +
  • val (float) – value

  • +
  • cdf (tuple) – ecdf of x, should be tuple (sorted(x), ecdf(x))

  • +
+
+
Returns:
+

probability that x ≥ val

+
+
Return type:
+

(float)
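A small sketch (exact values depend on the sample):

import numpy
from csep.utils.stats import greater_equal_ecdf

x = numpy.array([1.0, 2.0, 3.0, 4.0])
print(greater_equal_ecdf(x, 2.5))   # expected: 0.5, since half the sample is >= 2.5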

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.less_equal_ecdf.html b/reference/generated/csep.utils.stats.less_equal_ecdf.html new file mode 100644 index 00000000..9f78f847 --- /dev/null +++ b/reference/generated/csep.utils.stats.less_equal_ecdf.html @@ -0,0 +1,229 @@ + + + + + + + + + csep.utils.stats.less_equal_ecdf — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.less_equal_ecdf

+
+
+csep.utils.stats.less_equal_ecdf(x, val, cdf=())[source]
+

Given val return P(x ≤ val).

+
+
Parameters:
+
    +
  • x (numpy.array) – set of values

  • +
  • val (float) – value

  • +
+
+
Returns:
+

probability that x ≤ val

+
+
Return type:
+

(float)

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.max_or_none.html b/reference/generated/csep.utils.stats.max_or_none.html new file mode 100644 index 00000000..d8292f3f --- /dev/null +++ b/reference/generated/csep.utils.stats.max_or_none.html @@ -0,0 +1,215 @@ + + + + + + + + + csep.utils.stats.max_or_none — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.max_or_none

+
+
+csep.utils.stats.max_or_none(x)[source]
+

Given an array x, returns the max value. If x = [], returns None.

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.min_or_none.html b/reference/generated/csep.utils.stats.min_or_none.html new file mode 100644 index 00000000..46e1fc08 --- /dev/null +++ b/reference/generated/csep.utils.stats.min_or_none.html @@ -0,0 +1,215 @@ + + + + + + + + + csep.utils.stats.min_or_none — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.min_or_none

+
+
+csep.utils.stats.min_or_none(x)[source]
+

Given an array x, returns the min value. If x = [], returns None.

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.poisson_inverse_cdf.html b/reference/generated/csep.utils.stats.poisson_inverse_cdf.html new file mode 100644 index 00000000..1276c574 --- /dev/null +++ b/reference/generated/csep.utils.stats.poisson_inverse_cdf.html @@ -0,0 +1,227 @@ + + + + + + + + + csep.utils.stats.poisson_inverse_cdf — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.poisson_inverse_cdf

+
+
+csep.utils.stats.poisson_inverse_cdf(random_matrix, lam)[source]
+

Wrapper around scipy inverse poisson cdf function

+
+
Parameters:
+
    +
  • random_matrix – Matrix of dimensions equal to forecast, containing random numbers between 0 and 1.

  • +
  • lam – vector of parameters for poisson distribution

  • +
+
+
Returns:
+

sample from the poisson distribution
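A sketch drawing one Poisson sample per forecast bin (the rates are illustrative):

import numpy
from csep.utils.stats import poisson_inverse_cdf

rates = numpy.array([0.5, 1.0, 2.0])            # expected counts per bin
u = numpy.random.uniform(size=rates.shape)      # random numbers between 0 and 1
samples = poisson_inverse_cdf(u, rates)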

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.poisson_joint_log_likelihood_ndarray.html b/reference/generated/csep.utils.stats.poisson_joint_log_likelihood_ndarray.html new file mode 100644 index 00000000..b56438cf --- /dev/null +++ b/reference/generated/csep.utils.stats.poisson_joint_log_likelihood_ndarray.html @@ -0,0 +1,228 @@ + + + + + + + + + csep.utils.stats.poisson_joint_log_likelihood_ndarray — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.poisson_joint_log_likelihood_ndarray

+
+
+csep.utils.stats.poisson_joint_log_likelihood_ndarray(target_event_log_rates, target_observations, n_fore)[source]
+

Efficient calculation of joint log-likelihood of grid-based forecast.

+

Note: log(w!) = 0

+
+
Parameters:
+
    +
  • target_event_log_rates – natural log of bin rates where target events occurred

  • +
  • target_observations – counts of target events

  • +
  • n_fore – expected number from the forecasts

  • +
+
+
Returns:
+

joint_log_likelihood

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.poisson_log_likelihood.html b/reference/generated/csep.utils.stats.poisson_log_likelihood.html new file mode 100644 index 00000000..2a62dba1 --- /dev/null +++ b/reference/generated/csep.utils.stats.poisson_log_likelihood.html @@ -0,0 +1,226 @@ + + + + + + + + + csep.utils.stats.poisson_log_likelihood — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.poisson_log_likelihood

+
+
+csep.utils.stats.poisson_log_likelihood(observation, forecast)[source]
+

Wrapper around scipy to compute the Poisson log-likelihood

+
+
Parameters:
+
    +
  • observation – Observed (gridded) seismicity

  • +
  • forecast – Forecast of a model (gridded)

  • +
+
+
Returns:
+

Log-likelihood values between binned observations and binned forecasts

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.sup_dist.html b/reference/generated/csep.utils.stats.sup_dist.html new file mode 100644 index 00000000..df19d8b4 --- /dev/null +++ b/reference/generated/csep.utils.stats.sup_dist.html @@ -0,0 +1,219 @@ + + + + + + + + + csep.utils.stats.sup_dist — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.sup_dist

+
+
+csep.utils.stats.sup_dist(cdf1, cdf2)[source]
+

given two cumulative distribution functions, compute the supremum of the set of absolute distances.

+
+

Note

+

this function does not check that the ecdfs are ordered or balanced. beware!

+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.stats.sup_dist_na.html b/reference/generated/csep.utils.stats.sup_dist_na.html new file mode 100644 index 00000000..266e772f --- /dev/null +++ b/reference/generated/csep.utils.stats.sup_dist_na.html @@ -0,0 +1,230 @@ + + + + + + + + + csep.utils.stats.sup_dist_na — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.stats.sup_dist_na

+
+
+csep.utils.stats.sup_dist_na(data1, data2)[source]
+

Computes the KS statistic for two ecdfs that are not necessarily aligned on the same values. Performs this operation by merging the two datasets together. This is taken from the two-sample KS test in the scipy codebase.

+
+
Parameters:
+
    +
  • data1 – (numpy array like)

  • +
  • data2 – (numpy array like)

  • +
+
+
Returns:
+

sup dist from the two cdf functions

+
+
Return type:
+

ks

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.create_utc_datetime.html b/reference/generated/csep.utils.time_utils.create_utc_datetime.html new file mode 100644 index 00000000..c06689a1 --- /dev/null +++ b/reference/generated/csep.utils.time_utils.create_utc_datetime.html @@ -0,0 +1,213 @@ + + + + + + + + + csep.utils.time_utils.create_utc_datetime — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.create_utc_datetime

+
+
+csep.utils.time_utils.create_utc_datetime(datetime)[source]
+

Creates a timezone-aware UTC datetime object from a naive object.

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.datetime_to_utc_epoch.html b/reference/generated/csep.utils.time_utils.datetime_to_utc_epoch.html new file mode 100644 index 00000000..bf49cd8d --- /dev/null +++ b/reference/generated/csep.utils.time_utils.datetime_to_utc_epoch.html @@ -0,0 +1,218 @@ + + + + + + + + + csep.utils.time_utils.datetime_to_utc_epoch — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.datetime_to_utc_epoch

+
+
+csep.utils.time_utils.datetime_to_utc_epoch(dt)[source]
+

Converts python datetime.datetime into epoch_time in milliseconds.

+
+
Parameters:
+

dt (datetime.datetime) – python datetime object, should be naive.
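A small sketch using a naive datetime, as the parameter description requires:

import datetime
from csep.utils.time_utils import datetime_to_utc_epoch

dt = datetime.datetime(2019, 7, 6, 3, 19, 53)   # naive, interpreted as UTC
print(datetime_to_utc_epoch(dt))                # epoch time in milliseconds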

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.days_to_millis.html b/reference/generated/csep.utils.time_utils.days_to_millis.html new file mode 100644 index 00000000..da759b55 --- /dev/null +++ b/reference/generated/csep.utils.time_utils.days_to_millis.html @@ -0,0 +1,213 @@ + + + + + + + + + csep.utils.time_utils.days_to_millis — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.days_to_millis

+
+
+csep.utils.time_utils.days_to_millis(days)[source]
+

Converts days to millis

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.decimal_year.html b/reference/generated/csep.utils.time_utils.decimal_year.html new file mode 100644 index 00000000..cbaca5a0 --- /dev/null +++ b/reference/generated/csep.utils.time_utils.decimal_year.html @@ -0,0 +1,220 @@ + + + + + + + + + csep.utils.time_utils.decimal_year — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.decimal_year

+
+
+csep.utils.time_utils.decimal_year(test_date)[source]
+

Convert given test date to the decimal year representation.

+

Repurposed from CSEP1. Author: Masha Liukis

+
+
+
Args:

test_date (datetime.datetime)

+
+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.epoch_time_to_utc_datetime.html b/reference/generated/csep.utils.time_utils.epoch_time_to_utc_datetime.html new file mode 100644 index 00000000..50bd6e0b --- /dev/null +++ b/reference/generated/csep.utils.time_utils.epoch_time_to_utc_datetime.html @@ -0,0 +1,220 @@ + + + + + + + + + csep.utils.time_utils.epoch_time_to_utc_datetime — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.epoch_time_to_utc_datetime

+
+
+csep.utils.time_utils.epoch_time_to_utc_datetime(epoch_time_milli)[source]
+

Accepts an epoch_time in milliseconds in the UTC timezone and returns a python datetime object.

+

See https://docs.python.org/3/library/datetime.html#datetime.datetime.fromtimestamp for information +about how timezones are handled with this function.

+
+
Parameters:
+

epoch_time (float) – epoch_time in UTC timezone in milliseconds

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.millis_to_days.html b/reference/generated/csep.utils.time_utils.millis_to_days.html new file mode 100644 index 00000000..d2a5f5f2 --- /dev/null +++ b/reference/generated/csep.utils.time_utils.millis_to_days.html @@ -0,0 +1,213 @@ + + + + + + + + + csep.utils.time_utils.millis_to_days — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.millis_to_days

+
+
+csep.utils.time_utils.millis_to_days(millis)[source]
+

Converts time in millis to days

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.strptime_to_utc_datetime.html b/reference/generated/csep.utils.time_utils.strptime_to_utc_datetime.html new file mode 100644 index 00000000..247434a6 --- /dev/null +++ b/reference/generated/csep.utils.time_utils.strptime_to_utc_datetime.html @@ -0,0 +1,231 @@ + + + + + + + + + csep.utils.time_utils.strptime_to_utc_datetime — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.strptime_to_utc_datetime

+
+
+csep.utils.time_utils.strptime_to_utc_datetime(time_string, format='%Y-%m-%d %H:%M:%S.%f')[source]
+

Converts time_string with the given format into a timezone-aware datetime object in the UTC timezone.

+
+

Note

+

If the time_string is not in UTC time, it will be converted into UTC timezone.

+
+
+
Parameters:
+
    +
  • time_string (str) – string representation of datetime

  • +
  • format (str) – format of time_string

  • +
+
+
Returns:
+

timezone aware (utc) object from time_string

+
+
Return type:
+

datetime.datetime
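A small sketch using the default format string:

from csep.utils.time_utils import strptime_to_utc_datetime

dt = strptime_to_utc_datetime('2019-07-06 03:19:53.0')
print(dt.tzinfo)   # timezone-aware, UTC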

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.strptime_to_utc_epoch.html b/reference/generated/csep.utils.time_utils.strptime_to_utc_epoch.html new file mode 100644 index 00000000..62de94eb --- /dev/null +++ b/reference/generated/csep.utils.time_utils.strptime_to_utc_epoch.html @@ -0,0 +1,213 @@ + + + + + + + + + csep.utils.time_utils.strptime_to_utc_epoch — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.strptime_to_utc_epoch

+
+
+csep.utils.time_utils.strptime_to_utc_epoch(time_string, format='%Y-%m-%d %H:%M:%S.%f')[source]
+

Returns epoch time from formatted time string

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.timedelta_from_years.html b/reference/generated/csep.utils.time_utils.timedelta_from_years.html new file mode 100644 index 00000000..344b5ee1 --- /dev/null +++ b/reference/generated/csep.utils.time_utils.timedelta_from_years.html @@ -0,0 +1,218 @@ + + + + + + + + + csep.utils.time_utils.timedelta_from_years — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.timedelta_from_years

+
+
+csep.utils.time_utils.timedelta_from_years(time_in_years)[source]
+

Returns python datetime.timedelta object based on the astronomical year in seconds.

+
+
Parameters:
+

time_in_years – positive fraction of years 0 <= time_in_years

+
+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.utc_now_datetime.html b/reference/generated/csep.utils.time_utils.utc_now_datetime.html new file mode 100644 index 00000000..fe1d3bdf --- /dev/null +++ b/reference/generated/csep.utils.time_utils.utc_now_datetime.html @@ -0,0 +1,213 @@ + + + + + + + + + csep.utils.time_utils.utc_now_datetime — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.utc_now_datetime

+
+
+csep.utils.time_utils.utc_now_datetime()[source]
+

Returns current datetime

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/generated/csep.utils.time_utils.utc_now_epoch.html b/reference/generated/csep.utils.time_utils.utc_now_epoch.html new file mode 100644 index 00000000..4f499f4b --- /dev/null +++ b/reference/generated/csep.utils.time_utils.utc_now_epoch.html @@ -0,0 +1,213 @@ + + + + + + + + + csep.utils.time_utils.utc_now_epoch — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

csep.utils.time_utils.utc_now_epoch

+
+
+csep.utils.time_utils.utc_now_epoch()[source]
+

Returns current epoch time

+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/reference/glossary.html b/reference/glossary.html new file mode 100644 index 00000000..53967020 --- /dev/null +++ b/reference/glossary.html @@ -0,0 +1,223 @@ + + + + + + + + + Terms and Definitions — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Terms and Definitions

This page contains terms and their definitions (and possible mathematical definitions) that are commonly used throughout the documentation and CSEP literature.

Earthquake catalog

List of earthquakes (either tectonic or non-tectonic) defined through their location in space, origin time of the event, and their magnitude.

Earthquake Forecast

A probabilistic statement about the occurrence of seismicity that can include information about the magnitude and spatial location. CSEP supports earthquake forecasts expressed as the expected rate of seismicity in disjoint space and magnitude bins and as families of synthetic earthquake catalogs.

Stochastic Event Set

Collection of synthetic earthquakes (events) produced by an earthquake forecast model. A stochastic event set consists of N events that provide a continuous representation of seismicity and can sample the uncertainty present in the forecasting model.

Time-dependent Forecast

The forecast changes over time using new information not available at the time the forecast was issued. For example, epidemic-type aftershock sequence (ETAS) models can be updated with newly observed seismicity to produce new forecasts consistent with the model.

Time-independent Forecast

The forecast does not change with time. Time-independent forecasts are generally used for long-term forecasts needed for probabilistic seismic hazard analysis.
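To make the forecast-related entries above concrete, a hedged sketch of iterating over the stochastic event set of a catalog-based forecast; the bundled file name datasets.ucerf3_ascii_format_landers_fname and the iterability of the loaded forecast are assumptions based on the package's tutorial material, not part of this glossary:

>>> import csep
>>> from csep.utils import datasets
>>> # Assumed example data set; substitute any catalog-based forecast file.
>>> forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname)
>>> for catalog in forecast:  # each catalog is one synthetic member of the event set
...     n_events = catalog.event_count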

\ No newline at end of file diff --git a/reference/publications.html b/reference/publications.html new file mode 100644 index 00000000..3abc70d0 --- /dev/null +++ b/reference/publications.html @@ -0,0 +1,189 @@
Referenced Publications

Helmstetter, A., Y. Y. Kagan, and D. D. Jackson (2006). Comparison of short-term and time-independent earthquake forecast models for southern California, Bulletin of the Seismological Society of America 96, 90-106.

Rhoades, D. A., D. Schorlemmer, M. C. Gerstenberger, A. Christophersen, J. D. Zechar, and M. Imoto (2011). Efficient testing of earthquake forecasting models, Acta Geophys 59, 728-747.

Savran, W., M. J. Werner, W. Marzocchi, D. Rhoades, D. D. Jackson, K. R. Milner, E. H. Field, and A. J. Michael (2020). Pseudoprospective evaluation of UCERF3-ETAS forecasts during the 2019 Ridgecrest Sequence, Bulletin of the Seismological Society of America.

Schorlemmer, D., M. Gerstenberger, S. Wiemer, D. D. Jackson, and D. A. Rhoades (2007). Earthquake likelihood model testing, Seismological Research Letters 78, 17-29.

Werner, M. J., A. Helmstetter, D. D. Jackson, and Y. Y. Kagan (2011). High-Resolution Long-Term and Short-Term Earthquake Forecasts for California, Bulletin of the Seismological Society of America 101, 1630-1648.

Zechar, J. D., M. C. Gerstenberger, and D. A. Rhoades (2010). Likelihood-Based Tests for Evaluating Space-Rate-Magnitude Earthquake Forecasts, Bulletin of the Seismological Society of America 100, 1184-1195.
\ No newline at end of file diff --git a/reference/roadmap.html b/reference/roadmap.html new file mode 100644 index 00000000..3b6bc3dc --- /dev/null +++ b/reference/roadmap.html @@ -0,0 +1,190 @@
Development Roadmap

This page contains expected changes for new releases of pyCSEP. Last updated 3 November 2021.

v0.6.0

  1. Include receiver operating characteristic (ROC) curve
  2. Kagan I1 score
  3. Add function to plot spatial log-likelihood scores
  4. Add documentation section to explain maths of CSEP tests
\ No newline at end of file diff --git a/search.html b/search.html new file mode 100644 index 00000000..e8a252d3 --- /dev/null +++ b/search.html @@ -0,0 +1,187 @@

Search — pyCSEP v0.6.3 documentation
+ + + + + + + + + \ No newline at end of file diff --git a/searchindex.js b/searchindex.js new file mode 100644 index 00000000..6cf9f4a3 --- /dev/null +++ b/searchindex.js @@ -0,0 +1 @@ +Search.setIndex({"alltitles": {"1. Code changes": [[10, "code-changes"]], "2. Creating source distribution": [[10, "creating-source-distribution"]], "3. Create release on Github": [[10, "create-release-on-github"]], "API Reference": [[9, null]], "About": [[8, "about"]], "Accessing Event Information": [[0, "accessing-event-information"]], "Available plots": [[3, "available-plots"]], "Basic types": [[9, "basic-types"]], "Binning Events": [[0, "binning-events"]], "Building mock classes": [[1, "building-mock-classes"]], "CL-test": [[7, "cl-test"]], "Calculate Kagan\u2019s I_1 score": [[152, "calculate-kagan-s-i-1-score"]], "Calculation Utilities": [[9, "calculation-utilities"]], "Cartesian grid": [[4, "cartesian-grid"]], "Catalog as Pandas dataframes": [[0, "catalog-as-pandas-dataframes"]], "Catalog operations": [[9, "catalog-operations"]], "Catalog-based Forecast Evaluation": [[151, null]], "Catalog-based forecast": [[5, "catalog-based-forecast"]], "Catalog-based forecast evaluations": [[1, "catalog-based-forecast-evaluations"]], "Catalog-based forecast tests": [[7, "catalog-based-forecast-tests"]], "Catalog-based forecasts": [[2, "catalog-based-forecasts"]], "Catalogs": [[0, null], [5, "catalogs"], [9, "catalogs"]], "Catalogs operations": [[150, null]], "Comcat Access": [[9, "comcat-access"]], "Comparative tests": [[1, "comparative-tests"]], "Compute Poisson spatial test": [[152, "compute-poisson-spatial-test"]], "Compute Poisson spatial test and Number test": [[156, "compute-poisson-spatial-test-and-number-test"]], "Compute spatial event counts": [[157, "compute-spatial-event-counts"]], "Consistency tests": [[1, "consistency-tests"], [1, "module-csep.core.catalog_evaluations"], [7, "consistency-tests"]], "Contacting Us": [[8, "contacting-us"]], "Contributing": [[8, "contributing"]], "Core Concepts for Beginners": [[5, null]], "Creating a new release of pyCSEP": [[10, "creating-a-new-release-of-pycsep"]], "Creating spatial regions": [[4, "creating-spatial-regions"], [4, "id1"]], "Custom file format": [[2, "custom-file-format"]], "Default file format": [[2, "default-file-format"], [2, "id3"]], "Define Multi-resolution Gridded Region": [[156, "define-multi-resolution-gridded-region"]], "Define Single-resolution Gridded Region": [[156, "define-single-resolution-gridded-region"]], "Define forecast properties": [[152, "define-forecast-properties"], [155, "define-forecast-properties"]], "Define spatial and magnitude regions": [[151, "define-spatial-and-magnitude-regions"], [157, "define-spatial-and-magnitude-regions"]], "Define start and end times of forecast": [[151, "define-start-and-end-times-of-forecast"]], "Developer Notes": [[10, null]], "Developers Installation": [[6, "developers-installation"]], "Development Roadmap": [[149, null]], "Earthquake Forecast": [[147, "earthquake-forecast"]], "Earthquake catalog": [[147, "earthquake-catalog"]], "Evaluations": [[1, null], [5, "evaluations"], [9, "evaluations"]], "Example 1: Spatial dataset plot arguments": [[154, "example-1-spatial-dataset-plot-arguments"]], "Example 2: Plot a global forecast and a selected magnitude bin range": [[154, "example-2-plot-a-global-forecast-and-a-selected-magnitude-bin-range"]], "Example 3: Plot a catalog": [[154, "example-3-plot-a-catalog"]], "Example 4: Plot multiple evaluation results": [[154, 
"example-4-plot-multiple-evaluation-results"]], "Filter evaluation catalog in space": [[152, "filter-evaluation-catalog-in-space"]], "Filter to desired spatial region": [[150, "filter-to-desired-spatial-region"]], "Filter to desired time interval": [[150, "filter-to-desired-time-interval"]], "Filter to magnitude range": [[150, "filter-to-magnitude-range"]], "Filtering events": [[0, "filtering-events"]], "Filtering events by attribute": [[0, "filtering-events-by-attribute"]], "Filtering events in space": [[0, "filtering-events-in-space"]], "Forecast comparison tests": [[7, "forecast-comparison-tests"]], "Forecasts": [[2, null], [5, "forecasts"], [9, "forecasts"]], "Grid loading from already created quadkeys": [[4, "grid-loading-from-already-created-quadkeys"]], "Grid-based Forecast Evaluation": [[152, null]], "Grid-based Forecast Tests": [[7, "grid-based-forecast-tests"]], "Grid-based forecast": [[5, "grid-based-forecast"]], "Gridded forecasts": [[2, "gridded-forecasts"]], "Gridded-forecast evaluations": [[1, "gridded-forecast-evaluations"]], "Including custom event metadata": [[0, "including-custom-event-metadata"]], "Installing from Source": [[6, "installing-from-source"]], "Installing pyCSEP": [[6, null]], "Introduction": [[0, "introduction"], [3, "introduction"]], "Likelihood-test (L-test)": [[7, "likelihood-test-l-test"]], "List of Contributors": [[8, "list-of-contributors"]], "Load Training Catalog for Multi-resolution grid": [[156, "load-training-catalog-for-multi-resolution-grid"]], "Load catalog": [[150, "load-catalog"]], "Load catalog forecast": [[151, "load-catalog-forecast"]], "Load catalogs from ComCat": [[0, "load-catalogs-from-comcat"]], "Load catalogs from files": [[0, "load-catalogs-from-files"]], "Load data forecast": [[157, "load-data-forecast"]], "Load evaluation catalog": [[152, "load-evaluation-catalog"], [156, "load-evaluation-catalog"]], "Load forecast": [[152, "load-forecast"], [155, "load-forecast"]], "Load forecast of multi-resolution grid": [[156, "load-forecast-of-multi-resolution-grid"]], "Load forecast of single-resolution grid": [[156, "load-forecast-of-single-resolution-grid"]], "Load required libraries": [[150, "load-required-libraries"], [151, "load-required-libraries"], [152, "load-required-libraries"], [155, "load-required-libraries"], [156, "load-required-libraries"], [157, "load-required-libraries"]], "Loading catalogs": [[0, "loading-catalogs"]], "Loading catalogs and forecasts": [[9, "loading-catalogs-and-forecasts"]], "M-test": [[7, "m-test"]], "Magnitude Test": [[7, "magnitude-test"]], "Multi-resolution grid based on earthquake catalog": [[4, "multi-resolution-grid-based-on-earthquake-catalog"]], "N-test": [[7, "n-test"]], "Number Test": [[7, "number-test"]], "Obtain evaluation catalog from ComCat": [[151, "obtain-evaluation-catalog-from-comcat"]], "Perform number test": [[151, "perform-number-test"]], "Plot ROC Curves": [[152, "plot-roc-curves"]], "Plot arguments": [[3, "plot-arguments"]], "Plot customizations": [[154, null]], "Plot expected event counts": [[157, "plot-expected-event-counts"]], "Plot forecast": [[155, "plot-forecast"]], "Plot number test result": [[151, "plot-number-test-result"]], "Plot spatial test results": [[152, "plot-spatial-test-results"], [156, "plot-spatial-test-results"]], "Plots": [[3, null]], "Plotting": [[9, "module-csep.utils.plots"]], "Plotting gridded forecast": [[155, null]], "Preparing evaluation catalog": [[1, "preparing-evaluation-catalog"]], "Project Goals": [[8, "project-goals"]], "Pseudo-likelihood test": 
[[7, "pseudo-likelihood-test"]], "Publication reference": [[1, "publication-reference"]], "Publication references": [[1, "publication-references"]], "PyCSEP catalog basics": [[0, "pycsep-catalog-basics"]], "Quadtree Grid-based Forecast Evaluation": [[156, null]], "Quadtree grid": [[4, "quadtree-grid"]], "Quick sanity check": [[157, "quick-sanity-check"]], "Referenced Publications": [[148, null]], "References": [[7, "references"]], "Region Utilities": [[4, "region-utilities"]], "Regions": [[4, null], [9, "regions"]], "S-test": [[7, "s-test"]], "Single-resolution grid": [[4, "single-resolution-grid"]], "Spatial test": [[7, "spatial-test"]], "Statistics Utilities": [[9, "statistics-utilities"]], "Stochastic Event Set": [[147, "stochastic-event-set"]], "Store evaluation results": [[152, "store-evaluation-results"]], "Table of Contents": [[0, "table-of-contents"], [1, "table-of-contents"], [2, "table-of-contents"], [3, "table-of-contents"], [4, "table-of-contents"], [147, "table-of-contents"]], "Terms and Definitions": [[147, null]], "Testing Regions": [[4, "testing-regions"], [4, "id2"]], "Theory of CSEP Tests": [[7, null]], "Time Utilities": [[9, "time-utilities"]], "Time-dependent Forecast": [[147, "time-dependent-forecast"]], "Time-dependent magnitude of completeness": [[0, "time-dependent-magnitude-of-completeness"]], "Time-independent Forecast": [[147, "time-independent-forecast"]], "Tutorials": [[153, null]], "Using Conda": [[6, "using-conda"], [6, "id1"]], "Using Pip": [[6, "using-pip"]], "Using Pip / Virtualenv": [[6, "using-pip-virtualenv"]], "Working with catalog-based forecasts": [[2, "working-with-catalog-based-forecasts"], [157, null]], "Working with conventional gridded forecasts": [[2, "working-with-conventional-gridded-forecasts"]], "Working with quadtree-gridded forecasts": [[2, "working-with-quadtree-gridded-forecasts"]], "Write catalog": [[150, "write-catalog"]], "Writing custom loader functions": [[0, "writing-custom-loader-functions"]], "csep.core.catalog_evaluations.calibration_test": [[11, null]], "csep.core.catalog_evaluations.magnitude_test": [[12, null]], "csep.core.catalog_evaluations.number_test": [[13, null]], "csep.core.catalog_evaluations.pseudolikelihood_test": [[14, null]], "csep.core.catalog_evaluations.spatial_test": [[15, null]], "csep.core.catalogs.AbstractBaseCatalog": [[16, null]], "csep.core.catalogs.CSEPCatalog": [[17, null]], "csep.core.catalogs.CSEPCatalog.apply_mct": [[18, null]], "csep.core.catalogs.CSEPCatalog.event_count": [[19, null]], "csep.core.catalogs.CSEPCatalog.filter": [[20, null]], "csep.core.catalogs.CSEPCatalog.filter_spatial": [[21, null]], "csep.core.catalogs.CSEPCatalog.from_dataframe": [[22, null]], "csep.core.catalogs.CSEPCatalog.from_dict": [[23, null]], "csep.core.catalogs.CSEPCatalog.get_bvalue": [[24, null]], "csep.core.catalogs.CSEPCatalog.get_csep_format": [[25, null]], "csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events": [[26, null]], "csep.core.catalogs.CSEPCatalog.get_datetimes": [[27, null]], "csep.core.catalogs.CSEPCatalog.get_depths": [[28, null]], "csep.core.catalogs.CSEPCatalog.get_epoch_times": [[29, null]], "csep.core.catalogs.CSEPCatalog.get_latitudes": [[30, null]], "csep.core.catalogs.CSEPCatalog.get_longitudes": [[31, null]], "csep.core.catalogs.CSEPCatalog.get_magnitudes": [[32, null]], "csep.core.catalogs.CSEPCatalog.length_in_seconds": [[33, null]], "csep.core.catalogs.CSEPCatalog.load_ascii_catalogs": [[34, null]], "csep.core.catalogs.CSEPCatalog.load_catalog": [[35, null]], 
"csep.core.catalogs.CSEPCatalog.load_json": [[36, null]], "csep.core.catalogs.CSEPCatalog.magnitude_counts": [[37, null]], "csep.core.catalogs.CSEPCatalog.plot": [[38, null]], "csep.core.catalogs.CSEPCatalog.spatial_counts": [[39, null]], "csep.core.catalogs.CSEPCatalog.spatial_magnitude_counts": [[40, null]], "csep.core.catalogs.CSEPCatalog.to_dataframe": [[41, null]], "csep.core.catalogs.CSEPCatalog.to_dict": [[42, null]], "csep.core.catalogs.CSEPCatalog.update_catalog_stats": [[43, null]], "csep.core.catalogs.CSEPCatalog.write_ascii": [[44, null]], "csep.core.catalogs.CSEPCatalog.write_json": [[45, null]], "csep.core.catalogs.UCERF3Catalog": [[46, null]], "csep.core.forecasts.CatalogForecast": [[47, null]], "csep.core.forecasts.CatalogForecast.get_dataframe": [[48, null]], "csep.core.forecasts.CatalogForecast.get_expected_rates": [[49, null]], "csep.core.forecasts.CatalogForecast.load_ascii": [[50, null]], "csep.core.forecasts.CatalogForecast.magnitude_counts": [[51, null]], "csep.core.forecasts.CatalogForecast.magnitudes": [[52, null]], "csep.core.forecasts.CatalogForecast.min_magnitude": [[53, null]], "csep.core.forecasts.CatalogForecast.spatial_counts": [[54, null]], "csep.core.forecasts.CatalogForecast.write_ascii": [[55, null]], "csep.core.forecasts.GriddedForecast": [[56, null]], "csep.core.forecasts.GriddedForecast.data": [[57, null]], "csep.core.forecasts.GriddedForecast.event_count": [[58, null]], "csep.core.forecasts.GriddedForecast.from_custom": [[59, null]], "csep.core.forecasts.GriddedForecast.get_index_of": [[60, null]], "csep.core.forecasts.GriddedForecast.get_latitudes": [[61, null]], "csep.core.forecasts.GriddedForecast.get_longitudes": [[62, null]], "csep.core.forecasts.GriddedForecast.get_magnitude_index": [[63, null]], "csep.core.forecasts.GriddedForecast.get_magnitudes": [[64, null]], "csep.core.forecasts.GriddedForecast.get_rates": [[65, null]], "csep.core.forecasts.GriddedForecast.load_ascii": [[66, null]], "csep.core.forecasts.GriddedForecast.magnitude_counts": [[67, null]], "csep.core.forecasts.GriddedForecast.magnitudes": [[68, null]], "csep.core.forecasts.GriddedForecast.min_magnitude": [[69, null]], "csep.core.forecasts.GriddedForecast.plot": [[70, null]], "csep.core.forecasts.GriddedForecast.scale_to_test_date": [[71, null]], "csep.core.forecasts.GriddedForecast.spatial_counts": [[72, null]], "csep.core.forecasts.GriddedForecast.sum": [[73, null]], "csep.core.forecasts.GriddedForecast.target_event_rates": [[74, null]], "csep.core.poisson_evaluations.conditional_likelihood_test": [[75, null]], "csep.core.poisson_evaluations.likelihood_test": [[76, null]], "csep.core.poisson_evaluations.magnitude_test": [[77, null]], "csep.core.poisson_evaluations.number_test": [[78, null]], "csep.core.poisson_evaluations.paired_t_test": [[79, null]], "csep.core.poisson_evaluations.spatial_test": [[80, null]], "csep.core.poisson_evaluations.w_test": [[81, null]], "csep.core.regions.CartesianGrid2D": [[82, null]], "csep.core.regions.california_relm_region": [[83, null]], "csep.core.regions.create_space_magnitude_region": [[84, null]], "csep.core.regions.generate_aftershock_region": [[85, null]], "csep.core.regions.global_region": [[86, null]], "csep.core.regions.increase_grid_resolution": [[87, null]], "csep.core.regions.italy_csep_region": [[88, null]], "csep.core.regions.magnitude_bins": [[89, null]], "csep.core.regions.masked_region": [[90, null]], "csep.core.regions.parse_csep_template": [[91, null]], "csep.load_catalog": [[92, null]], "csep.load_catalog_forecast": [[93, 
null]], "csep.load_gridded_forecast": [[94, null]], "csep.load_stochastic_event_sets": [[95, null]], "csep.query_bsi": [[96, null]], "csep.query_comcat": [[97, null]], "csep.utils.basic_types.AdaptiveHistogram": [[98, null]], "csep.utils.calc.bin1d_vec": [[99, null]], "csep.utils.calc.discretize": [[100, null]], "csep.utils.calc.find_nearest": [[101, null]], "csep.utils.calc.func_inverse": [[102, null]], "csep.utils.calc.nearest_index": [[103, null]], "csep.utils.comcat.get_event_by_id": [[104, null]], "csep.utils.comcat.search": [[105, null]], "csep.utils.plots.add_labels_for_publication": [[106, null]], "csep.utils.plots.plot_basemap": [[107, null]], "csep.utils.plots.plot_calibration_test": [[108, null]], "csep.utils.plots.plot_catalog": [[109, null]], "csep.utils.plots.plot_comparison_test": [[110, null]], "csep.utils.plots.plot_cumulative_events_versus_time": [[111, null]], "csep.utils.plots.plot_distribution_test": [[112, null]], "csep.utils.plots.plot_ecdf": [[113, null]], "csep.utils.plots.plot_histogram": [[114, null]], "csep.utils.plots.plot_likelihood_test": [[115, null]], "csep.utils.plots.plot_magnitude_histogram": [[116, null]], "csep.utils.plots.plot_magnitude_test": [[117, null]], "csep.utils.plots.plot_magnitude_versus_time": [[118, null]], "csep.utils.plots.plot_number_test": [[119, null]], "csep.utils.plots.plot_poisson_consistency_test": [[120, null]], "csep.utils.plots.plot_spatial_dataset": [[121, null]], "csep.utils.plots.plot_spatial_test": [[122, null]], "csep.utils.stats.binned_ecdf": [[123, null]], "csep.utils.stats.cumulative_square_diff": [[124, null]], "csep.utils.stats.ecdf": [[125, null]], "csep.utils.stats.get_quantiles": [[126, null]], "csep.utils.stats.greater_equal_ecdf": [[127, null]], "csep.utils.stats.less_equal_ecdf": [[128, null]], "csep.utils.stats.max_or_none": [[129, null]], "csep.utils.stats.min_or_none": [[130, null]], "csep.utils.stats.poisson_inverse_cdf": [[131, null]], "csep.utils.stats.poisson_joint_log_likelihood_ndarray": [[132, null]], "csep.utils.stats.poisson_log_likelihood": [[133, null]], "csep.utils.stats.sup_dist": [[134, null]], "csep.utils.stats.sup_dist_na": [[135, null]], "csep.utils.time_utils.create_utc_datetime": [[136, null]], "csep.utils.time_utils.datetime_to_utc_epoch": [[137, null]], "csep.utils.time_utils.days_to_millis": [[138, null]], "csep.utils.time_utils.decimal_year": [[139, null]], "csep.utils.time_utils.epoch_time_to_utc_datetime": [[140, null]], "csep.utils.time_utils.millis_to_days": [[141, null]], "csep.utils.time_utils.strptime_to_utc_datetime": [[142, null]], "csep.utils.time_utils.strptime_to_utc_epoch": [[143, null]], "csep.utils.time_utils.timedelta_from_years": [[144, null]], "csep.utils.time_utils.utc_now_datetime": [[145, null]], "csep.utils.time_utils.utc_now_epoch": [[146, null]], "pyCSEP: Tools for Earthquake Forecast Developers": [[8, null]], "v0.6.0": [[149, "v0-6-0"]]}, "docnames": ["concepts/catalogs", "concepts/evaluations", "concepts/forecasts", "concepts/plots", "concepts/regions", "getting_started/core_concepts", "getting_started/installing", "getting_started/theory", "index", "reference/api_reference", "reference/developer_notes", "reference/generated/csep.core.catalog_evaluations.calibration_test", "reference/generated/csep.core.catalog_evaluations.magnitude_test", "reference/generated/csep.core.catalog_evaluations.number_test", "reference/generated/csep.core.catalog_evaluations.pseudolikelihood_test", "reference/generated/csep.core.catalog_evaluations.spatial_test", 
"reference/generated/csep.core.catalogs.AbstractBaseCatalog", "reference/generated/csep.core.catalogs.CSEPCatalog", "reference/generated/csep.core.catalogs.CSEPCatalog.apply_mct", "reference/generated/csep.core.catalogs.CSEPCatalog.event_count", "reference/generated/csep.core.catalogs.CSEPCatalog.filter", "reference/generated/csep.core.catalogs.CSEPCatalog.filter_spatial", "reference/generated/csep.core.catalogs.CSEPCatalog.from_dataframe", "reference/generated/csep.core.catalogs.CSEPCatalog.from_dict", "reference/generated/csep.core.catalogs.CSEPCatalog.get_bvalue", "reference/generated/csep.core.catalogs.CSEPCatalog.get_csep_format", "reference/generated/csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events", "reference/generated/csep.core.catalogs.CSEPCatalog.get_datetimes", "reference/generated/csep.core.catalogs.CSEPCatalog.get_depths", "reference/generated/csep.core.catalogs.CSEPCatalog.get_epoch_times", "reference/generated/csep.core.catalogs.CSEPCatalog.get_latitudes", "reference/generated/csep.core.catalogs.CSEPCatalog.get_longitudes", "reference/generated/csep.core.catalogs.CSEPCatalog.get_magnitudes", "reference/generated/csep.core.catalogs.CSEPCatalog.length_in_seconds", "reference/generated/csep.core.catalogs.CSEPCatalog.load_ascii_catalogs", "reference/generated/csep.core.catalogs.CSEPCatalog.load_catalog", "reference/generated/csep.core.catalogs.CSEPCatalog.load_json", "reference/generated/csep.core.catalogs.CSEPCatalog.magnitude_counts", "reference/generated/csep.core.catalogs.CSEPCatalog.plot", "reference/generated/csep.core.catalogs.CSEPCatalog.spatial_counts", "reference/generated/csep.core.catalogs.CSEPCatalog.spatial_magnitude_counts", "reference/generated/csep.core.catalogs.CSEPCatalog.to_dataframe", "reference/generated/csep.core.catalogs.CSEPCatalog.to_dict", "reference/generated/csep.core.catalogs.CSEPCatalog.update_catalog_stats", "reference/generated/csep.core.catalogs.CSEPCatalog.write_ascii", "reference/generated/csep.core.catalogs.CSEPCatalog.write_json", "reference/generated/csep.core.catalogs.UCERF3Catalog", "reference/generated/csep.core.forecasts.CatalogForecast", "reference/generated/csep.core.forecasts.CatalogForecast.get_dataframe", "reference/generated/csep.core.forecasts.CatalogForecast.get_expected_rates", "reference/generated/csep.core.forecasts.CatalogForecast.load_ascii", "reference/generated/csep.core.forecasts.CatalogForecast.magnitude_counts", "reference/generated/csep.core.forecasts.CatalogForecast.magnitudes", "reference/generated/csep.core.forecasts.CatalogForecast.min_magnitude", "reference/generated/csep.core.forecasts.CatalogForecast.spatial_counts", "reference/generated/csep.core.forecasts.CatalogForecast.write_ascii", "reference/generated/csep.core.forecasts.GriddedForecast", "reference/generated/csep.core.forecasts.GriddedForecast.data", "reference/generated/csep.core.forecasts.GriddedForecast.event_count", "reference/generated/csep.core.forecasts.GriddedForecast.from_custom", "reference/generated/csep.core.forecasts.GriddedForecast.get_index_of", "reference/generated/csep.core.forecasts.GriddedForecast.get_latitudes", "reference/generated/csep.core.forecasts.GriddedForecast.get_longitudes", "reference/generated/csep.core.forecasts.GriddedForecast.get_magnitude_index", "reference/generated/csep.core.forecasts.GriddedForecast.get_magnitudes", "reference/generated/csep.core.forecasts.GriddedForecast.get_rates", "reference/generated/csep.core.forecasts.GriddedForecast.load_ascii", 
"reference/generated/csep.core.forecasts.GriddedForecast.magnitude_counts", "reference/generated/csep.core.forecasts.GriddedForecast.magnitudes", "reference/generated/csep.core.forecasts.GriddedForecast.min_magnitude", "reference/generated/csep.core.forecasts.GriddedForecast.plot", "reference/generated/csep.core.forecasts.GriddedForecast.scale_to_test_date", "reference/generated/csep.core.forecasts.GriddedForecast.spatial_counts", "reference/generated/csep.core.forecasts.GriddedForecast.sum", "reference/generated/csep.core.forecasts.GriddedForecast.target_event_rates", "reference/generated/csep.core.poisson_evaluations.conditional_likelihood_test", "reference/generated/csep.core.poisson_evaluations.likelihood_test", "reference/generated/csep.core.poisson_evaluations.magnitude_test", "reference/generated/csep.core.poisson_evaluations.number_test", "reference/generated/csep.core.poisson_evaluations.paired_t_test", "reference/generated/csep.core.poisson_evaluations.spatial_test", "reference/generated/csep.core.poisson_evaluations.w_test", "reference/generated/csep.core.regions.CartesianGrid2D", "reference/generated/csep.core.regions.california_relm_region", "reference/generated/csep.core.regions.create_space_magnitude_region", "reference/generated/csep.core.regions.generate_aftershock_region", "reference/generated/csep.core.regions.global_region", "reference/generated/csep.core.regions.increase_grid_resolution", "reference/generated/csep.core.regions.italy_csep_region", "reference/generated/csep.core.regions.magnitude_bins", "reference/generated/csep.core.regions.masked_region", "reference/generated/csep.core.regions.parse_csep_template", "reference/generated/csep.load_catalog", "reference/generated/csep.load_catalog_forecast", "reference/generated/csep.load_gridded_forecast", "reference/generated/csep.load_stochastic_event_sets", "reference/generated/csep.query_bsi", "reference/generated/csep.query_comcat", "reference/generated/csep.utils.basic_types.AdaptiveHistogram", "reference/generated/csep.utils.calc.bin1d_vec", "reference/generated/csep.utils.calc.discretize", "reference/generated/csep.utils.calc.find_nearest", "reference/generated/csep.utils.calc.func_inverse", "reference/generated/csep.utils.calc.nearest_index", "reference/generated/csep.utils.comcat.get_event_by_id", "reference/generated/csep.utils.comcat.search", "reference/generated/csep.utils.plots.add_labels_for_publication", "reference/generated/csep.utils.plots.plot_basemap", "reference/generated/csep.utils.plots.plot_calibration_test", "reference/generated/csep.utils.plots.plot_catalog", "reference/generated/csep.utils.plots.plot_comparison_test", "reference/generated/csep.utils.plots.plot_cumulative_events_versus_time", "reference/generated/csep.utils.plots.plot_distribution_test", "reference/generated/csep.utils.plots.plot_ecdf", "reference/generated/csep.utils.plots.plot_histogram", "reference/generated/csep.utils.plots.plot_likelihood_test", "reference/generated/csep.utils.plots.plot_magnitude_histogram", "reference/generated/csep.utils.plots.plot_magnitude_test", "reference/generated/csep.utils.plots.plot_magnitude_versus_time", "reference/generated/csep.utils.plots.plot_number_test", "reference/generated/csep.utils.plots.plot_poisson_consistency_test", "reference/generated/csep.utils.plots.plot_spatial_dataset", "reference/generated/csep.utils.plots.plot_spatial_test", "reference/generated/csep.utils.stats.binned_ecdf", "reference/generated/csep.utils.stats.cumulative_square_diff", 
"reference/generated/csep.utils.stats.ecdf", "reference/generated/csep.utils.stats.get_quantiles", "reference/generated/csep.utils.stats.greater_equal_ecdf", "reference/generated/csep.utils.stats.less_equal_ecdf", "reference/generated/csep.utils.stats.max_or_none", "reference/generated/csep.utils.stats.min_or_none", "reference/generated/csep.utils.stats.poisson_inverse_cdf", "reference/generated/csep.utils.stats.poisson_joint_log_likelihood_ndarray", "reference/generated/csep.utils.stats.poisson_log_likelihood", "reference/generated/csep.utils.stats.sup_dist", "reference/generated/csep.utils.stats.sup_dist_na", "reference/generated/csep.utils.time_utils.create_utc_datetime", "reference/generated/csep.utils.time_utils.datetime_to_utc_epoch", "reference/generated/csep.utils.time_utils.days_to_millis", "reference/generated/csep.utils.time_utils.decimal_year", "reference/generated/csep.utils.time_utils.epoch_time_to_utc_datetime", "reference/generated/csep.utils.time_utils.millis_to_days", "reference/generated/csep.utils.time_utils.strptime_to_utc_datetime", "reference/generated/csep.utils.time_utils.strptime_to_utc_epoch", "reference/generated/csep.utils.time_utils.timedelta_from_years", "reference/generated/csep.utils.time_utils.utc_now_datetime", "reference/generated/csep.utils.time_utils.utc_now_epoch", "reference/glossary", "reference/publications", "reference/roadmap", "tutorials/catalog_filtering", "tutorials/catalog_forecast_evaluation", "tutorials/gridded_forecast_evaluation", "tutorials/index", "tutorials/plot_customizations", "tutorials/plot_gridded_forecast", "tutorials/quadtree_gridded_forecast_evaluation", "tutorials/working_with_catalog_forecasts"], "envversion": {"sphinx": 62, "sphinx.domains.c": 3, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 9, "sphinx.domains.index": 1, "sphinx.domains.javascript": 3, "sphinx.domains.math": 2, "sphinx.domains.python": 4, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "sphinx.ext.intersphinx": 1, "sphinx.ext.todo": 2, "sphinx.ext.viewcode": 1}, "filenames": ["concepts/catalogs.rst", "concepts/evaluations.rst", "concepts/forecasts.rst", "concepts/plots.rst", "concepts/regions.rst", "getting_started/core_concepts.rst", "getting_started/installing.rst", "getting_started/theory.rst", "index.rst", "reference/api_reference.rst", "reference/developer_notes.rst", "reference/generated/csep.core.catalog_evaluations.calibration_test.rst", "reference/generated/csep.core.catalog_evaluations.magnitude_test.rst", "reference/generated/csep.core.catalog_evaluations.number_test.rst", "reference/generated/csep.core.catalog_evaluations.pseudolikelihood_test.rst", "reference/generated/csep.core.catalog_evaluations.spatial_test.rst", "reference/generated/csep.core.catalogs.AbstractBaseCatalog.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.apply_mct.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.event_count.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.filter.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.filter_spatial.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.from_dataframe.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.from_dict.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.get_bvalue.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.get_csep_format.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events.rst", 
"reference/generated/csep.core.catalogs.CSEPCatalog.get_datetimes.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.get_depths.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.get_epoch_times.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.get_latitudes.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.get_longitudes.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.get_magnitudes.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.length_in_seconds.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.load_ascii_catalogs.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.load_catalog.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.load_json.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.magnitude_counts.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.plot.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.spatial_counts.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.spatial_magnitude_counts.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.to_dataframe.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.to_dict.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.update_catalog_stats.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.write_ascii.rst", "reference/generated/csep.core.catalogs.CSEPCatalog.write_json.rst", "reference/generated/csep.core.catalogs.UCERF3Catalog.rst", "reference/generated/csep.core.forecasts.CatalogForecast.rst", "reference/generated/csep.core.forecasts.CatalogForecast.get_dataframe.rst", "reference/generated/csep.core.forecasts.CatalogForecast.get_expected_rates.rst", "reference/generated/csep.core.forecasts.CatalogForecast.load_ascii.rst", "reference/generated/csep.core.forecasts.CatalogForecast.magnitude_counts.rst", "reference/generated/csep.core.forecasts.CatalogForecast.magnitudes.rst", "reference/generated/csep.core.forecasts.CatalogForecast.min_magnitude.rst", "reference/generated/csep.core.forecasts.CatalogForecast.spatial_counts.rst", "reference/generated/csep.core.forecasts.CatalogForecast.write_ascii.rst", "reference/generated/csep.core.forecasts.GriddedForecast.rst", "reference/generated/csep.core.forecasts.GriddedForecast.data.rst", "reference/generated/csep.core.forecasts.GriddedForecast.event_count.rst", "reference/generated/csep.core.forecasts.GriddedForecast.from_custom.rst", "reference/generated/csep.core.forecasts.GriddedForecast.get_index_of.rst", "reference/generated/csep.core.forecasts.GriddedForecast.get_latitudes.rst", "reference/generated/csep.core.forecasts.GriddedForecast.get_longitudes.rst", "reference/generated/csep.core.forecasts.GriddedForecast.get_magnitude_index.rst", "reference/generated/csep.core.forecasts.GriddedForecast.get_magnitudes.rst", "reference/generated/csep.core.forecasts.GriddedForecast.get_rates.rst", "reference/generated/csep.core.forecasts.GriddedForecast.load_ascii.rst", "reference/generated/csep.core.forecasts.GriddedForecast.magnitude_counts.rst", "reference/generated/csep.core.forecasts.GriddedForecast.magnitudes.rst", "reference/generated/csep.core.forecasts.GriddedForecast.min_magnitude.rst", "reference/generated/csep.core.forecasts.GriddedForecast.plot.rst", "reference/generated/csep.core.forecasts.GriddedForecast.scale_to_test_date.rst", "reference/generated/csep.core.forecasts.GriddedForecast.spatial_counts.rst", "reference/generated/csep.core.forecasts.GriddedForecast.sum.rst", "reference/generated/csep.core.forecasts.GriddedForecast.target_event_rates.rst", 
"reference/generated/csep.core.poisson_evaluations.conditional_likelihood_test.rst", "reference/generated/csep.core.poisson_evaluations.likelihood_test.rst", "reference/generated/csep.core.poisson_evaluations.magnitude_test.rst", "reference/generated/csep.core.poisson_evaluations.number_test.rst", "reference/generated/csep.core.poisson_evaluations.paired_t_test.rst", "reference/generated/csep.core.poisson_evaluations.spatial_test.rst", "reference/generated/csep.core.poisson_evaluations.w_test.rst", "reference/generated/csep.core.regions.CartesianGrid2D.rst", "reference/generated/csep.core.regions.california_relm_region.rst", "reference/generated/csep.core.regions.create_space_magnitude_region.rst", "reference/generated/csep.core.regions.generate_aftershock_region.rst", "reference/generated/csep.core.regions.global_region.rst", "reference/generated/csep.core.regions.increase_grid_resolution.rst", "reference/generated/csep.core.regions.italy_csep_region.rst", "reference/generated/csep.core.regions.magnitude_bins.rst", "reference/generated/csep.core.regions.masked_region.rst", "reference/generated/csep.core.regions.parse_csep_template.rst", "reference/generated/csep.load_catalog.rst", "reference/generated/csep.load_catalog_forecast.rst", "reference/generated/csep.load_gridded_forecast.rst", "reference/generated/csep.load_stochastic_event_sets.rst", "reference/generated/csep.query_bsi.rst", "reference/generated/csep.query_comcat.rst", "reference/generated/csep.utils.basic_types.AdaptiveHistogram.rst", "reference/generated/csep.utils.calc.bin1d_vec.rst", "reference/generated/csep.utils.calc.discretize.rst", "reference/generated/csep.utils.calc.find_nearest.rst", "reference/generated/csep.utils.calc.func_inverse.rst", "reference/generated/csep.utils.calc.nearest_index.rst", "reference/generated/csep.utils.comcat.get_event_by_id.rst", "reference/generated/csep.utils.comcat.search.rst", "reference/generated/csep.utils.plots.add_labels_for_publication.rst", "reference/generated/csep.utils.plots.plot_basemap.rst", "reference/generated/csep.utils.plots.plot_calibration_test.rst", "reference/generated/csep.utils.plots.plot_catalog.rst", "reference/generated/csep.utils.plots.plot_comparison_test.rst", "reference/generated/csep.utils.plots.plot_cumulative_events_versus_time.rst", "reference/generated/csep.utils.plots.plot_distribution_test.rst", "reference/generated/csep.utils.plots.plot_ecdf.rst", "reference/generated/csep.utils.plots.plot_histogram.rst", "reference/generated/csep.utils.plots.plot_likelihood_test.rst", "reference/generated/csep.utils.plots.plot_magnitude_histogram.rst", "reference/generated/csep.utils.plots.plot_magnitude_test.rst", "reference/generated/csep.utils.plots.plot_magnitude_versus_time.rst", "reference/generated/csep.utils.plots.plot_number_test.rst", "reference/generated/csep.utils.plots.plot_poisson_consistency_test.rst", "reference/generated/csep.utils.plots.plot_spatial_dataset.rst", "reference/generated/csep.utils.plots.plot_spatial_test.rst", "reference/generated/csep.utils.stats.binned_ecdf.rst", "reference/generated/csep.utils.stats.cumulative_square_diff.rst", "reference/generated/csep.utils.stats.ecdf.rst", "reference/generated/csep.utils.stats.get_quantiles.rst", "reference/generated/csep.utils.stats.greater_equal_ecdf.rst", "reference/generated/csep.utils.stats.less_equal_ecdf.rst", "reference/generated/csep.utils.stats.max_or_none.rst", "reference/generated/csep.utils.stats.min_or_none.rst", "reference/generated/csep.utils.stats.poisson_inverse_cdf.rst", 
"reference/generated/csep.utils.stats.poisson_joint_log_likelihood_ndarray.rst", "reference/generated/csep.utils.stats.poisson_log_likelihood.rst", "reference/generated/csep.utils.stats.sup_dist.rst", "reference/generated/csep.utils.stats.sup_dist_na.rst", "reference/generated/csep.utils.time_utils.create_utc_datetime.rst", "reference/generated/csep.utils.time_utils.datetime_to_utc_epoch.rst", "reference/generated/csep.utils.time_utils.days_to_millis.rst", "reference/generated/csep.utils.time_utils.decimal_year.rst", "reference/generated/csep.utils.time_utils.epoch_time_to_utc_datetime.rst", "reference/generated/csep.utils.time_utils.millis_to_days.rst", "reference/generated/csep.utils.time_utils.strptime_to_utc_datetime.rst", "reference/generated/csep.utils.time_utils.strptime_to_utc_epoch.rst", "reference/generated/csep.utils.time_utils.timedelta_from_years.rst", "reference/generated/csep.utils.time_utils.utc_now_datetime.rst", "reference/generated/csep.utils.time_utils.utc_now_epoch.rst", "reference/glossary.rst", "reference/publications.rst", "reference/roadmap.rst", "tutorials/catalog_filtering.rst", "tutorials/catalog_forecast_evaluation.rst", "tutorials/gridded_forecast_evaluation.rst", "tutorials/index.rst", "tutorials/plot_customizations.rst", "tutorials/plot_gridded_forecast.rst", "tutorials/quadtree_gridded_forecast_evaluation.rst", "tutorials/working_with_catalog_forecasts.rst"], "indexentries": {"__init__() (csep.core.catalogs.abstractbasecatalog method)": [[16, "csep.core.catalogs.AbstractBaseCatalog.__init__", false]], "__init__() (csep.core.catalogs.csepcatalog method)": [[17, "csep.core.catalogs.CSEPCatalog.__init__", false]], "__init__() (csep.core.catalogs.ucerf3catalog method)": [[46, "csep.core.catalogs.UCERF3Catalog.__init__", false]], "__init__() (csep.core.forecasts.catalogforecast method)": [[47, "csep.core.forecasts.CatalogForecast.__init__", false]], "__init__() (csep.core.forecasts.griddedforecast method)": [[56, "csep.core.forecasts.GriddedForecast.__init__", false]], "__init__() (csep.core.regions.cartesiangrid2d method)": [[82, "csep.core.regions.CartesianGrid2D.__init__", false]], "__init__() (csep.utils.basic_types.adaptivehistogram method)": [[98, "csep.utils.basic_types.AdaptiveHistogram.__init__", false]], "abstractbasecatalog (class in csep.core.catalogs)": [[16, "csep.core.catalogs.AbstractBaseCatalog", false]], "adaptivehistogram (class in csep.utils.basic_types)": [[98, "csep.utils.basic_types.AdaptiveHistogram", false]], "add_labels_for_publication() (in module csep.utils.plots)": [[106, "csep.utils.plots.add_labels_for_publication", false]], "apply_mct() (csep.core.catalogs.csepcatalog method)": [[18, "csep.core.catalogs.CSEPCatalog.apply_mct", false]], "bin1d_vec() (in module csep.utils.calc)": [[99, "csep.utils.calc.bin1d_vec", false]], "binned_ecdf() (in module csep.utils.stats)": [[123, "csep.utils.stats.binned_ecdf", false]], "calibration_test() (in module csep.core.catalog_evaluations)": [[11, "csep.core.catalog_evaluations.calibration_test", false]], "california_relm_region() (in module csep.core.regions)": [[83, "csep.core.regions.california_relm_region", false]], "cartesiangrid2d (class in csep.core.regions)": [[82, "csep.core.regions.CartesianGrid2D", false]], "catalogforecast (class in csep.core.forecasts)": [[47, "csep.core.forecasts.CatalogForecast", false]], "conditional_likelihood_test() (in module csep.core.poisson_evaluations)": [[75, "csep.core.poisson_evaluations.conditional_likelihood_test", false]], 
"create_space_magnitude_region() (in module csep.core.regions)": [[84, "csep.core.regions.create_space_magnitude_region", false]], "create_utc_datetime() (in module csep.utils.time_utils)": [[136, "csep.utils.time_utils.create_utc_datetime", false]], "csep": [[9, "module-csep", false]], "csep.core.catalog_evaluations": [[1, "module-csep.core.catalog_evaluations", false], [9, "module-csep.core.catalog_evaluations", false]], "csep.core.catalogs": [[0, "module-csep.core.catalogs", false], [9, "module-csep.core.catalogs", false]], "csep.core.forecasts": [[9, "module-csep.core.forecasts", false]], "csep.core.poisson_evaluations": [[1, "module-csep.core.poisson_evaluations", false], [9, "module-csep.core.poisson_evaluations", false]], "csep.core.regions": [[4, "module-csep.core.regions", false], [9, "module-csep.core.regions", false]], "csep.utils.basic_types": [[4, "module-csep.utils.basic_types", false], [9, "module-csep.utils.basic_types", false]], "csep.utils.calc": [[9, "module-csep.utils.calc", false]], "csep.utils.comcat": [[9, "module-csep.utils.comcat", false]], "csep.utils.plots": [[9, "module-csep.utils.plots", false]], "csep.utils.stats": [[9, "module-csep.utils.stats", false]], "csep.utils.time_utils": [[9, "module-csep.utils.time_utils", false]], "csepcatalog (class in csep.core.catalogs)": [[17, "csep.core.catalogs.CSEPCatalog", false]], "cumulative_square_diff() (in module csep.utils.stats)": [[124, "csep.utils.stats.cumulative_square_diff", false]], "data (csep.core.forecasts.griddedforecast property)": [[57, "csep.core.forecasts.GriddedForecast.data", false]], "datetime_to_utc_epoch() (in module csep.utils.time_utils)": [[137, "csep.utils.time_utils.datetime_to_utc_epoch", false]], "days_to_millis() (in module csep.utils.time_utils)": [[138, "csep.utils.time_utils.days_to_millis", false]], "decimal_year() (in module csep.utils.time_utils)": [[139, "csep.utils.time_utils.decimal_year", false]], "discretize() (in module csep.utils.calc)": [[100, "csep.utils.calc.discretize", false]], "ecdf() (in module csep.utils.stats)": [[125, "csep.utils.stats.ecdf", false]], "epoch_time_to_utc_datetime() (in module csep.utils.time_utils)": [[140, "csep.utils.time_utils.epoch_time_to_utc_datetime", false]], "event_count (csep.core.catalogs.csepcatalog property)": [[19, "csep.core.catalogs.CSEPCatalog.event_count", false]], "event_count (csep.core.forecasts.griddedforecast property)": [[58, "csep.core.forecasts.GriddedForecast.event_count", false]], "filter() (csep.core.catalogs.csepcatalog method)": [[20, "csep.core.catalogs.CSEPCatalog.filter", false]], "filter_spatial() (csep.core.catalogs.csepcatalog method)": [[21, "csep.core.catalogs.CSEPCatalog.filter_spatial", false]], "find_nearest() (in module csep.utils.calc)": [[101, "csep.utils.calc.find_nearest", false]], "from_custom() (csep.core.forecasts.griddedforecast class method)": [[2, "csep.core.forecasts.GriddedForecast.from_custom", false], [59, "csep.core.forecasts.GriddedForecast.from_custom", false]], "from_dataframe() (csep.core.catalogs.csepcatalog class method)": [[22, "csep.core.catalogs.CSEPCatalog.from_dataframe", false]], "from_dict() (csep.core.catalogs.csepcatalog class method)": [[23, "csep.core.catalogs.CSEPCatalog.from_dict", false]], "func_inverse() (in module csep.utils.calc)": [[102, "csep.utils.calc.func_inverse", false]], "generate_aftershock_region() (in module csep.core.regions)": [[85, "csep.core.regions.generate_aftershock_region", false]], "get_bvalue() (csep.core.catalogs.csepcatalog method)": [[24, 
"csep.core.catalogs.CSEPCatalog.get_bvalue", false]], "get_csep_format() (csep.core.catalogs.csepcatalog method)": [[25, "csep.core.catalogs.CSEPCatalog.get_csep_format", false]], "get_cumulative_number_of_events() (csep.core.catalogs.csepcatalog method)": [[26, "csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events", false]], "get_dataframe() (csep.core.forecasts.catalogforecast method)": [[48, "csep.core.forecasts.CatalogForecast.get_dataframe", false]], "get_datetimes() (csep.core.catalogs.csepcatalog method)": [[27, "csep.core.catalogs.CSEPCatalog.get_datetimes", false]], "get_depths() (csep.core.catalogs.csepcatalog method)": [[28, "csep.core.catalogs.CSEPCatalog.get_depths", false]], "get_epoch_times() (csep.core.catalogs.csepcatalog method)": [[29, "csep.core.catalogs.CSEPCatalog.get_epoch_times", false]], "get_event_by_id() (in module csep.utils.comcat)": [[104, "csep.utils.comcat.get_event_by_id", false]], "get_expected_rates() (csep.core.forecasts.catalogforecast method)": [[49, "csep.core.forecasts.CatalogForecast.get_expected_rates", false]], "get_index_of() (csep.core.forecasts.griddedforecast method)": [[60, "csep.core.forecasts.GriddedForecast.get_index_of", false]], "get_latitudes() (csep.core.catalogs.csepcatalog method)": [[30, "csep.core.catalogs.CSEPCatalog.get_latitudes", false]], "get_latitudes() (csep.core.forecasts.griddedforecast method)": [[61, "csep.core.forecasts.GriddedForecast.get_latitudes", false]], "get_longitudes() (csep.core.catalogs.csepcatalog method)": [[31, "csep.core.catalogs.CSEPCatalog.get_longitudes", false]], "get_longitudes() (csep.core.forecasts.griddedforecast method)": [[62, "csep.core.forecasts.GriddedForecast.get_longitudes", false]], "get_magnitude_index() (csep.core.forecasts.griddedforecast method)": [[63, "csep.core.forecasts.GriddedForecast.get_magnitude_index", false]], "get_magnitudes() (csep.core.catalogs.csepcatalog method)": [[32, "csep.core.catalogs.CSEPCatalog.get_magnitudes", false]], "get_magnitudes() (csep.core.forecasts.griddedforecast method)": [[64, "csep.core.forecasts.GriddedForecast.get_magnitudes", false]], "get_quantiles() (in module csep.utils.stats)": [[126, "csep.utils.stats.get_quantiles", false]], "get_rates() (csep.core.forecasts.griddedforecast method)": [[65, "csep.core.forecasts.GriddedForecast.get_rates", false]], "global_region() (in module csep.core.regions)": [[86, "csep.core.regions.global_region", false]], "greater_equal_ecdf() (in module csep.utils.stats)": [[127, "csep.utils.stats.greater_equal_ecdf", false]], "griddedforecast (class in csep.core.forecasts)": [[56, "csep.core.forecasts.GriddedForecast", false]], "increase_grid_resolution() (in module csep.core.regions)": [[87, "csep.core.regions.increase_grid_resolution", false]], "italy_csep_region() (in module csep.core.regions)": [[88, "csep.core.regions.italy_csep_region", false]], "length_in_seconds() (csep.core.catalogs.csepcatalog method)": [[33, "csep.core.catalogs.CSEPCatalog.length_in_seconds", false]], "less_equal_ecdf() (in module csep.utils.stats)": [[128, "csep.utils.stats.less_equal_ecdf", false]], "likelihood_test() (in module csep.core.poisson_evaluations)": [[76, "csep.core.poisson_evaluations.likelihood_test", false]], "load_ascii() (csep.core.forecasts.catalogforecast class method)": [[50, "csep.core.forecasts.CatalogForecast.load_ascii", false]], "load_ascii() (csep.core.forecasts.griddedforecast class method)": [[66, "csep.core.forecasts.GriddedForecast.load_ascii", false]], "load_ascii_catalogs() 
[searchindex.js — Sphinx-generated search index for the pyCSEP documentation (machine-built, not hand-edited). Recoverable content: an alphabetical index mapping API names to page anchors (CSEPCatalog.load_ascii_catalogs through CSEPCatalog.write_json and related entries); the object inventory of the public API — top-level loaders and catalog queries in csep (load_catalog, load_catalog_forecast, load_gridded_forecast, load_stochastic_event_sets, query_bsi, query_comcat), catalog classes in csep.core.catalogs (AbstractBaseCatalog, CSEPCatalog, UCERF3Catalog), forecast classes in csep.core.forecasts (CatalogForecast, GriddedForecast), evaluation functions in csep.core.catalog_evaluations (calibration_test, magnitude_test, number_test, pseudolikelihood_test, spatial_test) and csep.core.poisson_evaluations (conditional_likelihood_test, likelihood_test, magnitude_test, number_test, paired_t_test, spatial_test, w_test), region utilities in csep.core.regions (CartesianGrid2D, california_relm_region, create_space_magnitude_region, generate_aftershock_region, global_region, increase_grid_resolution, italy_csep_region, magnitude_bins, masked_region, parse_csep_template), and helpers in csep.utils.basic_types, csep.utils.calc, csep.utils.comcat, csep.utils.plots, csep.utils.stats, and csep.utils.time_utils; an object-type legend (module, function, class, method, property); and a dictionary of stemmed search tokens mapped to document IDs.]
150, 152], "1155": 152, "116": [7, 151, 156], "117": 150, "118": [7, 151], "1184": [7, 148], "1191667": 150, "1195": [7, 148], "12": [4, 7, 107, 109, 121, 150, 151, 152, 155], "1229": 7, "123456": 7, "1244": 150, "125": [2, 7, 97, 150, 152], "1251": 7, "1253": 7, "1261": 7, "129": 157, "13": 150, "130000": 150, "1371": 150, "14": [7, 115, 119, 122, 150, 151], "14277887344360352": 151, "1431667": 7, "1444": 152, "1464571952819824": 151, "15": [18, 150, 154], "150024414": 156, "1511": 7, "1514": 7, "1519": 7, "16": 106, "16000366": 156, "162": 7, "1630": [7, 148], "1648": [7, 148], "167": 7, "17": [7, 148, 152], "177": 152, "1785": 7, "179": 156, "18": [7, 150, 151, 152, 154], "180": [154, 156], "1823333": 150, "18568954606255": 156, "18568954606256": 156, "1857488155365": 154, "19": [7, 150, 151, 152], "194": 157, "1976": 156, "1992": [2, 7, 151], "1995": [20, 154], "1d": [2, 40, 99, 154], "1e": [37, 40, 63], "1f": 154, "1x0": 156, "2": [0, 2, 4, 5, 7, 8, 18, 78, 83, 87, 88, 96, 97, 122, 150, 151, 152, 156, 157], "20": [7, 105, 150, 151, 152, 154, 157], "200": [151, 157], "2000": [151, 157], "20000": 105, "2003": 24, "2005": 7, "2006": [0, 7, 18, 148, 152, 155], "2007": [1, 7, 148, 152], "2009": 152, "2010": [1, 7, 148, 154], "2010a": 7, "2010b": 7, "2011": [1, 7, 74, 148, 152, 155], "2011a": 7, "2011b": 7, "2013": 156, "2014": [7, 156], "2015": [7, 154], "2017": 7, "2018": 7, "2019": [7, 148, 150, 156], "2020": [1, 7, 14, 148], "2021": 149, "2022": 10, "2024": 150, "205": 157, "20580387115478516": 151, "21": [6, 96, 150], "215": 151, "215977907180786": 150, "218679904937744": 151, "22": [6, 150, 154], "227": 157, "23": [150, 156], "234": 150, "23948097229003906": 151, "24": [7, 151], "25": [10, 156], "250000": [7, 151], "258": 157, "26": [152, 156], "2667": 7, "27": 7, "27271203001144": 156, "27393269538879395": 151, "2788333": 150, "28": [7, 151], "284": 157, "28465": 156, "285": [7, 151], "29": [7, 148], "2998352": 150, "2d": [2, 4, 57, 72, 82, 121, 154], "2f": 154, "2sampl": 135, "3": [0, 1, 2, 4, 5, 6, 7, 8, 85, 115, 119, 140, 149, 150, 151, 157], "30": [2, 7, 151, 157], "300": [151, 157], "3000": [151, 157], "31": [7, 97, 150, 152], "31020069122314453": 151, "31435371": 152, "3175408840179443": 151, "31937098503112793": 7, "32": [96, 150, 152], "320000": 7, "3243332": 150, "33": [7, 151], "3308333": 7, "34": 152, "34024786949157715": 151, "346": 152, "35": [7, 150, 151, 152, 154], "350000": 150, "3502": 156, "351": 157, "356": 150, "36": [7, 150, 151], "3736095428466797": 151, "3868333": 150, "39": [7, 156], "3975": 150, "4": [0, 1, 2, 4, 6, 7, 20, 97, 109, 115, 117, 119, 120, 121, 122, 151, 152, 156, 157], "40": [2, 151, 157], "400": [151, 157], "4000": [151, 157], "40767502784729004": 151, "4096": 156, "41": [7, 150, 152], "414": 157, "42": 150, "43": 97, "44": 7, "4401": 7, "448": 150, "45": [7, 151], "46": [150, 154], "47": 152, "473": 157, "478": 157, "48": 154, "4840": 7, "49": 150, "497692346572876": 151, "5": [0, 2, 4, 7, 18, 96, 97, 107, 109, 120, 121, 150, 151, 154, 156, 157], "50": [96, 151, 157], "500": [151, 157], "5000": [151, 157], "5008": 150, "5018": 150, "506331920623779": 152, "513": 157, "53": 7, "530000": 152, "532": 152, "54": [150, 152], "542": 152, "55": [0, 115, 117, 119, 154], "551": 150, "559": 157, "5643150806427": 151, "57": [7, 151], "574": 157, "58": 7, "580000": 150, "59": [7, 74, 148, 150], "6": [0, 7, 109, 115, 117, 119, 120, 121, 122, 151, 152, 154, 156, 157], "60": [20, 151, 157], "600": [151, 157], "6000": [151, 157], "63": 156, "630000": 150, 
"636": 157, "6378137": 107, "650000": 150, "651": 156, "66": [7, 156], "6744518280029297": 151, "697": 156, "699": 157, "7": [0, 6, 7, 150, 151, 152, 154, 157], "70": [151, 157], "700": [151, 157], "7000": [151, 157], "705": [7, 151], "728": [74, 148], "74": 156, "747": [74, 148], "749": 157, "77": 156, "770000": 152, "7736868858337402": 151, "7775": 150, "78": [7, 148], "8": [2, 5, 7, 109, 115, 117, 119, 120, 121, 122, 151, 154, 156, 157], "80": [151, 157], "800": [151, 157], "8000": [151, 157], "811": 156, "814": 157, "831": 7, "8398": 152, "84": 107, "843": 7, "8499099999999998e": 2, "85": 4, "8543333333333": 150, "87": 156, "870000": 150, "88": 7, "8836994171142578": 151, "8875": 150, "89": 7, "9": [6, 7, 151, 154, 156, 157], "90": [7, 148, 151, 157], "900": [151, 157], "9000": [151, 157], "901": [7, 151], "938": 155, "9399449825286865": 7, "95": [1, 2, 4, 7, 115, 117, 119, 122, 151, 154, 156, 157], "950000": 150, "95047692260089": 156, "96": [2, 7, 148, 152], "973": 157, "975": 7, "9788333": [7, 152], "A": [1, 2, 4, 7, 14, 74, 78, 81, 147, 148, 154], "As": [0, 4, 5, 7], "At": [5, 78], "But": [4, 156], "By": [7, 8, 13, 44, 151, 152], "For": [0, 1, 4, 5, 6, 7, 8, 22, 66, 147, 152, 155, 156], "If": [0, 1, 2, 4, 5, 6, 7, 8, 10, 24, 41, 44, 47, 55, 71, 74, 98, 107, 109, 114, 120, 121, 129, 130, 142, 152], "In": [0, 1, 2, 4, 7, 81, 99, 152, 156], "It": [0, 4, 5, 6, 7, 44, 47, 78, 81], "NOT": [104, 105], "No": [78, 81], "Not": 7, "On": 151, "One": 5, "That": 7, "The": [0, 1, 2, 4, 5, 6, 7, 8, 10, 13, 14, 23, 41, 47, 57, 59, 66, 74, 75, 76, 77, 80, 81, 82, 83, 85, 86, 88, 89, 94, 99, 105, 109, 114, 147, 150, 151, 152, 154, 155, 156, 157], "Then": [2, 4, 7, 156], "There": [0, 2, 4, 47], "These": [0, 1, 2, 4, 5, 7, 10, 13, 71, 150, 154], "To": [6, 7, 95, 151, 154], "Will": 105, "With": 4, "_": [7, 152, 157], "__init__": [1, 16, 17, 46, 47, 56, 82, 98], "_a": 7, "_b": 7, "_catalog": 41, "_get_basemap": 107, "_get_marker_styl": 120, "_j": 7, "_static": 150, "_t_test_ndarrai": 1, "_version": 10, "_x": 7, "abil": [0, 5], "abl": [0, 4], "about": [0, 1, 2, 4, 5, 7, 20, 75, 76, 77, 80, 140, 147], "abov": [0, 4, 7, 93, 112, 115, 117, 119, 120, 122], "absolut": 134, "abstract": 16, "abstractbasecatalog": [1, 2, 5, 9, 13, 14, 20, 47, 49, 74, 79, 92, 95, 114, 118, 150], "acceler": [0, 5], "accep": 156, "accept": [0, 4, 82, 140, 150, 156], "access": [4, 7, 16, 17, 46, 96, 97, 107, 150, 151], "accommod": [1, 2, 4, 5, 98, 151], "accomplish": 0, "accord": [1, 2, 5, 38, 70, 71, 98, 120, 121], "account": [0, 7, 10], "accur": 7, "accuraci": 152, "achiev": 151, "acquir": 4, "across": [6, 7], "act": [1, 4], "acta": [7, 74, 148], "activ": [4, 6, 7, 8], "actual": [0, 7, 151], "ad": [1, 2], "add": [0, 1, 6, 7, 106, 149], "addit": [0, 1, 2, 4, 7, 8, 16, 17, 46, 92, 98, 114], "addition": [0, 1, 5], "adher": 8, "adict": 23, "adjac": 4, "administ": 5, "advanc": 154, "advantag": 1, "advic": 5, "after": [0, 4, 71, 105, 152, 156], "aftershock": [0, 1, 2, 7, 147, 150], "aftershock_region": 150, "afterward": 0, "ag": 7, "again": [5, 7], "against": [1, 7, 11, 114, 152], "aggreg": [7, 98], "agre": [7, 8], "aim": 7, "aka": 29, "al": [0, 1, 7, 14, 18, 154], "alarm": 152, "alert": 105, "alertlevel": 105, "align": 135, "all": [0, 1, 2, 4, 6, 7, 10, 20, 28, 30, 31, 32, 48, 73, 104, 109, 120, 156, 157], "allow": [0, 2, 4, 5, 6, 7, 66, 81, 98, 105, 114, 156], "along": [5, 7, 47, 66, 152, 155, 156], "alpha": [1, 79, 109, 121, 154], "alpha_exp": [121, 154], "alreadi": [0, 1, 25, 74, 151, 156, 157], "also": [0, 1, 2, 4, 5, 6, 7, 34, 104, 
150, 151, 152, 157], "altern": 7, "although": 4, "alwai": [1, 4, 114], "am": 7, "america": [7, 148], "amount": 87, "an": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 38, 44, 57, 59, 75, 76, 77, 78, 81, 82, 100, 104, 109, 114, 118, 129, 130, 140, 147, 150, 152, 154, 156, 157], "analog": 7, "analys": 152, "analysi": [4, 6, 147, 152], "anchor": 98, "angel": 8, "ani": [0, 1, 4, 5, 6, 7, 8, 13, 47, 100, 114, 152, 155, 156], "annal": 7, "annot": [107, 114], "anoth": [0, 1, 4, 7], "anyth": [1, 78, 81], "api": [0, 104, 105, 150, 151, 152], "appear": 83, "append": [4, 44], "appl": 7, "appli": [0, 7, 16, 17, 18, 46, 47, 92, 104, 105, 109, 150, 151, 152, 156], "applic": [0, 4], "apply_filt": [7, 47, 92, 96, 97, 151], "apply_mct": [0, 47, 150], "appreci": 0, "approach": [1, 2, 4, 6, 7, 152], "appropri": [2, 7, 151], "approxim": [4, 7, 107, 109], "apprx": 107, "ar": [0, 1, 2, 4, 5, 6, 7, 8, 9, 10, 16, 17, 22, 24, 46, 47, 57, 59, 60, 66, 71, 74, 75, 76, 77, 78, 80, 95, 98, 99, 104, 105, 107, 109, 114, 121, 124, 134, 135, 140, 147, 150, 151, 152, 154, 155, 156, 157], "arang": 0, "arbitrari": [4, 82, 152, 155], "area": [4, 47, 107, 152], "arg": [0, 1, 11, 56, 89, 139, 154], "args_dict": 154, "argument": [0, 1, 2, 7, 38, 47, 59, 70, 88, 93, 94, 109, 115, 117, 119, 120, 121, 122, 151, 157], "around": [81, 85, 104, 105, 131, 133], "arrai": [0, 1, 4, 5, 16, 17, 26, 31, 40, 44, 46, 57, 63, 78, 79, 81, 82, 88, 89, 99, 100, 101, 103, 111, 114, 121, 125, 127, 128, 129, 130, 135, 150, 154, 156], "array_lik": 24, "articl": 7, "asc": 105, "ascend": 105, "ascii": [0, 2, 5, 34, 35, 44, 47, 50, 55, 66, 93, 94, 155], "ascii_fnam": 66, "asim": [8, 78], "aso": 2, "aspect": [1, 4, 107], "assert_allclos": 157, "assess": [5, 7, 8, 152], "assign": [2, 7, 94, 151, 154], "associ": [0, 4, 5, 7, 24, 40, 47, 65, 105], "assum": [0, 1, 2, 7, 33, 47, 66, 78, 87, 152, 156], "assumpt": [7, 75, 76, 77, 80], "astronom": 144, "at00ndf1fr": 105, "attempt": [4, 93, 94], "attribut": [5, 16, 17, 46, 47, 56, 57, 82, 98, 155], "attributeerror": 94, "author": [78, 139], "authorit": 5, "auto": [107, 109, 115, 117, 119, 121, 122], "automat": [10, 47, 105, 107, 109, 121], "avail": [8, 16, 17, 46, 147, 150, 151, 152, 155, 156, 157], "averag": [78, 81, 85, 107, 150, 157], "avoid": [6, 7, 105], "awar": [71, 142], "ax": [0, 7, 38, 70, 107, 108, 109, 110, 111, 112, 113, 114, 115, 117, 118, 119, 120, 121, 122, 151, 152, 154, 155, 157], "ax_numb": 156, "ax_spati": 156, "axi": [1, 79, 107, 114, 152, 154, 156], "azimuth": 105, "b": [0, 7, 24, 99], "back": [0, 1, 6, 152], "background": [107, 109, 121], "baesd": 2, "balanc": [124, 134], "ball": 10, "bao": 8, "bar": [7, 120], "base": [0, 9, 12, 13, 14, 15, 16, 20, 34, 47, 56, 90, 94, 102, 107, 116, 120, 122, 132, 144, 148, 150, 153, 154, 155], "basemap": [107, 109, 121, 154], "basic": [47, 157], "baslin": 7, "bayliss": 8, "bayona": 8, "becaus": [0, 1, 2, 4, 5, 6, 104, 105, 151, 152, 155], "becom": [0, 7, 8], "been": [4, 156, 157], "befor": [1, 4, 6, 71, 75, 76, 77, 80, 98, 105, 151, 156, 157], "beforehand": 49, "begin": 7, "behav": 114, "behavior": 7, "behind": 7, "being": [1, 5, 75, 76, 77, 80], "below": [1, 2, 4, 7, 111, 115, 117, 119, 120, 122, 152], "benchmark": [7, 8, 79], "benchmark_forecast": 79, "bespok": [4, 94], "best": 7, "better": [0, 4, 79], "between": [0, 1, 4, 5, 7, 81, 82, 93, 109, 121, 131, 133, 154], "beutin": 8, "bewar": [124, 134], "beyond": 4, "big": [0, 5, 7], "bigg": 7, "bigger": 4, "bin": [1, 2, 4, 5, 6, 7, 9, 14, 24, 37, 39, 40, 47, 49, 52, 53, 57, 59, 63, 64, 66, 67, 69, 72, 78, 81, 88, 89, 94, 
98, 99, 100, 114, 115, 117, 119, 122, 132, 133, 147, 151, 156, 157], "bin_edg": [0, 89, 98, 100], "binari": [0, 6, 46, 94], "bind": [151, 157], "black": [7, 107, 109, 120, 121, 154], "blank": [44, 115, 117, 119, 120, 122], "block": [38, 70], "boldsymbol": 7, "bollettino": 150, "bool": [1, 11, 20, 21, 24, 37, 38, 44, 47, 55, 66, 70, 71, 72, 74, 79, 88, 92, 96, 97, 99, 104, 105, 107, 109, 114, 115, 117, 119, 120, 121, 122], "boolean": 4, "border": [107, 109, 121, 154], "both": [0, 1, 4, 5, 7, 9, 71, 78, 152, 156], "boud": 4, "bound": [0, 2, 4, 7, 72, 88, 96, 97, 99, 151, 157], "box": [4, 7, 72, 96, 97, 109], "break": 2, "bristol": 8, "broader": 7, "bsi": 96, "bssa": 106, "bug": 8, "build": [7, 10, 13, 90, 150], "built": [1, 6], "bull": 7, "bulletin": [7, 148], "bump": 10, "bval": 24, "c": [5, 7, 74, 148], "calcul": [5, 6, 7, 81, 132, 154], "calibr": [1, 11], "california": [4, 7, 8, 47, 83, 148, 157], "california_relm_region": [7, 47, 85, 151, 157], "call": [0, 1, 2, 4, 5, 7, 10, 20, 34, 38, 44, 59, 70, 75, 76, 77, 80, 94, 114, 115, 117, 119, 122, 152, 156], "callabl": [2, 59, 85, 93, 94], "can": [0, 1, 2, 4, 5, 6, 7, 10, 20, 34, 38, 44, 47, 57, 66, 71, 74, 78, 81, 82, 83, 88, 89, 93, 107, 109, 112, 114, 115, 119, 121, 122, 147, 150, 151, 152, 154, 155, 156, 157], "cannot": [7, 16, 41, 47], "capabl": [4, 152], "capac": 151, "capsiz": [120, 154], "care": 1, "carre": [38, 70], "carri": 7, "cartesian": [2, 39, 40, 54, 72, 82, 99, 107, 154], "cartesian2d": 82, "cartesiangrid": 82, "cartesiangrid2d": [0, 2, 4, 47, 59, 60, 83, 85, 86, 88, 90, 121], "cartopi": [6, 107, 109, 121, 154], "case": [0, 1, 2, 4, 7, 74, 99], "cast": 41, "cat": 22, "cat_test": 156, "cat_train_2013": 156, "catalog": [3, 8, 12, 13, 14, 15, 47, 48, 49, 75, 76, 77, 78, 79, 80, 92, 93, 95, 96, 97, 104, 105, 109, 114, 116, 118, 122, 153], "catalog_data": 1, "catalog_evalu": [7, 151], "catalog_fil": 0, "catalog_filt": 150, "catalog_forecast_evalu": 151, "catalog_format": [0, 47], "catalog_id": [0, 2, 16, 17, 44, 46], "catalog_load": 93, "catalog_train": 156, "catalog_typ": [0, 47], "catalogforecast": [13, 14, 15, 93, 151, 157], "catalogs_iter": 49, "catalogspatialtestresult": 15, "catalogu": 7, "caus": 7, "cd": 6, "cdf": [7, 127, 128, 131, 135], "cdf1": [124, 134], "cdf2": [124, 134], "cell": [2, 4, 7, 14, 66, 88, 151, 152, 156, 157], "center": [2, 5, 7, 8, 66, 85, 104, 154], "central": 7, "central_latitud": 107, "central_longitud": 154, "certain": [1, 4, 105], "chain": [20, 115, 117, 119, 122], "chanc": 5, "chang": [4, 5, 147, 149], "changelog": 10, "channel": 6, "characterist": [5, 149, 152], "check": [0, 7, 8, 10, 74, 124, 134], "child": 41, "children": 4, "choic": [2, 4, 5, 74, 151], "choos": [4, 152, 155], "chosen": [2, 7, 152, 155], "christophersen": [7, 74, 148], "ci": 10, "ci38457511": 150, "circ": 154, "circl": 7, "circular": [85, 150], "circumst": 1, "cl": 4, "cl_": 7, "clabel": [121, 154], "clabel_fonts": 121, "clarifi": 7, "class": [0, 2, 4, 5, 9, 16, 17, 23, 34, 41, 42, 46, 47, 50, 56, 57, 59, 82, 83, 88, 92, 94, 95, 96, 97, 98, 118, 151, 156], "classic": 7, "classmethod": [2, 4, 22, 23, 34, 35, 36, 50, 59, 66, 82], "clean": 6, "clear": 7, "clearli": 7, "clim": [121, 157], "clone": 6, "close": 82, "cluster": 7, "cm": 154, "cmap": [121, 154], "coars": 4, "coast": [107, 109, 121], "coastlin": [107, 109, 121, 154], "code": [0, 1, 6, 7, 8, 9, 105, 150, 151, 152, 154, 155, 156, 157], "codebas": 135, "codemeta": 10, "collaboratori": [7, 8], "collaps": 2, "collect": [0, 2, 7, 34, 49, 95, 147], "color": [107, 109, 120, 121, 
154], "colorbar": 121, "colormap": [121, 154], "colour": 7, "column": [1, 2, 4, 22, 44, 79, 156], "column_name_mapp": 156, "com": 6, "combin": [3, 4, 7, 10, 98], "comcat": [7, 97, 116, 150, 152, 154], "comcat_catalog": [7, 151], "come": [1, 4, 7, 81], "comfort": 0, "comma": [2, 44, 150], "command": [6, 10, 114], "commit": 6, "common": [0, 5], "commonli": [0, 3, 4, 9, 147], "commun": 105, "comp": 7, "comp_arg": 7, "compar": [4, 7, 120, 152, 157], "comparis": 4, "comparison": [40, 114, 148], "compet": 1, "complement": 7, "complet": [1, 4, 18, 47, 92, 107, 109, 121, 150, 154], "complex": [1, 2, 93], "compon": [2, 4, 5, 7], "comput": [1, 2, 4, 5, 6, 7, 11, 14, 16, 17, 37, 43, 46, 49, 66, 75, 76, 77, 78, 79, 80, 114, 123, 124, 125, 126, 133, 134, 135, 151], "compute_stat": [16, 17, 46], "concentr": 152, "concept": 71, "conceptu": [5, 7, 151], "concurr": 7, "cond_likelihood_test_result": 7, "conda": 10, "condit": [1, 7, 71, 75], "conditional_likelihood_test": [1, 7], "conduct": [8, 74], "conf": 10, "confid": 7, "conflict": 6, "connect": [150, 152], "consid": [2, 4, 7, 99], "consist": [2, 4, 5, 15, 59, 75, 100, 147, 151, 152, 156], "constant": [24, 37, 109, 154], "constraint": 5, "construct": [7, 47], "constructor": [2, 4, 34, 41, 56, 59], "contain": [0, 1, 2, 4, 5, 7, 9, 10, 23, 38, 39, 47, 50, 57, 59, 66, 70, 72, 74, 88, 93, 100, 115, 117, 119, 120, 121, 122, 125, 131, 147, 149, 154, 157], "context": [1, 8], "contigu": 5, "conting": 152, "continu": [2, 7, 147], "contribut": [0, 6, 105], "contributor": 105, "control": 151, "conveni": 4, "convent": 120, "converg": 7, "convers": 4, "convert": [0, 2, 41, 92, 93, 95, 137, 138, 139, 141, 142, 150, 156], "coordin": [2, 4, 5, 39, 40, 82, 90, 107], "copi": [6, 10, 71], "coppersmith": [85, 150], "core": [0, 1, 2, 4, 7, 93, 94, 96, 97, 120, 150, 151, 152, 155, 156, 157], "corner": [4, 83, 87, 154], "correct": [7, 10], "correctli": [94, 118], "correspond": [0, 1, 2, 4, 7, 40, 63, 86, 92, 95], "corssa": 7, "could": [0, 2, 34, 114], "count": [0, 2, 5, 7, 13, 37, 39, 40, 47, 51, 54, 67, 72, 78, 100, 105, 114, 132, 150, 151, 152, 156], "countri": [107, 109, 121, 154], "coupl": 2, "cours": 1, "cover": [1, 7, 157], "cpe": 7, "cr": [107, 109, 121, 154], "craft": 4, "creat": [0, 1, 2, 5, 6, 7, 8, 21, 22, 23, 39, 40, 59, 71, 82, 83, 84, 85, 86, 88, 136, 150, 151, 154, 156, 157], "create_space_magnitude_region": [7, 151, 157], "creation": [152, 155], "credit": [0, 10], "criteria": [4, 105], "critial": [115, 117, 119, 122], "critic": 7, "csep": [0, 1, 2, 4, 5, 6, 8, 9, 147, 149, 150, 151, 152, 154, 155, 156, 157], "csep1": [35, 66, 120, 139], "csep2": 44, "csep2_logo_cmyk": 150, "csep_ascii": 35, "csep_mw_bin": 37, "csepcatalog": [0, 1, 4, 15, 96, 97, 109, 156], "csv": [0, 5, 92, 95, 150, 156], "cticks_fonts": 121, "cum_dist": 124, "cuml": 7, "cumul": [0, 1, 7, 26, 113, 124, 134, 152], "cup": 7, "curat": 8, "current": [0, 2, 4, 14, 93, 145, 146], "curv": 149, "custom": [1, 16, 59, 82, 104, 105, 153], "cut": 88, "cyberinfrastructur": 8, "cylindr": 107, "d": [0, 1, 7, 74, 142, 143, 148], "d_": 7, "d_j": 7, "dai": [1, 7, 74, 79, 138, 141], "danijel": 8, "dash": [7, 152], "dat": [66, 83, 94], "data": [0, 1, 2, 4, 5, 6, 7, 8, 9, 13, 16, 17, 21, 46, 47, 49, 50, 55, 58, 59, 65, 71, 73, 74, 78, 98, 100, 104, 105, 111, 114, 118, 121, 151, 152, 154, 155, 156], "data1": 135, "data2": 135, "databas": [104, 105], "datafram": [22, 41, 48, 156], "dataset": [4, 7, 83, 88, 121, 135, 151, 152, 155, 157], "date": [6, 7, 71, 139, 150, 151, 152, 155, 156], "date_access": [16, 17, 46], 
"datetim": [0, 27, 29, 47, 56, 71, 96, 97, 105, 136, 137, 139, 140, 142, 144, 145, 150], "datetime_to_utc_epoch": [0, 150], "datum": 107, "deactiv": 6, "deal": 5, "decid": [1, 4], "decim": [71, 87, 139, 156], "decimal_year": 71, "decimal_year_to_utc_epoch": 156, "deck2csv": 0, "decreas": 152, "deep": 71, "def": [0, 1, 4], "default": [0, 1, 5, 6, 7, 11, 13, 24, 37, 44, 47, 88, 92, 104, 105, 107, 109, 115, 117, 119, 120, 121, 122, 151, 154, 155], "defin": [0, 1, 2, 4, 5, 7, 9, 23, 47, 66, 82, 88, 94, 100, 105, 107, 109, 114, 147, 150, 154], "definit": [1, 47], "degre": [4, 7, 105], "delet": 104, "delimit": 2, "delta": [66, 78], "delta1": 126, "delta2": 126, "delta_1": [7, 11, 78], "delta_2": [7, 11, 78], "demand": 151, "demonst": 7, "demonstr": [7, 150, 152, 156], "denot": [4, 7], "densiti": [7, 156], "depend": [1, 2, 4, 5, 6, 7, 9, 18, 47, 95, 114], "depth": [0, 2, 5, 28, 44, 96, 97, 105, 150], "depth_0": 2, "depth_1": 2, "der": 7, "deriv": [107, 109, 121], "descend": 105, "describ": [2, 4, 7, 14, 41, 104, 105, 115, 117, 119, 120, 122], "descript": [0, 1, 7, 44, 46, 154], "design": 7, "desir": [0, 4], "despit": 4, "detail": [1, 2, 4, 5, 7, 60, 71, 92], "detailev": 104, "determin": [0, 1, 4, 7, 85, 92, 93, 105, 152, 156], "dev": 6, "develop": [4, 5, 7, 9, 14], "df": [0, 22], "dfcat": 156, "dh": [4, 82, 86, 87, 98, 154], "dh_scale": [83, 88], "diagnos": 7, "diagnost": 7, "diagram": 152, "dict": [16, 17, 34, 38, 42, 46, 70, 85, 95, 109, 114, 115, 117, 119, 120, 121, 122], "dictionari": [0, 23, 38, 42, 70, 109, 115, 117, 119, 120, 122, 154], "did": [0, 7], "diff": 124, "differ": [1, 2, 4, 5, 7, 47, 75, 81, 105, 114, 150, 152, 156], "differenti": 2, "difficult": [4, 7], "difficulti": 0, "dimenion": 131, "dimens": 57, "direct": [0, 5], "directli": [0, 1, 7, 57, 151], "directori": [10, 34, 47, 50, 93, 95, 152, 155, 156], "dirti": 0, "discret": [1, 2, 5, 7, 39, 98], "discretis": 7, "discuss": 8, "disjoint": 147, "disk": 94, "displai": [7, 107, 109, 121, 154], "dist": [10, 135], "distanc": [4, 124, 134], "distinct": 7, "distribut": [1, 5, 6, 7, 11, 13, 75, 78, 81, 113, 124, 126, 131, 134, 154], "divid": 4, "dmw": [7, 89, 151, 157], "do": [0, 1, 4, 5, 7, 55, 104, 105, 150, 151, 156], "doc": 140, "document": [1, 2, 5, 8, 9, 92, 95, 114, 147, 149], "doe": [0, 2, 4, 5, 6, 7, 13, 14, 66, 99, 124, 125, 134, 147], "doesn": [0, 1], "doi": 7, "domain": 7, "domin": 7, "don": [0, 6], "done": [0, 1, 151, 157], "down": [1, 74, 79], "download": [6, 7, 150, 151, 152, 154, 155, 156, 157], "downsampl": 154, "dr": 7, "draft": 10, "draw": [7, 107, 120, 154], "drawback": 4, "drawn": 38, "dt": [44, 137], "dtype": [0, 22, 23, 46, 156], "duck": 1, "due": [7, 157], "dummi": 1, "dure": [0, 7, 148], "dyfi": 105, "e": [2, 4, 6, 7, 9, 20, 107, 109, 114, 148, 156], "e_i": 7, "e_n": 7, "each": [0, 1, 2, 4, 5, 7, 13, 37, 39, 40, 47, 72, 78, 81, 100, 120, 156, 157], "earli": [4, 9], "earlier": 7, "earth": 7, "earthquak": [0, 1, 2, 3, 5, 7, 74, 79, 83, 88, 104, 105, 109, 118, 148, 152, 155, 156], "easi": [2, 4, 152], "easiest": [4, 6], "easili": [0, 2, 82], "east": 107, "ecdf": [113, 123, 124, 127, 134, 135], "edg": [0, 2, 24, 52, 53, 59, 64, 69, 88, 89, 100, 151, 157], "edinburgh": 8, "edit": 6, "editor": 6, "edric": 8, "effect": [5, 7], "effici": [7, 74, 99, 132, 148], "effort": [0, 8], "eg": [1, 4], "either": [0, 78, 81, 93, 95, 114, 147, 151], "elimi": 7, "elimin": 75, "ellipsoid": 107, "ellp": 107, "els": [1, 55], "elsewher": [104, 105], "elst": 7, "empir": [5, 7, 13, 113, 126], "emploi": 78, "empti": [26, 40, 49], "en": 7, 
"enabl": 105, "enable_limit": 105, "encourag": 8, "end": [1, 2, 7, 47, 96, 97, 105, 150, 152, 154, 155, 156, 157], "end_dat": [7, 66, 71, 152, 155], "end_epoch": [7, 150, 151], "end_magnitud": 89, "end_tim": [7, 47, 56, 96, 97, 150, 151, 152, 154], "endian": [0, 5], "endtim": 105, "enforc": [2, 5], "enough": 2, "ensur": [6, 7, 8], "entir": [2, 5], "entropi": 7, "env": 6, "environ": 6, "epicent": [85, 150], "epicentr": 85, "epidem": [1, 2, 147], "epoch": [18, 29, 143, 146, 150], "epoch_tim": [137, 140], "epoch_time_milli": 140, "eq": 18, "eq10l11": 156, "eqc": 107, "equal": [0, 1, 7, 81, 89, 125, 131, 156], "equidist": 107, "equiv": 7, "equival": [7, 22], "err": 24, "error": [1, 24, 79, 152], "especi": [0, 2, 4], "esri": 154, "esri_imageri": [107, 109, 121, 154], "esri_relief": [107, 109, 121], "esri_terrain": [107, 109, 121, 154], "esri_topo": [107, 109, 121], "essenti": 0, "establish": 7, "estim": [0, 7, 24, 152], "et": [0, 1, 7, 14, 18, 154], "eta": [0, 1, 5, 7, 20, 46, 147, 148], "etc": [4, 151, 152, 157], "euchner": 7, "eval_result": [1, 120], "evalu": [0, 2, 3, 4, 7, 8, 11, 13, 20, 75, 76, 77, 79, 80, 81, 86, 112, 115, 117, 119, 120, 122, 148, 153, 155], "evaluation_result": [1, 11, 75, 76, 77, 79, 80, 108, 112, 115, 117, 119, 122], "evaluationresult": [1, 13, 75, 76, 77, 79, 80, 81, 112, 115, 117, 119, 120, 122], "even": [4, 7, 8, 98, 105], "event": [1, 2, 5, 7, 9, 13, 16, 17, 18, 19, 21, 23, 26, 27, 28, 29, 30, 31, 32, 37, 39, 40, 43, 44, 46, 47, 48, 49, 67, 74, 78, 81, 95, 104, 105, 114, 132, 150, 151, 152, 154, 156], "event_count": [1, 156, 157], "event_dtyp": 46, "event_epoch": 18, "event_id": [0, 2, 16, 17, 44, 46], "event_metadata": 0, "eventid": 104, "eventlist": [0, 16, 17, 46], "eventtyp": 105, "everi": [0, 2, 5], "everywher": 4, "exact": 156, "exampl": [0, 1, 2, 4, 5, 6, 7, 22, 59, 105, 107, 147, 150, 151, 152, 155, 156, 157], "example_rate_zoom": 156, "example_spatial_test": 152, "exce": 7, "except": [2, 151, 157], "excess": [115, 117, 119, 120, 122], "exclud": 7, "exclus": [2, 99, 104], "execut": 10, "exist": 14, "exmapl": 0, "exp": 7, "expand": [4, 98], "expect": [1, 2, 5, 7, 14, 22, 49, 51, 54, 78, 81, 132, 147, 149, 151, 156], "expected_forecast_count": 1, "expected_number_of_ev": 1, "expected_r": [47, 157], "experi": [2, 4, 5, 7, 8], "explain": [1, 2, 149], "explicit": [2, 7, 74], "explicitli": [4, 7, 41, 47], "exploit": 152, "explor": 4, "expon": 121, "exponenti": 154, "express": [1, 147], "extend": [0, 5, 6, 63, 99, 151, 157], "extens": 94, "extent": [38, 70, 107, 109, 121, 154], "extern": 9, "extra": 114, "extrem": 98, "f": [0, 6, 7, 44, 125, 142, 143, 150, 151, 156], "f4": 0, "f_": 7, "f_d": 7, "f_l": 7, "f_n": 7, "fabio": 8, "facilit": [4, 5, 9], "fact": 1, "factor": [83, 87, 88], "fail": [7, 24], "failur": 10, "fall": [7, 99], "fals": [1, 7, 11, 21, 37, 38, 41, 44, 47, 49, 54, 65, 66, 70, 71, 72, 74, 75, 76, 77, 79, 80, 81, 92, 96, 97, 99, 100, 104, 105, 107, 108, 109, 111, 113, 114, 118, 120, 121, 152, 154], "famili": [1, 4, 5, 47, 147], "familiar": 8, "fast": 109, "faster": 6, "fastest": 66, "fault": [85, 150], "favour": 7, "fd": 114, "fdsnw": [104, 105], "featur": [1, 4], "feature_color": 154, "feature_lw": 154, "feel": 7, "fetch": [7, 150, 151, 152, 154], "few": [0, 4, 7, 47], "field": [0, 7, 22, 148, 150], "fieldnam": 150, "fig": [0, 118], "figsiz": [7, 107, 109, 115, 117, 119, 120, 121, 122, 154], "figur": [7, 38, 39, 40, 70, 106, 107, 109, 112, 114, 115, 117, 119, 120, 121, 122, 152, 155], "file": [1, 5, 10, 34, 36, 44, 45, 47, 50, 66, 83, 88, 91, 93, 94, 
95, 109, 151, 152, 154, 155, 156], "filenam": [0, 16, 17, 34, 35, 36, 44, 45, 46, 47, 55, 92, 94, 95, 109, 114, 118, 121], "filenotfounderror": 94, "filepath": [34, 94, 152, 155, 156], "filter": [1, 5, 7, 9, 13, 15, 16, 17, 46, 47, 49, 92, 105, 151, 154, 157], "filter_magnitud": 47, "filter_spati": [0, 7, 47, 150, 151, 152], "filter_str": 0, "filter_tim": 47, "filterwarn": 7, "final": [0, 4, 7, 10], "find": [7, 78], "fine": 151, "first": [0, 1, 2, 4, 6, 7, 24, 94, 115, 117, 119, 120, 122], "firstli": 7, "fit": 1, "five": 7, "fix": [4, 7], "flag": [2, 4, 66, 107, 109, 120, 121], "flexibl": [1, 151], "float": [1, 18, 24, 40, 65, 78, 79, 81, 85, 89, 105, 107, 109, 115, 117, 119, 120, 121, 122, 127, 128, 140], "fname": [50, 55, 93, 94], "focal": 105, "focu": [4, 7], "folder": 10, "follow": [0, 1, 2, 4, 5, 6, 7, 8, 18, 66, 120, 150, 151, 152], "font": 107, "fontsiz": [115, 117, 119, 120, 122], "forc": [38, 100], "fore": [7, 75, 76, 77, 80], "forecast": [0, 3, 4, 12, 13, 14, 15, 16, 17, 34, 46, 75, 76, 77, 78, 79, 80, 81, 83, 86, 88, 91, 93, 94, 95, 116, 120, 121, 122, 131, 132, 133, 148, 153], "forecast_1": 81, "forecast_2": 81, "forecast_data": 156, "forecast_multi_grid": 156, "forecast_single_grid": 156, "forg": [6, 10], "fork": 6, "form": [0, 4, 7], "formal": 7, "format": [0, 1, 4, 5, 8, 16, 17, 25, 34, 35, 41, 44, 46, 47, 50, 55, 66, 92, 93, 94, 95, 142, 143, 150, 152, 155, 156, 157], "former": 5, "found": [2, 4, 7, 44, 150, 151, 152, 154], "four": [1, 2, 4], "frac": 7, "fraction": [7, 71, 144], "frame": [111, 156], "framework": [7, 8], "frederico": 8, "free": 5, "freedom": 7, "freeform": 0, "frequenc": 7, "friendli": 0, "from": [1, 2, 5, 7, 9, 10, 14, 18, 22, 23, 24, 27, 36, 44, 46, 48, 51, 54, 59, 66, 81, 89, 94, 100, 101, 103, 105, 107, 109, 112, 114, 115, 116, 117, 119, 120, 121, 122, 126, 131, 132, 135, 136, 139, 142, 143, 150, 152, 154, 155, 156, 157], "from_catalog": [4, 156], "from_custom": 2, "from_datafram": [0, 156], "from_dict": 0, "from_origin": 4, "from_polygon": 82, "from_quadkei": 4, "from_single_resolut": [4, 156], "fromtimestamp": 140, "fruther": 4, "full": [1, 5, 150, 151, 152, 154, 155, 156, 157], "fulli": 7, "func": [2, 59, 93, 94, 121], "func_arg": [2, 59], "function": [1, 2, 3, 4, 5, 7, 13, 20, 24, 34, 35, 41, 49, 59, 71, 74, 75, 76, 77, 80, 82, 85, 89, 92, 93, 94, 95, 98, 102, 104, 105, 107, 113, 114, 118, 124, 131, 134, 135, 140, 149, 150, 151, 152, 154, 155, 156, 157], "fundament": [1, 5], "further": [4, 7, 152], "futur": [2, 4, 9, 93], "g": [6, 7, 20, 109, 114], "gain": [7, 79], "galleri": [150, 151, 152, 153, 154, 155, 156, 157], "gamma": 7, "gamma_": 7, "gamma_l": 7, "gamma_m": 7, "gap": 105, "gcmt": 0, "ge": 7, "gear1": 154, "gear1_downsampled_fnam": 154, "gener": [1, 2, 4, 5, 7, 8, 9, 74, 75, 76, 77, 80, 92, 93, 95, 112, 115, 116, 117, 119, 147, 150, 151, 152, 153, 154, 155, 156, 157], "generallli": 7, "generate_aftershock_region": 150, "geoax": 151, "geograph": [4, 5, 105, 154], "geojson": [104, 105], "geolog": 0, "geophi": [7, 74, 148, 152], "geophys": 7, "geophysica": 7, "geospati": [4, 6], "geq": 7, "gerstenberg": [7, 74, 148], "get": [1, 8, 104, 105, 152, 154], "get_bbox": [109, 121], "get_cartesian": 154, "get_cumulative_number_of_ev": 0, "get_datafram": 22, "get_datetim": 118, "get_epoch_tim": 0, "get_event_by_id": [0, 150], "get_expected_r": [7, 157], "get_index_of": 4, "get_kagan_i1_scor": 152, "get_magnitud": [118, 154], "getter": 0, "gfz": 8, "git": 6, "github": [0, 1, 6, 8], "give": [1, 7], "given": [0, 1, 4, 7, 74, 81, 85, 88, 124, 127, 128, 129, 
130, 134, 139], "global": [4, 86, 98, 156], "globe": [4, 107, 109, 121], "go": [0, 1, 6, 150, 151, 152, 154, 155, 156, 157], "goal": 7, "goe": 0, "good": [5, 6], "googl": [107, 109, 121], "gov": [104, 105], "grain": 151, "graph": 152, "great": 1, "greater": [0, 7, 78, 81, 156], "green": [7, 105, 120], "greenwich": 107, "grid": [9, 21, 39, 40, 56, 61, 62, 70, 72, 75, 76, 77, 78, 79, 80, 81, 82, 83, 86, 87, 88, 94, 99, 107, 109, 121, 132, 153, 154, 157], "grid_fonts": [107, 109, 121], "grid_label": [107, 109, 121, 154], "gridded_forecast": [75, 76, 77, 78, 80], "gridded_forecast1": [1, 81], "gridded_forecast2": [1, 81], "gridded_forecast_1": 1, "gridded_forecast_2": 1, "gridded_forecast_evalu": 152, "griddedforecast": [1, 2, 49, 75, 76, 77, 79, 80, 94, 155, 156, 157], "gride": [81, 133], "group": 1, "guid": [0, 7], "guidelin": [0, 6, 8], "gven": 7, "gz": 10, "h": [7, 44, 107, 142, 143, 148], "h5": 94, "h_t": 7, "ha": [1, 2, 4, 7, 78, 81, 156, 157], "had": 156, "han": 8, "hand": [2, 98], "handi": 1, "handl": [2, 4, 6, 13, 23, 93, 114, 118, 140, 154], "happen": 0, "happi": 0, "hard": 94, "hardebeck": 7, "hart": 7, "hash": 99, "hat": 7, "have": [0, 1, 2, 4, 5, 6, 7, 22, 75, 105, 156, 157], "hazard": 147, "hbar": [120, 154], "hdf5": 94, "header": [44, 46, 55, 66], "header_dtyp": 46, "hearn": [0, 9], "height": 107, "heinig": 7, "helmstett": [0, 7, 18, 148], "helmstetter_aftershock": [7, 152], "helmstetter_aftershock_fnam": [7, 152], "helmstetter_m": 7, "helmstetter_mainshock": 155, "helmstetter_mainshock_fnam": [7, 155], "help": [0, 1, 2, 5, 6, 8, 9], "henc": 7, "here": [0, 1, 2, 4, 7, 9, 66, 104, 105, 151, 156], "hermann": 8, "hierarch": 4, "high": [4, 7, 148], "higher": 87, "highli": [6, 8], "highlight": 7, "hires_ssm_italy_fnam": 154, "hist": [115, 117, 119, 122], "histogram": [7, 112, 114, 115, 116, 117, 119, 122, 124], "histori": 7, "hold": 89, "hole": [4, 157], "hopefulli": 7, "horizon": [1, 2, 4, 74, 151, 152, 156, 157], "horizont": [7, 120], "host": [104, 105], "how": [0, 1, 2, 4, 5, 6, 7, 8, 20, 140, 150, 151, 152, 154, 155, 156], "howev": [2, 4, 7], "hte": 37, "html": 140, "http": [6, 7, 104, 105, 140], "human": [2, 16, 17, 46, 88], "hurukawa": 7, "hypothesi": [7, 81], "i": [0, 1, 2, 4, 5, 6, 7, 8, 9, 10, 14, 23, 25, 26, 33, 37, 38, 44, 47, 59, 63, 66, 70, 71, 75, 76, 77, 78, 79, 80, 81, 87, 88, 89, 91, 92, 93, 94, 99, 101, 103, 104, 105, 107, 109, 114, 121, 135, 142, 150, 151, 152, 154, 155, 156, 157], "i1": 149, "i8": 0, "i_": 7, "i_1scor": 152, "i_k": 7, "i_n": 7, "id": [0, 16, 17, 44, 46, 104, 156], "id_col": 44, "idea": [0, 8], "ident": [1, 47], "identif": [16, 17, 46], "identifi": [0, 2, 4, 5, 88, 104], "idm": 63, "idx": [60, 99], "ig_low": 1, "ig_upp": 1, "ignor": [7, 93], "igp": 7, "ii": [7, 8, 78], "ij": 7, "imag": 157, "imageri": 154, "imagin": 0, "immedi": 0, "imoto": [7, 74, 148], "impact": [1, 75], "implement": [0, 1, 2, 4, 5, 7, 8, 9, 16, 20, 71, 87, 94, 99, 100, 104, 105, 112, 115, 117, 118, 119, 122], "implicit": 4, "implicitli": 1, "import": [0, 1, 7, 75, 76, 77, 80, 150, 151, 152, 154, 155, 156, 157], "importantli": 4, "impos": 0, "improv": [4, 8], "in_plac": [20, 21, 71], "includ": [2, 4, 5, 6, 7, 8, 9, 10, 44, 47, 92, 95, 104, 107, 114, 147, 149, 150, 154], "includedelet": 104, "includesupersed": 104, "inclus": [2, 99, 151, 157], "inconsist": 7, "incorpor": [0, 4], "incorrect": 1, "increas": [2, 4, 24, 75, 76, 77, 80, 89, 99, 123], "increment": [2, 7], "incud": 104, "incur": 98, "independ": [1, 2, 5, 7, 148, 152, 155, 156], "index": [0, 4, 39, 40, 57, 60, 82, 99, 
103, 156], "indic": [2, 4, 7, 63, 79, 99], "individu": [0, 1, 2, 4, 7], "infin": [0, 2, 63, 99, 151, 157], "influenc": 7, "info": [107, 115, 117, 119, 122], "inform": [1, 2, 4, 5, 7, 9, 10, 16, 17, 23, 46, 47, 55, 79, 105, 140, 147, 150, 151, 152, 154, 157], "information_gain": 1, "infti": 7, "ingest": 1, "ingv": 0, "inject": [75, 76, 77, 80], "input": [4, 5, 9, 104, 105], "insid": [21, 82], "instanc": [4, 20, 21], "instanti": [0, 4, 156], "instead": [0, 7, 71, 98, 100], "instruct": 6, "int": [47, 75, 76, 77, 80, 85, 105, 107, 109, 121, 152], "int_": 7, "int_r": 7, "integ": [0, 78, 81, 150], "integr": [9, 72], "intend": [0, 1, 2, 4], "intens": [7, 105], "inter": 1, "interact": [1, 4, 5, 9, 114, 157], "interest": [0, 7, 65], "interfac": [0, 5, 16, 112, 114, 115, 117, 119, 122], "intern": [5, 8, 154], "internet": [150, 152], "interpol": 102, "interpret": [2, 7], "interv": 7, "introduc": 7, "intut": 4, "inumpyut": 114, "invers": [7, 131], "involv": [8, 10], "ipynb": [150, 151, 152, 154, 155, 156, 157], "isol": [7, 21], "issu": [0, 1, 2, 6, 7, 8, 10, 147], "itali": [7, 88, 154], "italian": [7, 88], "italiano": 150, "item": [109, 115, 117, 119, 120, 122], "iter": [0, 7, 11, 20, 47, 49, 95, 114], "its": [0, 4, 7, 8, 16], "iturrieta": 8, "j": [7, 74, 148, 152], "jackson": [7, 148], "januari": 10, "japan": 7, "jma": [0, 92], "job": 6, "joint": [7, 132], "joint_log_likelihood": 132, "jone": 7, "jordan": 7, "jose": 8, "json": [0, 10, 36, 42, 45, 152, 154], "jupyt": [150, 151, 152, 154, 155, 156, 157], "just": [0, 1, 2, 7, 16, 93, 98, 152], "k": [7, 135, 148], "k_": 7, "k_i": 7, "kagan": [7, 148, 149], "kanto": 7, "kappa": 7, "keep": [6, 7, 71], "kei": [0, 109, 115, 117, 119, 120, 121, 122], "kept": [2, 92], "keyword": [0, 2, 59, 93, 151, 157], "khawaja": 8, "kilmogorov": 11, "kilomet": 105, "kind": [4, 102], "kirsti": 8, "know": [0, 7], "known": [1, 2, 94, 98], "kramer": 7, "kwarg": [0, 2, 17, 22, 23, 34, 35, 36, 46, 50, 56, 59, 85, 92, 93, 94, 95, 96, 97, 102, 118], "l": [4, 115, 154, 156], "l_": 7, "l_result": 154, "l_test_exampl": 154, "label": [106, 107, 121, 154], "labels": 106, "labelspac": 109, "lack": 7, "lam": 131, "lambda": [7, 154, 156], "lambda_": 7, "lambda_1": 7, "lambda_2": 7, "lambda_a": 7, "lambda_b": 7, "lambda_c": 7, "lambda_j": 7, "lambda_n": 7, "lambda_u": 7, "lander": 7, "languag": [1, 2], "larg": [0, 7, 47], "larger": [0, 2, 4, 105], "largest": 2, "last": [2, 7, 10, 99, 125, 149, 151, 157], "lat": [2, 4, 5, 49, 60, 65, 87, 91, 156], "lat_0": [2, 66], "lat_1": [2, 66], "lat_max": [38, 107], "lat_min": [38, 107], "later": [6, 150], "latest": 6, "latitud": [0, 2, 4, 5, 7, 30, 44, 61, 65, 85, 96, 97, 105, 107, 150, 151, 152, 156], "latter": 5, "layer": 2, "le": 7, "lead": [4, 156], "leap": 71, "learn": 1, "least": [7, 78], "leav": 44, "left": [4, 52, 61, 62, 64, 83, 87, 154], "legaci": 2, "legend": [109, 154], "legend_loc": [109, 154], "len": [1, 100], "length": [4, 33, 65, 74, 75, 76, 77, 80, 85, 150], "leq": 7, "less": [4, 7, 101, 103, 105, 152], "let": [0, 1, 7], "lett": 7, "letter": [7, 148], "level": [0, 1, 4, 7, 10, 79, 105, 107, 109, 121, 150, 151, 152, 155, 156, 157], "liberti": 4, "librari": [8, 140, 154], "lightweight": 6, "like": [0, 1, 2, 4, 5, 6, 7, 20, 60, 63, 88, 99, 112, 114, 115, 117, 119, 122, 135], "likelihood": [1, 2, 5, 75, 76, 120, 132, 133, 148, 149, 152, 154, 156], "likelihood_test": [1, 7], "likelihood_test_result": 7, "liklihood": 133, "limit": [7, 20, 105], "line": [7, 107, 109, 114, 121, 152], "linear": [118, 152, 154], "linecolor": [107, 109, 121], 
"linewidth": [107, 109, 120, 121, 154], "link": [1, 10, 107, 109, 121, 156], "list": [0, 1, 4, 5, 16, 17, 23, 24, 27, 38, 40, 46, 47, 49, 63, 66, 87, 91, 105, 107, 109, 110, 114, 115, 117, 119, 120, 121, 122, 147, 150, 151, 154], "literatur": 147, "liuki": [7, 139], "live": [6, 95], "ll": [1, 151], "lo": 8, "load": [7, 34, 35, 36, 47, 50, 92, 93, 94, 95, 154], "load_ascii": [2, 59], "load_catalog": 0, "load_catalog_forecast": [7, 95, 151, 157], "load_evaluation_result": [152, 154], "load_gridded_forecast": [7, 152, 154, 155], "load_quadtree_forecast": 2, "load_stochastic_event_set": 93, "loader": [1, 2, 35, 47, 55, 59, 92, 93, 94], "loadtxt": 156, "locat": [0, 4, 5, 7, 16, 17, 44, 46, 147, 151, 152, 154, 155, 156], "log": [7, 70, 132, 133, 149, 154], "log10": 154, "log_": 154, "logarithm": [7, 152], "logic": [0, 1, 20, 93, 152], "logical_and": 154, "loglikelihood": 7, "lognitud": 62, "lon": [2, 4, 5, 49, 60, 65, 87, 91, 154, 156], "lon_0": [2, 66, 107], "lon_1": [2, 66], "lon_max": [38, 107], "lon_min": [38, 107], "long": [1, 7, 147, 148, 152], "longitud": [0, 2, 4, 5, 7, 31, 44, 65, 85, 96, 97, 105, 150, 151, 152, 156], "look": [0, 1, 2, 44, 57], "loop": [0, 151, 157], "loss": 74, "lossi": 0, "losspag": 105, "lost": 1, "love": 1, "low": [0, 4, 7], "low_bound": 154, "lower": [0, 2, 4, 7, 61, 62, 83, 87, 99, 151, 154, 157], "lowest": 69, "lt": 7, "m": [0, 4, 6, 44, 74, 112, 117, 142, 143, 148], "m5": 88, "m7": [0, 150], "m71_epoch": 150, "m71_event_id": 150, "m_i": 7, "m_is_j": 7, "m_main": 18, "m_w": 154, "m_x": 7, "machin": 6, "machineri": 8, "maco": 6, "made": [0, 5, 47, 74, 75, 76, 77, 80, 114], "maechl": [7, 8], "mag": [2, 63, 65, 156], "mag_0": [2, 66], "mag_1": [2, 66], "mag_bin": [0, 24, 37, 40], "mag_complet": 18, "mag_scal": [109, 154], "mag_test_result": 7, "mag_tick": [109, 154], "magma": 154, "magnitud": [1, 2, 4, 5, 12, 14, 16, 17, 18, 20, 24, 32, 37, 40, 46, 47, 49, 51, 53, 57, 59, 63, 64, 65, 66, 67, 69, 72, 77, 79, 83, 84, 85, 86, 88, 89, 92, 94, 96, 97, 105, 114, 116, 118, 147, 148, 152, 156], "magnitude_bin": [7, 151, 157], "magnitude_count": 0, "magnitude_test": [1, 7], "magnitude_test_result": 7, "mai": [7, 150], "main": [5, 152, 155, 156], "mainshock": [0, 4, 7, 18, 21, 85, 150], "mainshock_lat": 85, "mainshock_lon": 85, "mainshock_mw": 85, "maintain": [4, 82], "major": 5, "make": [0, 1, 2, 4, 5, 7, 38, 70, 151], "mamba": 6, "manag": 6, "mani": 105, "manipul": [0, 6, 150], "manual": 152, "map": [4, 39, 40, 82, 154, 156], "marcu": 8, "mark": 7, "markedgriddeddataset": [2, 59], "marker": 109, "markercolor": [109, 154], "markers": [109, 154], "marzocchi": [7, 24, 148], "masha": 139, "mask": 82, "master": [6, 10], "match": [1, 2, 7, 104, 105], "math": 149, "mathcal": [7, 154], "mathemat": 147, "mathrm": [7, 154], "matplolib": 114, "matplotlib": [0, 7, 38, 70, 107, 109, 111, 112, 114, 115, 117, 119, 120, 121, 122, 152, 154, 155], "matrix": 131, "max": [7, 8, 96, 97, 98, 129, 150, 151, 152, 156], "max_depth": [96, 97], "max_latitud": [96, 97], "max_longitud": [96, 97], "max_mw": [7, 151, 157], "maxcdi": 105, "maxdepth": 105, "maxgap": 105, "maximum": [4, 7, 96, 97, 105, 156], "maxlatitud": 105, "maxlongitud": 105, "maxmagnitud": 105, "maxmmi": 105, "maxradiu": 105, "maxradiuskm": 105, "maxsig": 105, "mbin": 156, "mc": 18, "md": 10, "meadian": 7, "mean": [0, 1, 4, 7, 79, 81, 151, 152], "meant": 105, "mechan": [4, 105], "median": 81, "mele": 7, "member": [0, 94], "memori": [5, 47], "mention": 4, "mercal": 105, "mercat": 154, "merg": [10, 135], "meridian": 107, "met": 71, 
"meta_data": 0, "metadata": [16, 17, 46, 95], "metadata_dict": 0, "method": [0, 1, 2, 4, 5, 7, 9, 16, 17, 46, 47, 56, 82, 85, 98, 107, 151, 152, 157], "metr": 107, "metric": [1, 124], "michael": [7, 148], "middl": 107, "midpoint": [40, 83, 88], "might": [0, 1, 2, 5, 7], "mike": [0, 9], "milli": [18, 138, 141], "million": [4, 156], "millisecond": [137, 140], "milner": [7, 148], "min": [1, 7, 96, 97, 98, 130, 150, 151, 152, 156], "min_latitud": [96, 97], "min_longitud": [96, 97], "min_mag": 154, "min_magnitud": [7, 96, 97, 151, 152, 154], "min_mw": [1, 7, 151, 157], "mincdi": 105, "mindepth": 105, "minfelt": 105, "mingap": 105, "miniconda": 6, "miniforg": 6, "minimum": [4, 96, 97, 105, 154], "minlatitud": 105, "minlongitud": 105, "minmagnitud": 105, "minsig": 105, "minut": [150, 151, 152, 154, 155, 156, 157], "mise": 7, "miss": 2, "mix": 0, "mock_forecast_1": 1, "mock_forecast_2": 1, "mockcatalog": 1, "mockforecast": 1, "mode": [6, 14], "model": [0, 1, 2, 4, 5, 7, 8, 13, 47, 74, 78, 120, 133, 147, 148, 152, 154], "model_1": 81, "model_2": 81, "moder": 7, "modif": [7, 75, 114], "modifi": [1, 4, 7, 100, 105, 112, 115, 119, 122], "modul": [0, 4, 9, 71], "molchan": 152, "moment": 105, "monoton": [24, 89, 99, 123], "month": 7, "monthli": 4, "more": [0, 1, 2, 4, 5, 7, 60, 92, 93, 105, 115, 117, 119, 122, 151, 152, 155], "most": [0, 1, 4, 5, 6, 7, 63, 78, 150, 151, 152, 155, 156, 157], "move": 9, "mu": 7, "much": [0, 1], "mult": 4, "multi": 2, "multipl": [1, 2, 6, 7, 34, 87, 107, 114, 150, 151], "multipli": [7, 157], "must": [0, 1, 4, 5, 6, 7, 22, 41, 47, 65, 83, 88, 99, 118, 123, 154], "mutual": 104, "mw": [150, 151, 152, 156], "mw_bin": 154, "mw_ind": 154, "my": 1, "my_catalog_fil": 0, "my_catalog_path": 0, "my_custom_catalog": 0, "my_custom_loader_funct": 0, "my_custom_web_load": 0, "my_dummy_id": 0, "n": [78, 107, 119, 147], "n1": 81, "n2": 81, "n_": [7, 156], "n_cat": [47, 157], "n_fore": [74, 132], "n_fore1": 1, "n_fore2": 1, "n_j": 7, "n_ob": 81, "n_u": 7, "n_x": 7, "naiv": 137, "name": [1, 2, 4, 5, 7, 16, 17, 44, 46, 47, 66, 82, 83, 86, 88, 95, 107, 109, 115, 117, 119, 120, 122, 150, 151, 152, 154, 155, 156], "napl": 8, "nativ": [0, 47, 92, 93, 95], "natur": 132, "nd": [1, 79], "ndarrai": [0, 1, 2, 4, 16, 17, 37, 39, 40, 46, 57, 59, 60, 65, 72, 75, 76, 77, 80, 89, 124], "ndarray_of_target_event_r": 1, "ndk": [0, 92], "ne": 7, "nearest": [102, 156], "necessari": [1, 5, 6, 7, 8, 98], "necessarili": [7, 135], "need": [0, 1, 2, 4, 5, 7, 10, 23, 59, 92, 94, 98, 147, 150, 151, 152, 154, 156, 157], "network": 0, "nevada": 8, "new": [0, 4, 7, 20, 21, 87, 90, 98, 117, 147, 149, 150], "new_cat": 22, "new_region": 90, "newli": 147, "next": 0, "nj": 7, "nmax": [4, 156], "nn00458749": 105, "node": [61, 62], "non": [0, 4, 7, 105, 147], "none": [1, 4, 16, 20, 21, 24, 37, 38, 40, 47, 55, 56, 65, 66, 70, 75, 76, 77, 80, 82, 83, 86, 88, 92, 93, 94, 99, 104, 105, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 129, 130, 150, 151, 152, 156], "nonetheless": 0, "nonetyp": [44, 55], "normal": [1, 7, 75, 120, 152], "normalis": 7, "north": [4, 107], "note": [0, 2, 6, 7, 63, 75, 76, 77, 80, 105, 109, 120, 132, 150, 152, 155, 156], "notebook": [7, 150, 151, 152, 154, 155, 156, 157], "notic": [0, 1], "novemb": 149, "now": [0, 1, 4, 6, 7, 156], "nsim": 7, "ntest_result": 156, "null": [7, 81], "num_magnitude_bin": 57, "num_nod": 156, "num_radii": 85, "num_simul": [7, 75, 76, 77, 80], "num_spatial_bin": 57, "number": [0, 1, 2, 4, 5, 10, 13, 16, 17, 19, 26, 46, 47, 75, 76, 77, 78, 79, 80, 
81, 85, 105, 131, 132, 150, 157], "number_test": [7, 151, 156], "number_test_multi_res_result": 156, "number_test_result": [7, 151], "number_test_single_res_result": 156, "numer": 6, "numpi": [0, 1, 2, 4, 5, 6, 16, 17, 26, 31, 37, 40, 46, 57, 59, 75, 76, 77, 78, 80, 81, 89, 111, 114, 121, 125, 127, 128, 135, 150, 154, 156, 157], "nx1": 156, "ob": 7, "object": [0, 1, 2, 4, 5, 7, 11, 27, 47, 59, 66, 71, 90, 96, 97, 104, 105, 107, 109, 112, 114, 115, 117, 119, 120, 121, 122, 136, 137, 140, 142, 144, 151, 152, 154, 155, 156, 157], "obs_count": 126, "obs_nam": 1, "observ": [0, 1, 5, 7, 11, 13, 49, 75, 76, 77, 78, 79, 80, 81, 111, 114, 120, 126, 133, 147, 151, 154, 156], "observed_catalog": [1, 12, 13, 14, 15, 75, 76, 77, 78, 79, 80, 81, 93, 95], "observed_statist": 1, "obspi": 5, "obtain": [0, 5, 7, 150, 152], "occur": [0, 1, 2, 4, 7, 132, 152], "occurr": [1, 2, 147], "off": [90, 105], "offer": [2, 4], "offici": 8, "offset": 105, "often": 20, "old": 87, "omega": 7, "omega_": 7, "omega_1": 7, "omega_2": 7, "omega_n": 7, "onc": [0, 1, 4, 5, 7], "one": [1, 2, 4, 5, 7, 47, 74, 114, 120, 152], "one_sided_low": [7, 120, 154], "onli": [0, 1, 2, 4, 5, 6, 7, 44, 47, 78, 81, 82, 105, 114, 156], "onlin": 0, "onto": [1, 38, 82], "open": [0, 8], "oper": [0, 1, 2, 5, 7, 16, 17, 46, 47, 107, 135, 149, 152, 153], "optim": [1, 4, 151], "option": [0, 4, 9, 38, 40, 65, 70, 94, 95, 115, 117, 119, 120, 122, 152, 154, 156], "orang": 105, "order": [0, 1, 7, 23, 66, 105, 118, 124, 134, 152, 154], "orderbi": 105, "org": [6, 7, 140], "orient": [2, 79], "origin": [0, 4, 5, 7, 41, 83, 87, 105, 147, 151], "origin_tim": [0, 2, 7, 150, 151, 156], "origini": 7, "other": [0, 1, 4, 5, 7, 9, 20, 44, 47, 93], "otherwis": [120, 154], "our": 7, "out": [0, 1, 7, 8, 39, 40, 74, 78, 81], "outcom": 5, "output": [5, 7, 9, 40, 55, 88, 89, 152, 156], "outsid": [7, 13, 21, 49, 60, 106, 152], "over": [0, 1, 6, 7, 72, 73, 147, 151], "overal": 4, "overdispers": 7, "overhead": 98, "overpredict": 7, "overrid": [47, 75, 76, 77, 80], "overridden": 41, "overview": [150, 151, 152, 154, 156, 157], "overwritten": 44, "own": [1, 4, 16], "p": [7, 99, 123, 125, 127, 128], "pablo": 8, "pacakag": 6, "packag": [0, 1, 2, 5, 6, 7, 8, 9, 104, 105, 150, 151, 152, 155, 156, 157], "page": [0, 1, 7, 147, 149, 152], "pager": 105, "pair": [1, 7], "paired_t_test": [1, 7], "panda": [6, 22, 41, 111, 156], "paramet": [2, 5, 7, 13, 14, 15, 16, 17, 18, 20, 21, 22, 24, 34, 37, 38, 40, 44, 45, 46, 47, 49, 50, 55, 56, 59, 60, 63, 65, 66, 70, 71, 72, 74, 75, 76, 77, 78, 79, 80, 81, 83, 85, 86, 87, 88, 90, 92, 93, 94, 95, 96, 97, 99, 104, 105, 107, 109, 111, 112, 114, 115, 117, 118, 119, 120, 121, 122, 124, 125, 127, 128, 131, 132, 133, 135, 137, 140, 142, 144, 150, 151, 152], "parameter": 7, "parametr": [1, 5], "pars": [0, 2, 59, 104, 105], "part": 57, "particip": 8, "particular": [1, 7, 47, 81], "particularli": [7, 8, 75, 76, 77, 80], "pass": [0, 1, 2, 7, 34, 41, 59, 85, 93, 94, 151, 154, 157], "path": [0, 45, 47, 50, 94], "pathnam": 93, "pauk": 8, "per": [1, 4, 7, 114, 154, 156], "percentil": [114, 115, 117, 119, 122], "perform": [1, 5, 7, 11, 12, 13, 14, 15, 47, 75, 76, 77, 79, 80, 111, 135, 150, 152], "perhap": 3, "period": [4, 5, 7, 75, 76, 77, 80, 152, 154, 155], "perl": 0, "permiss": 10, "perspect": 7, "philip": 8, "pl": 0, "place": [2, 47, 99], "plan": [2, 4, 6, 9], "planc": 4, "planet": 7, "plate": [38, 70], "platecarre": [109, 121], "platform": 6, "pleas": [0, 1, 2, 8, 74], "plot": [0, 7, 26, 47, 149, 153], "plot_arg": [7, 38, 70, 108, 109, 110, 111, 112, 113, 114, 
115, 116, 117, 118, 119, 120, 121, 122, 152, 154, 156, 157], "plot_catalog": 154, "plot_comparison_test": 7, "plot_concentration_roc_diagram": 152, "plot_custom": 154, "plot_gridded_forecast": 155, "plot_molchan_diagram": 152, "plot_poisson_consistency_test": [7, 152, 154, 156], "plot_roc_diagram": 152, "plot_spatial_dataset": [109, 154], "plt": [0, 152, 154], "pm": 7, "png": 150, "point": [4, 5, 7, 23, 75, 76, 77, 78, 80, 81, 82, 87, 99, 105], "poisson": [1, 5, 7, 131, 133], "poisson_evalu": [7, 152, 156], "poissonian": [7, 78, 152, 156], "polygon": [0, 4, 39, 40, 82, 90], "poor": 7, "poorli": 7, "popul": [7, 44], "posit": [78, 79, 81, 98, 109, 144, 152], "possibl": [4, 10, 107, 109, 121, 147, 154], "post": 8, "potenti": [0, 4, 7], "potsdam": 8, "power": [1, 7], "pr": 7, "pract": 7, "practic": [4, 7], "pre": [0, 1], "predict": [7, 8, 152], "prefer": [6, 7, 105], "prepar": 0, "present": [2, 147], "preserv": 21, "previou": [4, 5, 38], "previous": [107, 109], "prewritten": 5, "prim": 1, "primari": [16, 17, 46], "primarili": 26, "prime": 107, "principl": 7, "print": [0, 7, 96, 97, 150, 151, 152, 156], "probabalist": 147, "probabilist": [5, 8, 147], "probabl": [7, 78, 127, 128], "probe": 1, "problem": 1, "procedur": [7, 8], "process": [0, 1, 4, 6, 7, 8, 20, 151, 157], "prod_": 7, "produc": [2, 3, 5, 147], "product": [7, 104, 105], "productcod": 105, "producttyp": 105, "program": [2, 152], "proj": 107, "project": [0, 5, 10, 38, 70, 107, 109, 121, 154], "promot": 8, "properli": [0, 1, 7], "properti": [4, 7, 19, 52, 53, 57, 58, 68, 69, 151, 157], "prospect": 7, "prototyp": 1, "provid": [0, 1, 2, 3, 4, 5, 7, 8, 9, 10, 16, 24, 47, 65, 82, 88, 93, 94, 107, 150, 151, 152, 155, 156, 157], "pseduo": 7, "pseudolikelihood": [1, 7, 14], "pseudolikelihood_test": 7, "pseudolikelihood_test_result": 7, "pseudoprospect": [7, 148], "public": [7, 106], "publish": 10, "pull": [0, 6, 10], "pure": 7, "purpos": [0, 7, 26, 66, 156], "push": 10, "put": 1, "py": [10, 150, 151, 152, 153, 154, 155, 156, 157], "pycsep": [1, 2, 3, 4, 5, 7, 9, 16, 17, 149, 150, 152, 155, 156, 157], "pydata": 6, "pypi": [6, 10], "pyplot": [0, 38, 107, 109, 115, 117, 119, 120, 121, 122, 152, 154], "pytest": 6, "python": [1, 6, 8, 10, 104, 105, 137, 140, 144, 150, 151, 152, 154, 155, 156, 157], "quad": 4, "quadk": 4, "quadkei": 2, "quadtre": 153, "quadtree_gridded_forecast_evalu": 156, "quadtreegrid2d": [4, 156], "quadtreegrid2d_from_single_resolut": 156, "quadtreegrid_from_catalog": 156, "qualiti": 8, "quantil": [1, 7, 11, 75, 76, 77, 80, 126], "quantit": 8, "queri": [82, 96, 97, 152], "query_bsi": 150, "query_comcat": [0, 7, 150, 151, 152, 154], "question": 8, "quick": 0, "quicker": 151, "quiet": 4, "quit": 7, "r": [7, 148, 154], "r_d": 7, "r_multi": 156, "r_singl": 156, "radii": 85, "radiu": [85, 105, 150], "rainbow": 154, "rais": [41, 60, 63, 65, 71, 83, 88, 99], "ran": 150, "random": [1, 7, 75, 76, 77, 80, 131], "random_matrix": 131, "random_numb": [75, 76, 77, 80], "rang": [1, 2, 5, 7, 98, 121], "rank": [1, 7, 81], "raster": 107, "rate": [1, 2, 5, 7, 14, 49, 65, 66, 74, 75, 76, 77, 79, 80, 132, 147, 148, 151, 152, 154, 157], "rate_sum": 154, "rates_mw": 154, "rather": 7, "ratio": [7, 107], "re": [7, 156], "reach": 4, "read": [0, 2, 4, 5, 23, 66, 91, 152, 156], "read_csv": 156, "readabl": [2, 16, 17, 46, 88], "reader": [0, 7], "readi": 10, "realiz": [2, 7], "realli": 1, "reason": 0, "receiv": [149, 152], "recip": 6, "recommend": [0, 6, 7, 121, 154], "record": [0, 7], "rectangular": [0, 2, 4], "red": [7, 105, 120, 154], "reduc": [0, 2, 9, 
87, 151], "ref": 154, "refer": [0, 4], "reffer": 4, "reflect": 7, "regard": 7, "region": [0, 1, 2, 5, 7, 16, 17, 21, 23, 24, 39, 40, 46, 47, 57, 59, 60, 66, 94, 107, 109, 115, 117, 119, 121, 122, 152, 154], "region_bord": [109, 121], "regular": [4, 5], "regularli": [5, 66], "reject": [7, 154], "rel": [7, 152, 155, 156], "relat": [7, 39, 40], "relationship": [85, 150], "releas": [0, 7, 149], "reliabl": [1, 4, 152], "relm": [2, 4, 7, 83, 157], "remain": 6, "remot": 6, "remov": [7, 21, 115, 117, 119, 120, 122, 152], "renam": 156, "reno": 8, "replac": [104, 105], "repo": 6, "reporsitori": 156, "report": [8, 105], "repositori": [0, 6, 10, 152, 155, 156], "repres": [0, 2, 4, 5, 7, 56, 72, 82, 83, 88, 89, 147, 152], "represent": [0, 2, 5, 23, 27, 114, 139, 142, 147], "reproduc": [7, 75, 76, 77, 80], "repsect": 7, "repurpos": 139, "request": [0, 4, 6, 10], "requir": [0, 1, 2, 4, 5, 6, 7, 8, 10, 41, 44, 47, 87, 94, 154], "requri": 156, "resampl": [83, 88], "rescal": [152, 155, 156], "research": [7, 8, 148], "reset_index": 156, "reset_tim": 118, "reshap": 156, "residu": 7, "resolut": [2, 7, 87, 148], "respect": [7, 47, 81, 154], "respons": [1, 105], "restrict": 0, "result": [1, 7, 8, 11, 13, 104, 105, 110, 112, 115, 117, 119, 120, 122], "results_t": 110, "results_w": 110, "ret_ind": 65, "retain": [0, 1, 2, 7], "retbin": 37, "retrospect": [7, 8], "return": [0, 1, 2, 4, 7, 13, 15, 18, 20, 21, 22, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 37, 38, 39, 40, 41, 42, 44, 48, 49, 50, 51, 52, 53, 54, 55, 58, 59, 60, 61, 62, 63, 64, 65, 67, 69, 70, 72, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 99, 100, 101, 102, 103, 104, 105, 107, 109, 111, 112, 114, 115, 117, 118, 119, 120, 121, 122, 123, 124, 125, 127, 128, 129, 130, 131, 132, 133, 135, 140, 142, 143, 144, 145, 146, 155], "return_error": 24, "review": 105, "reviewstatu": 105, "rhoad": [1, 7, 74, 148], "ridgecrest": [0, 7, 148, 150], "right": [0, 63, 154], "right_continu": [99, 100], "rightarrow": 7, "robinson": 154, "roc": 149, "romania": 7, "root": [152, 155, 156], "routin": [0, 1, 5, 44, 99], "rovida": 7, "row": [2, 4, 66, 156], "rule": [5, 7], "run": [7, 8, 20, 150, 151, 152, 154, 155, 156, 157], "runtimeerror": 65, "ruptur": [4, 7], "s00024": 7, "s256": 0, "s_": 7, "s_j": 7, "s_x": 7, "safe": 105, "said": 7, "same": [0, 1, 2, 4, 5, 7, 34, 65, 74, 75, 76, 77, 80, 81, 111, 120, 135, 151, 152, 157], "sampl": [2, 7, 81, 131, 147, 157], "sandri": 24, "satellit": [107, 109, 121], "satisfi": 125, "save": [0, 2, 4, 10, 45, 109, 114], "savran": [1, 7, 8, 14, 148], "scalar": [114, 124], "scale": [1, 71, 74, 75, 76, 77, 79, 80, 81, 85, 86, 87, 109, 114, 150, 154, 156], "scatter": 109, "sceccod": 6, "schorlemm": [1, 7, 74, 148], "schorlemmn": 8, "scienc": [7, 8], "scientif": [6, 8], "scipi": [6, 131, 133, 135], "scitool": 6, "score": [7, 11, 75, 76, 77, 79, 80, 126, 149], "script": [0, 6, 150, 151, 152, 154, 155, 156, 157], "sdist": 10, "search": 104, "second": [1, 7, 33, 114, 144, 150, 151, 152, 154, 155, 156, 157], "section": [1, 4, 5, 7, 149], "see": [0, 1, 2, 7, 59, 60, 71, 74, 92, 93, 95, 107, 115, 117, 119, 120, 122, 140, 152, 154], "seed": [7, 75, 76, 77, 80], "seem": 4, "seen": 0, "segment": 105, "sei": 7, "seismic": [0, 2, 4, 5, 7, 78, 81, 133, 147, 152, 156], "seismol": 7, "seismolog": [7, 148], "select": [63, 95, 104, 120, 156], "self": [1, 20, 21, 25, 41], "sensit": 7, "separ": [2, 5, 44, 109, 150, 151, 156], "sequenc": [1, 2, 7, 147, 148], "sequenti": 2, "serial": [16, 17, 23, 42, 46], "serializ": 0, 
"servic": [96, 97], "set": [0, 1, 4, 5, 7, 8, 9, 16, 17, 21, 24, 46, 47, 74, 78, 87, 95, 99, 107, 114, 115, 117, 119, 120, 122, 124, 127, 128, 134, 151, 152, 154, 155, 156, 157], "set_glob": [38, 70, 107, 109, 121, 154], "setup": 10, "sever": 3, "shade": [115, 117, 119, 122], "shakemap": 105, "shall": 7, "share": 5, "shaw": 7, "short": [2, 7, 44, 88, 148], "shoudl": 49, "should": [0, 1, 2, 4, 5, 6, 7, 13, 16, 17, 20, 21, 44, 46, 47, 49, 74, 75, 76, 77, 79, 80, 92, 94, 104, 114, 120, 125, 127, 137, 151, 152, 154, 155, 156, 157], "show": [0, 1, 2, 6, 7, 37, 38, 70, 105, 107, 108, 109, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 151, 152, 154, 155, 157], "shown": [0, 2, 7, 150], "side": [7, 47, 120], "sign": [1, 7, 81], "signific": [7, 105], "significantli": 7, "silva": 8, "sim": 7, "sim_count": 126, "sim_nam": 1, "similar": [5, 7], "similari": 4, "similarli": [2, 5, 7], "simpl": [0, 1, 4, 5, 7, 84, 93, 151], "simplest": [0, 2, 151], "simpli": [4, 7, 98, 152, 156], "simplic": 156, "simul": [1, 2, 5, 7, 20, 75, 76, 77, 80, 114, 120, 152, 154, 156], "sinc": [7, 114, 156], "singl": [1, 2, 7, 34, 48, 79, 81, 92, 114], "sismico": 150, "size": [1, 5, 7, 107, 109, 151, 154, 156], "skill": 152, "skip": 49, "small": [7, 105], "smaller": [0, 4, 105], "smallest": 53, "smirnov": 11, "smooth": [2, 7, 152], "snippet": 0, "so": [0, 1, 5, 7, 20, 25, 66, 75, 151, 154, 155, 156, 157], "soc": 7, "societi": [7, 148], "softwar": [1, 8], "solut": 4, "some": [0, 1, 2, 4, 5, 7, 21, 23, 98, 104, 105, 151, 154, 157], "some_forecast_data1": 1, "some_forecast_data2": 1, "someth": [0, 1, 7], "sometim": 156, "sort": [2, 33, 123, 127, 152], "sound": 1, "sourc": [0, 1, 2, 5, 11, 12, 13, 14, 15, 16, 17, 25, 34, 35, 46, 47, 48, 49, 50, 51, 54, 55, 56, 59, 65, 66, 70, 71, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 150, 151, 152, 154, 155, 156, 157], "south": 4, "southern": [7, 8, 148], "space": [1, 2, 4, 5, 7, 14, 40, 47, 49, 66, 82, 84, 87, 88, 89, 115, 117, 119, 120, 122, 147, 148, 150, 151, 157], "space_magnitude_region": [7, 151], "sparingli": 21, "spatial": [0, 1, 2, 5, 14, 15, 16, 17, 21, 39, 40, 46, 47, 54, 57, 60, 61, 62, 66, 72, 80, 83, 85, 88, 92, 121, 122, 147, 149], "spatial_test": [1, 7, 152, 156], "spatial_test_multi_res_result": 156, "spatial_test_result": [7, 152], "spatial_test_single_res_result": 156, "spatil": 7, "spatio": [7, 57], "spatiotempor": 7, "speak": 4, "specfic": 2, "specif": [2, 4, 5, 71, 104, 105, 112, 115, 117, 119, 152, 155], "specifi": [0, 2, 5, 7, 101, 103, 104, 105, 150, 152, 155, 156], "sphinx": [6, 150, 151, 152, 153, 154, 155, 156, 157], "sphinx_gallery_thumbnail_path": 150, "sphx_glr_tutorials_catalog_filt": 153, "sq": 124, "sqrt": 7, "squar": 7, "stage": 7, "stamen_terrain": [107, 109, 121], "standard": [0, 2, 5, 16, 17, 46, 107, 150, 152, 156], "start": [1, 4, 7, 47, 96, 97, 105, 150, 152, 155, 156], "start_dat": [7, 66, 71, 152, 155], "start_epoch": [7, 150, 151], "start_magnitud": 89, "start_tim": [7, 47, 56, 96, 97, 150, 151, 152, 154], "starttim": 105, "stat": 152, "state": [4, 21, 23, 157], "statement": [0, 5, 20, 123, 147], "static": 1, "statist": [0, 1, 5, 7, 8, 14, 16, 17, 21, 43, 46, 79, 96, 97, 111, 112, 114, 115, 117, 119, 135], "statu": [1, 105, 152, 156], "status": 
105, "std": 24, "step": [4, 7, 8, 10, 152], "stest_result": 156, "still": [2, 7, 92, 151], "stochast": [1, 5, 9, 16, 17, 46, 47, 95, 114], "stochastic_event_set": 111, "stock_img": [107, 109, 121], "storag": [2, 5], "store": [0, 1, 2, 4, 5, 16, 17, 34, 35, 46, 47, 57, 79, 150, 154, 155, 156, 157], "str": [0, 16, 17, 20, 34, 44, 45, 46, 47, 50, 55, 88, 92, 93, 94, 95, 104, 105, 107, 109, 114, 115, 117, 119, 120, 121, 122, 142], "straightforward": 7, "strategi": 4, "stress": 7, "strictli": 2, "string": [0, 16, 17, 44, 46, 47, 115, 117, 119, 120, 122, 142, 143, 150, 151], "strong": 75, "strptime_to_utc_datetim": [7, 150, 151, 152, 154, 155], "strptime_to_utc_epoch": 150, "struct": 5, "structur": [0, 4, 5, 16, 17, 46, 104, 105, 150], "struggl": 0, "stub": [0, 1], "stucci": 7, "student": 7, "studi": [7, 8], "style": [5, 21, 106], "sub": [2, 4], "subpackag": [150, 151, 152, 155, 156, 157], "subplot": 0, "subt": 7, "suggest": [0, 5, 7, 8], "suit": 1, "suitabl": [0, 4], "sum": [1, 2, 7, 58, 81, 154, 157], "sum_": 7, "sum_m": 7, "summari": [43, 96, 97], "summaryev": 105, "sup": 135, "supersed": 104, "suppli": [0, 1, 7, 75, 76, 77, 80], "support": [0, 1, 2, 4, 5, 8, 47, 105, 147], "supremum": 134, "sure": [1, 7], "survei": 0, "swap_latlon": 66, "symbol": 7, "symmetr": [7, 81], "sync": 6, "synthet": [1, 2, 5, 7, 46, 147], "system": 6, "t": [0, 1, 6, 7, 79, 110], "t_": 7, "t_critic": 1, "t_k": 7, "t_statist": 1, "t_test": 7, "tab": 2, "tabl": [57, 152], "tabular": 0, "tail": [7, 154], "take": [1, 7, 20, 21, 87, 112, 115, 117, 119], "taken": [7, 18, 135], "tar": 10, "target": [1, 7, 74, 132], "target_catalog": 74, "target_event_log_r": 132, "target_event_r": 1, "target_event_rate_forecast1": 1, "target_event_rate_forecast2": 1, "target_observ": 132, "taroni": 7, "task": [0, 4], "tecton": 147, "tell": 7, "templat": 91, "tempor": 7, "ten": 7, "tensor": 105, "term": [7, 8, 78, 81, 148, 152], "termin": 152, "test": [2, 5, 8, 9, 11, 12, 13, 14, 15, 21, 47, 66, 74, 75, 76, 77, 78, 79, 80, 81, 83, 88, 110, 112, 115, 117, 119, 120, 122, 135, 139, 148, 149, 154, 157], "test_dat": 139, "test_datetim": 71, "test_distribut": 1, "text": [5, 7], "text_fonts": [115, 119, 122], "th": 7, "than": [0, 1, 4, 5, 7, 78, 79, 81, 101, 103, 105, 150, 156], "thei": [0, 1, 2, 4, 5, 47, 104, 105, 151, 152, 155], "them": [0, 1, 2, 5, 7, 10, 151, 157], "theme_6": 7, "theorem": 7, "therebi": 5, "therefor": [0, 1, 5, 7, 152, 156], "thereof": 7, "theses": 7, "thi": [0, 1, 2, 4, 5, 6, 7, 8, 9, 10, 13, 16, 20, 21, 23, 25, 34, 37, 38, 41, 44, 47, 49, 57, 65, 70, 71, 74, 75, 76, 77, 79, 80, 81, 82, 83, 87, 88, 89, 93, 95, 98, 100, 104, 105, 109, 114, 115, 117, 118, 119, 120, 122, 124, 125, 134, 135, 140, 149, 150, 151, 152, 154, 155, 156, 157], "thick": 2, "third": 7, "thoma": 8, "those": [0, 1, 2, 4, 5, 8, 24], "though": [4, 7, 98, 151], "three": [2, 4, 5], "threshold": [4, 156], "throttl": 105, "through": [0, 1, 2, 5, 7, 47, 57, 95, 96, 97, 147, 151, 157], "throughout": 147, "throw": 94, "thu": [2, 4, 5, 7], "tick": [109, 154], "tight_layout": [115, 117, 119, 120, 122, 154], "tile": [4, 107, 109, 121], "tile_sc": [107, 109, 121], "time": [1, 2, 4, 5, 7, 16, 17, 18, 20, 21, 29, 33, 46, 47, 71, 74, 75, 76, 77, 80, 92, 105, 118, 141, 142, 143, 146, 148, 152, 154, 155, 156, 157], "time_in_year": 144, "time_str": [44, 142, 143], "time_util": [7, 71, 150, 151, 152, 154, 155, 156], "timedelta": 144, "timestamp": [27, 29], "timezon": [71, 140, 142], "titl": [7, 109, 115, 117, 119, 120, 121, 122, 154], "title_fonts": [115, 117, 119, 120, 122, 
154], "title_s": [109, 121], "to_datafram": 0, "to_dict": 0, "todo": [105, 114], "togeth": 135, "tol": [37, 40, 63, 99], "toler": [1, 40, 79], "too": [7, 106], "tool": [0, 1, 2, 4, 6, 7, 10], "toolkit": [0, 1, 4], "top": [0, 10, 150, 151, 152, 155, 156, 157], "total": [2, 7, 78, 150, 151, 152, 154, 155, 156, 157], "total_ev": 157, "toward": 7, "track": 71, "trajectori": 152, "transform": [0, 107, 109], "transpar": [8, 75, 76, 77, 80, 109, 154], "treat": [4, 5, 63, 88, 151, 157], "tree": 4, "tri": [24, 40], "true": [0, 1, 7, 12, 13, 14, 15, 16, 20, 21, 24, 37, 38, 44, 47, 55, 66, 70, 72, 74, 79, 83, 88, 92, 96, 97, 99, 107, 109, 112, 115, 116, 117, 119, 120, 121, 122, 151, 152, 154, 155, 157], "try": [2, 20], "tupl": [0, 1, 2, 23, 49, 59, 74, 78, 87, 91, 107, 109, 115, 117, 118, 119, 120, 121, 122, 123, 127], "turn": 105, "tutori": [0, 2], "twine": 10, "two": [0, 1, 2, 3, 4, 5, 7, 81, 83, 88, 124, 134, 135, 156], "type": [0, 1, 2, 4, 5, 7, 13, 20, 24, 26, 31, 37, 38, 40, 41, 59, 60, 63, 70, 74, 75, 76, 77, 78, 79, 80, 86, 87, 90, 92, 93, 95, 99, 105, 109, 111, 112, 114, 115, 117, 118, 119, 120, 122, 123, 124, 127, 128, 135, 142, 147], "typeerror": 71, "typic": [2, 5, 7, 21, 34], "tzawar": 136, "u": [0, 5, 7, 66, 98], "ubuntu": 6, "ucerf3": [0, 5, 7, 20, 46, 92, 93, 95, 148], "ucerf3_ascii_format_landers_fnam": [7, 151, 157], "ucerf3catalog": 5, "uk": 6, "unawar": 136, "uncertainti": [1, 2, 7, 147, 152, 156], "undefin": 107, "under": [2, 7, 157], "understand": [4, 5, 7, 94], "uniform": [7, 11, 152], "union": 7, "uniqu": [0, 4], "unit": [2, 5], "uniti": [7, 71], "univers": 8, "unix": 29, "unknown": 107, "unlik": 2, "unnorm": 40, "unqiu": 123, "until": [2, 4, 10, 151], "unwant": 0, "up": [5, 6, 7, 57, 107], "updat": [10, 21, 105, 147, 149, 151], "update_stat": 21, "updatedaft": 105, "upgrad": 6, "upload": 10, "upper": [2, 7, 99], "upper_bound": 154, "upstream": 6, "us": [0, 1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 14, 16, 17, 21, 24, 26, 37, 40, 44, 46, 47, 57, 65, 66, 71, 75, 76, 77, 80, 82, 83, 85, 86, 88, 93, 94, 95, 98, 105, 107, 111, 112, 114, 115, 117, 119, 120, 122, 147, 150, 151, 152, 154, 155, 156, 157], "usag": [104, 105], "use_midpoint": [83, 88], "user": [0, 4, 6, 8], "usg": [9, 104, 105, 150], "usual": [0, 5, 104, 105, 154], "utc": [29, 71, 136, 140, 142], "utc_now_datetim": 150, "utf": 5, "util": [0, 5, 7, 21, 37, 47, 60, 71, 85, 86, 147, 150, 151, 152, 154, 155, 156, 157], "v": 152, "val": [102, 123, 127, 128], "valid": [4, 7, 82, 151, 152], "valu": [0, 1, 2, 4, 7, 24, 44, 65, 66, 79, 81, 88, 91, 98, 100, 101, 102, 103, 105, 107, 109, 114, 121, 125, 127, 128, 129, 130, 133, 135, 150, 154, 157], "valueerror": [41, 60, 63, 83, 88, 99], "van": 7, "var": [112, 115, 117, 119, 122], "vari": 92, "variabl": [1, 44, 46, 152, 156], "varianc": 7, "variat": 7, "ve": 151, "vector": [0, 5, 125, 131], "venv": 6, "verbos": [2, 7, 12, 13, 14, 15, 49, 75, 76, 77, 80, 96, 97, 105, 152, 154, 156, 157], "vere": 7, "veri": [1, 7, 47], "verifi": [0, 1], "versa": 4, "version": [1, 7, 10, 72, 114, 154], "versu": 118, "vertic": [7, 114], "vet": 8, "via": 0, "vice": 4, "viridi": 121, "virtual": 6, "visit": [1, 2, 5, 6], "visual": [5, 8, 118, 151, 152, 154, 156], "von": 7, "voronoi": 4, "vrancea": 7, "w": [7, 110, 132, 148], "w_test": 7, "wa": [2, 7, 16, 17, 46, 49, 147, 151], "wai": [0, 2, 4, 6, 7, 47, 152, 157], "walk": 1, "walkthrough": 1, "want": [0, 1, 4, 6, 7], "warn": [7, 10], "wave": 0, "we": [0, 1, 2, 4, 5, 6, 7, 8, 9, 39, 40, 66, 71, 78, 104, 105, 150, 151, 152, 154, 155, 156, 157], "weaker": 7, "web": 
[0, 96, 97, 104, 105, 150], "webservic": [107, 109, 121, 154], "welcom": [4, 8, 9], "well": [4, 85, 150], "were": [7, 49, 157], "werner": [1, 7, 8, 148, 154], "wg": 107, "wgs84": 107, "what": [1, 7], "whatev": [1, 22], "wheel": 10, "when": [0, 1, 4, 6, 7, 10, 16, 17, 46, 94, 99, 105, 152], "where": [0, 2, 7, 40, 81, 91, 95, 99, 114, 132, 154], "wherea": 154, "whether": [0, 1, 2, 4, 7, 16, 17, 46, 81, 92], "which": [0, 1, 4, 5, 6, 7, 38, 78, 104, 105, 121, 151, 154, 156, 157], "while": [5, 7], "who": 7, "whole": [1, 4, 79], "whose": [109, 121], "why": 7, "width": [107, 109, 121], "wiemer": [7, 148], "wilcoxan": 7, "wilcoxon": [1, 81], "william": 8, "wise": [151, 157], "wish": 7, "with_datetim": [0, 41], "within": [1, 2, 5, 7, 8, 37, 39, 40, 99, 100, 105, 147, 154], "without": [0, 1, 2, 7, 74, 111], "work": [0, 1, 4, 5, 6, 7, 8, 9, 44, 47, 98, 118, 153], "working_with_catalog_forecast": 157, "wors": 7, "would": [0, 1, 2, 4, 5, 114, 154], "wrapper": [84, 93, 104, 105, 107, 131, 133], "write": [1, 2, 7, 44, 45, 55], "write_ascii": 150, "write_empti": 44, "write_head": 44, "write_json": 152, "written": [7, 46], "www": 7, "x": [2, 4, 5, 7, 10, 88, 102, 107, 113, 115, 117, 119, 120, 122, 123, 125, 127, 128, 129, 130, 152, 157], "x1": 81, "x2": 81, "x_i": 7, "x_new": 100, "xi": 81, "xlabel": [7, 115, 117, 119, 120, 122, 152, 154, 156], "xlabel_fonts": [115, 117, 119, 120, 122], "xml": [83, 88, 91, 94], "xml_filenam": 91, "xticklabels_rot": 7, "xticks_fonts": [115, 117, 119, 120, 122, 154], "xv": 113, "xy": [7, 115, 117, 119, 122], "y": [4, 7, 10, 44, 102, 107, 125, 142, 143, 148], "y_i": 7, "yani": 152, "year": [1, 7, 20, 71, 139, 144, 151, 154, 156], "yellow": 105, "yet": [0, 2, 94], "yi": 81, "yield": 34, "ylabel": 7, "ylabel_fonts": [115, 117, 119, 120, 122, 154], "yml": 6, "you": [0, 1, 2, 4, 5, 6, 7, 8, 10, 41, 95, 100, 105, 152, 155, 156], "your": [1, 4, 5, 6, 105, 152, 155, 156], "your_github_usernam": 6, "yr": 7, "yu": 7, "z": [7, 10], "z_0": 66, "z_1": 66, "zechar": [1, 7, 74, 148], "zero": [1, 2, 78, 79, 81, 125], "zeta": 7, "zip": [150, 151, 152, 154, 155, 156, 157], "zlib": 6, "zmap": [0, 41, 92], "zone": 142, "zoom": [4, 107, 109, 121, 156], "\u03b4mw": 2}, "titles": ["Catalogs", "Evaluations", "Forecasts", "Plots", "Regions", "Core Concepts for Beginners", "Installing pyCSEP", "Theory of CSEP Tests", "pyCSEP: Tools for Earthquake Forecast Developers", "API Reference", "Developer Notes", "csep.core.catalog_evaluations.calibration_test", "csep.core.catalog_evaluations.magnitude_test", "csep.core.catalog_evaluations.number_test", "csep.core.catalog_evaluations.pseudolikelihood_test", "csep.core.catalog_evaluations.spatial_test", "csep.core.catalogs.AbstractBaseCatalog", "csep.core.catalogs.CSEPCatalog", "csep.core.catalogs.CSEPCatalog.apply_mct", "csep.core.catalogs.CSEPCatalog.event_count", "csep.core.catalogs.CSEPCatalog.filter", "csep.core.catalogs.CSEPCatalog.filter_spatial", "csep.core.catalogs.CSEPCatalog.from_dataframe", "csep.core.catalogs.CSEPCatalog.from_dict", "csep.core.catalogs.CSEPCatalog.get_bvalue", "csep.core.catalogs.CSEPCatalog.get_csep_format", "csep.core.catalogs.CSEPCatalog.get_cumulative_number_of_events", "csep.core.catalogs.CSEPCatalog.get_datetimes", "csep.core.catalogs.CSEPCatalog.get_depths", "csep.core.catalogs.CSEPCatalog.get_epoch_times", "csep.core.catalogs.CSEPCatalog.get_latitudes", "csep.core.catalogs.CSEPCatalog.get_longitudes", "csep.core.catalogs.CSEPCatalog.get_magnitudes", "csep.core.catalogs.CSEPCatalog.length_in_seconds", 
"csep.core.catalogs.CSEPCatalog.load_ascii_catalogs", "csep.core.catalogs.CSEPCatalog.load_catalog", "csep.core.catalogs.CSEPCatalog.load_json", "csep.core.catalogs.CSEPCatalog.magnitude_counts", "csep.core.catalogs.CSEPCatalog.plot", "csep.core.catalogs.CSEPCatalog.spatial_counts", "csep.core.catalogs.CSEPCatalog.spatial_magnitude_counts", "csep.core.catalogs.CSEPCatalog.to_dataframe", "csep.core.catalogs.CSEPCatalog.to_dict", "csep.core.catalogs.CSEPCatalog.update_catalog_stats", "csep.core.catalogs.CSEPCatalog.write_ascii", "csep.core.catalogs.CSEPCatalog.write_json", "csep.core.catalogs.UCERF3Catalog", "csep.core.forecasts.CatalogForecast", "csep.core.forecasts.CatalogForecast.get_dataframe", "csep.core.forecasts.CatalogForecast.get_expected_rates", "csep.core.forecasts.CatalogForecast.load_ascii", "csep.core.forecasts.CatalogForecast.magnitude_counts", "csep.core.forecasts.CatalogForecast.magnitudes", "csep.core.forecasts.CatalogForecast.min_magnitude", "csep.core.forecasts.CatalogForecast.spatial_counts", "csep.core.forecasts.CatalogForecast.write_ascii", "csep.core.forecasts.GriddedForecast", "csep.core.forecasts.GriddedForecast.data", "csep.core.forecasts.GriddedForecast.event_count", "csep.core.forecasts.GriddedForecast.from_custom", "csep.core.forecasts.GriddedForecast.get_index_of", "csep.core.forecasts.GriddedForecast.get_latitudes", "csep.core.forecasts.GriddedForecast.get_longitudes", "csep.core.forecasts.GriddedForecast.get_magnitude_index", "csep.core.forecasts.GriddedForecast.get_magnitudes", "csep.core.forecasts.GriddedForecast.get_rates", "csep.core.forecasts.GriddedForecast.load_ascii", "csep.core.forecasts.GriddedForecast.magnitude_counts", "csep.core.forecasts.GriddedForecast.magnitudes", "csep.core.forecasts.GriddedForecast.min_magnitude", "csep.core.forecasts.GriddedForecast.plot", "csep.core.forecasts.GriddedForecast.scale_to_test_date", "csep.core.forecasts.GriddedForecast.spatial_counts", "csep.core.forecasts.GriddedForecast.sum", "csep.core.forecasts.GriddedForecast.target_event_rates", "csep.core.poisson_evaluations.conditional_likelihood_test", "csep.core.poisson_evaluations.likelihood_test", "csep.core.poisson_evaluations.magnitude_test", "csep.core.poisson_evaluations.number_test", "csep.core.poisson_evaluations.paired_t_test", "csep.core.poisson_evaluations.spatial_test", "csep.core.poisson_evaluations.w_test", "csep.core.regions.CartesianGrid2D", "csep.core.regions.california_relm_region", "csep.core.regions.create_space_magnitude_region", "csep.core.regions.generate_aftershock_region", "csep.core.regions.global_region", "csep.core.regions.increase_grid_resolution", "csep.core.regions.italy_csep_region", "csep.core.regions.magnitude_bins", "csep.core.regions.masked_region", "csep.core.regions.parse_csep_template", "csep.load_catalog", "csep.load_catalog_forecast", "csep.load_gridded_forecast", "csep.load_stochastic_event_sets", "csep.query_bsi", "csep.query_comcat", "csep.utils.basic_types.AdaptiveHistogram", "csep.utils.calc.bin1d_vec", "csep.utils.calc.discretize", "csep.utils.calc.find_nearest", "csep.utils.calc.func_inverse", "csep.utils.calc.nearest_index", "csep.utils.comcat.get_event_by_id", "csep.utils.comcat.search", "csep.utils.plots.add_labels_for_publication", "csep.utils.plots.plot_basemap", "csep.utils.plots.plot_calibration_test", "csep.utils.plots.plot_catalog", "csep.utils.plots.plot_comparison_test", "csep.utils.plots.plot_cumulative_events_versus_time", "csep.utils.plots.plot_distribution_test", "csep.utils.plots.plot_ecdf", 
"csep.utils.plots.plot_histogram", "csep.utils.plots.plot_likelihood_test", "csep.utils.plots.plot_magnitude_histogram", "csep.utils.plots.plot_magnitude_test", "csep.utils.plots.plot_magnitude_versus_time", "csep.utils.plots.plot_number_test", "csep.utils.plots.plot_poisson_consistency_test", "csep.utils.plots.plot_spatial_dataset", "csep.utils.plots.plot_spatial_test", "csep.utils.stats.binned_ecdf", "csep.utils.stats.cumulative_square_diff", "csep.utils.stats.ecdf", "csep.utils.stats.get_quantiles", "csep.utils.stats.greater_equal_ecdf", "csep.utils.stats.less_equal_ecdf", "csep.utils.stats.max_or_none", "csep.utils.stats.min_or_none", "csep.utils.stats.poisson_inverse_cdf", "csep.utils.stats.poisson_joint_log_likelihood_ndarray", "csep.utils.stats.poisson_log_likelihood", "csep.utils.stats.sup_dist", "csep.utils.stats.sup_dist_na", "csep.utils.time_utils.create_utc_datetime", "csep.utils.time_utils.datetime_to_utc_epoch", "csep.utils.time_utils.days_to_millis", "csep.utils.time_utils.decimal_year", "csep.utils.time_utils.epoch_time_to_utc_datetime", "csep.utils.time_utils.millis_to_days", "csep.utils.time_utils.strptime_to_utc_datetime", "csep.utils.time_utils.strptime_to_utc_epoch", "csep.utils.time_utils.timedelta_from_years", "csep.utils.time_utils.utc_now_datetime", "csep.utils.time_utils.utc_now_epoch", "Terms and Definitions", "Referenced Publications", "Development Roadmap", "Catalogs operations", "Catalog-based Forecast Evaluation", "Grid-based Forecast Evaluation", "Tutorials", "Plot customizations", "Plotting gridded forecast", "Quadtree Grid-based Forecast Evaluation", "Working with catalog-based forecasts"], "titleterms": {"": [7, 152], "0": 149, "1": [10, 154], "2": [10, 154], "3": [10, 154], "4": 154, "6": 149, "about": 8, "abstractbasecatalog": 16, "access": [0, 9], "adaptivehistogram": 98, "add_labels_for_publ": 106, "alreadi": 4, "api": 9, "apply_mct": 18, "argument": [3, 154], "attribut": 0, "avail": 3, "base": [1, 2, 4, 5, 7, 151, 152, 156, 157], "basic": [0, 9], "basic_typ": 98, "beginn": 5, "bin": [0, 154], "bin1d_vec": 99, "binned_ecdf": 123, "build": 1, "calc": [99, 100, 101, 102, 103], "calcul": [9, 152], "calibration_test": 11, "california_relm_region": 83, "cartesian": 4, "cartesiangrid2d": 82, "catalog": [0, 1, 2, 4, 5, 7, 9, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 147, 150, 151, 152, 154, 156, 157], "catalog_evalu": [11, 12, 13, 14, 15], "catalogforecast": [47, 48, 49, 50, 51, 52, 53, 54, 55], "chang": 10, "check": 157, "cl": 7, "class": 1, "code": 10, "comcat": [0, 9, 104, 105, 151], "compar": 1, "comparison": 7, "complet": 0, "comput": [152, 156, 157], "concept": 5, "conda": 6, "conditional_likelihood_test": 75, "consist": [1, 7], "contact": 8, "content": [0, 1, 2, 3, 4, 147], "contribut": 8, "contributor": 8, "convent": 2, "core": [5, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91], "count": 157, "creat": [4, 10], "create_space_magnitude_region": 84, "create_utc_datetim": 136, "csep": [7, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 
65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146], "csepcatalog": [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45], "cumulative_square_diff": 124, "curv": 152, "custom": [0, 2, 154], "data": [57, 157], "datafram": 0, "dataset": 154, "datetime_to_utc_epoch": 137, "days_to_milli": 138, "decimal_year": 139, "default": 2, "defin": [151, 152, 155, 156, 157], "definit": 147, "depend": [0, 147], "desir": 150, "develop": [6, 8, 10, 149], "discret": 100, "distribut": 10, "earthquak": [4, 8, 147], "ecdf": 125, "end": 151, "epoch_time_to_utc_datetim": 140, "evalu": [1, 5, 9, 151, 152, 154, 156], "event": [0, 147, 157], "event_count": [19, 58], "exampl": 154, "expect": 157, "file": [0, 2], "filter": [0, 20, 150, 152], "filter_spati": 21, "find_nearest": 101, "forecast": [1, 2, 5, 7, 8, 9, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 147, 151, 152, 154, 155, 156, 157], "format": 2, "from": [0, 4, 6, 151], "from_custom": 59, "from_datafram": 22, "from_dict": 23, "func_invers": 102, "function": 0, "generate_aftershock_region": 85, "get_bvalu": 24, "get_csep_format": 25, "get_cumulative_number_of_ev": 26, "get_datafram": 48, "get_datetim": 27, "get_depth": 28, "get_epoch_tim": 29, "get_event_by_id": 104, "get_expected_r": 49, "get_index_of": 60, "get_latitud": [30, 61], "get_longitud": [31, 62], "get_magnitud": [32, 64], "get_magnitude_index": 63, "get_quantil": 126, "get_rat": 65, "github": 10, "global": 154, "global_region": 86, "goal": 8, "greater_equal_ecdf": 127, "grid": [1, 2, 4, 5, 7, 152, 155, 156], "griddedforecast": [56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74], "i_1": 152, "includ": 0, "increase_grid_resolut": 87, "independ": 147, "inform": 0, "instal": 6, "interv": 150, "introduct": [0, 3], "italy_csep_region": 88, "kagan": 152, "l": 7, "length_in_second": 33, "less_equal_ecdf": 128, "librari": [150, 151, 152, 155, 156, 157], "likelihood": 7, "likelihood_test": 76, "list": 8, "load": [0, 4, 9, 150, 151, 152, 155, 156, 157], "load_ascii": [50, 66], "load_ascii_catalog": 34, "load_catalog": [35, 92], "load_catalog_forecast": 93, "load_gridded_forecast": 94, "load_json": 36, "load_stochastic_event_set": 95, "loader": 0, "m": 7, "magnitud": [0, 7, 52, 68, 150, 151, 154, 157], "magnitude_bin": 89, "magnitude_count": [37, 51, 67], "magnitude_test": [12, 77], "masked_region": 90, "max_or_non": 129, "metadata": 0, "millis_to_dai": 141, "min_magnitud": [53, 69], "min_or_non": 130, "mock": 1, "multi": [4, 156], "multipl": 154, "n": 7, "nearest_index": 103, "new": 10, "note": 10, "number": [7, 151, 156], "number_test": [13, 78], "obtain": 151, "oper": [9, 150], "paired_t_test": 79, "panda": 0, "parse_csep_templ": 91, "perform": 151, "pip": 6, "plot": [3, 9, 38, 70, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 151, 152, 154, 155, 156, 157], "plot_basemap": 107, "plot_calibration_test": 108, "plot_catalog": 109, "plot_comparison_test": 110, "plot_cumulative_events_versus_tim": 111, "plot_distribution_test": 112, "plot_ecdf": 113, "plot_histogram": 114, "plot_likelihood_test": 115, 
"plot_magnitude_histogram": 116, "plot_magnitude_test": 117, "plot_magnitude_versus_tim": 118, "plot_number_test": 119, "plot_poisson_consistency_test": 120, "plot_spatial_dataset": 121, "plot_spatial_test": 122, "poisson": [152, 156], "poisson_evalu": [75, 76, 77, 78, 79, 80, 81], "poisson_inverse_cdf": 131, "poisson_joint_log_likelihood_ndarrai": 132, "poisson_log_likelihood": 133, "prepar": 1, "project": 8, "properti": [152, 155], "pseudo": 7, "pseudolikelihood_test": 14, "public": [1, 148], "pycsep": [0, 6, 8, 10], "quadkei": 4, "quadtre": [2, 4, 156], "query_bsi": 96, "query_comcat": 97, "quick": 157, "rang": [150, 154], "refer": [1, 7, 9], "referenc": 148, "region": [4, 9, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 150, 151, 156, 157], "releas": 10, "requir": [150, 151, 152, 155, 156, 157], "resolut": [4, 156], "result": [151, 152, 154, 156], "roadmap": 149, "roc": 152, "saniti": 157, "scale_to_test_d": 71, "score": 152, "search": 105, "select": 154, "set": 147, "singl": [4, 156], "sourc": [6, 10], "space": [0, 152], "spatial": [4, 7, 150, 151, 152, 154, 156, 157], "spatial_count": [39, 54, 72], "spatial_magnitude_count": 40, "spatial_test": [15, 80], "start": 151, "stat": [123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135], "statist": 9, "stochast": 147, "store": 152, "strptime_to_utc_datetim": 142, "strptime_to_utc_epoch": 143, "sum": 73, "sup_dist": 134, "sup_dist_na": 135, "tabl": [0, 1, 2, 3, 4, 147], "target_event_r": 74, "term": 147, "test": [1, 4, 7, 151, 152, 156], "theori": 7, "time": [0, 9, 147, 150, 151], "time_util": [136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146], "timedelta_from_year": 144, "to_datafram": 41, "to_dict": 42, "tool": 8, "train": 156, "tutori": 153, "type": 9, "u": 8, "ucerf3catalog": 46, "update_catalog_stat": 43, "us": 6, "utc_now_datetim": 145, "utc_now_epoch": 146, "util": [4, 9, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146], "v0": 149, "virtualenv": 6, "w_test": 81, "work": [2, 157], "write": [0, 150], "write_ascii": [44, 55], "write_json": 45}}) \ No newline at end of file diff --git a/tutorials/catalog_filtering.html b/tutorials/catalog_filtering.html new file mode 100644 index 00000000..3615a3ca --- /dev/null +++ b/tutorials/catalog_filtering.html @@ -0,0 +1,357 @@ + + + + + + + + + Catalogs operations — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + +
+

Catalogs operations

+

This example demonstrates how to perform standard operations on a catalog. It requires an internet +connection to access ComCat.

+
+
Overview:
  1. Load catalog from ComCat
  2. Create filtering parameters in space, magnitude, and time
  3. Filter catalog using desired filters
  4. Write catalog to standard CSEP format
+
+
+

Load required libraries

+

Most of the core functionality can be imported from the top-level csep package. Utilities are available from the +csep.utils subpackage.

+
import csep
+from csep.core import regions
+from csep.utils import time_utils, comcat
+# sphinx_gallery_thumbnail_path = '_static/CSEP2_Logo_CMYK.png'
+
+
+
+
+

Load catalog

+

PyCSEP provides access to the ComCat web API using csep.query_comcat() and to the Bollettino Sismico Italiano +API using csep.query_bsi(). These functions require a datetime.datetime to specify the start and end +dates.

+ +
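A minimal sketch of such a query, using the modules imported above (the dates here are illustrative placeholders, not the exact values used to produce the output below):

start_time = time_utils.strptime_to_utc_datetime('2019-01-01 00:00:00.0')
+end_time = time_utils.utc_now_datetime()
+
+# Query ComCat for events of M >= 2.5 in the chosen (placeholder) time window
+catalog = csep.query_comcat(start_time, end_time, min_magnitude=2.5)
+print(catalog)
+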
Fetched ComCat catalog in 15.215977907180786 seconds.
+
+Downloaded catalog from ComCat with following parameters
+Start Date: 2019-01-01 12:01:46.950000+00:00
+End Date: 2024-10-21 13:23:13.650000+00:00
+Min Latitude: 31.5008 and Max Latitude: 42.8543333333333
+Min Longitude: -125.3975 and Max Longitude: -113.1001667
+Min Magnitude: 2.5
+Found 11363 events in the ComCat catalog.
+
+        Name: None
+
+        Start Date: 2019-01-01 12:01:46.950000+00:00
+        End Date: 2024-10-21 13:23:13.650000+00:00
+
+        Latitude: (31.5008, 42.8543333333333)
+        Longitude: (-125.3975, -113.1001667)
+
+        Min Mw: 2.5
+        Max Mw: 7.1
+
+        Event Count: 11363
+
+
+
+
+

Filter to magnitude range

+

Use the csep.core.catalogs.AbstractBaseCatalog.filter() method to filter the catalog. The filter function uses the field +names stored in the numpy structured array. Standard field names include ‘magnitude’, ‘origin_time’, ‘latitude’, ‘longitude’, +and ‘depth’.

+
catalog = catalog.filter('magnitude >= 3.5')
+print(catalog)
+
+
+
Name: None
+
+Start Date: 2019-01-13 09:35:49.870000+00:00
+End Date: 2024-10-21 00:32:11.130000+00:00
+
+Latitude: (31.5018, 42.7775)
+Longitude: (-125.3868333, -113.1191667)
+
+Min Mw: 3.5
+Max Mw: 7.1
+
+Event Count: 1371
+
+
+
+
+

Filter to desired time interval

+

We need to define desired start and end times for the catalog using a time-string format. PyCSEP uses integer times for doing +time manipulations. Time strings can be converted into integer times using +csep.utils.time_utils.strptime_to_utc_epoch(). The csep.core.catalogs.AbstractBaseCatalog.filter() method also +accepts a list of strings to apply multiple filters. Note: the number of events may differ if this script is run +at a later date than shown in this example.

+
# create epoch times from time-string formats
+start_epoch = csep.utils.time_utils.strptime_to_utc_epoch('2019-07-06 03:19:54.040000')
+end_epoch = csep.utils.time_utils.strptime_to_utc_epoch('2019-09-21 03:19:54.040000')
+
+# filter catalog to magnitude ranges and times
+filters = [f'origin_time >= {start_epoch}', f'origin_time < {end_epoch}']
+catalog = catalog.filter(filters)
+print(catalog)
+
+
+
Name: None
+
+Start Date: 2019-07-06 03:20:36.080000+00:00
+End Date: 2019-09-19 09:59:46.580000+00:00
+
+Latitude: (32.2998352, 41.1244)
+Longitude: (-125.0241667, -115.3243332)
+
+Min Mw: 3.5
+Max Mw: 5.5
+
+Event Count: 356
+
+
+
+
+

Filter to desired spatial region

+

We use a circular spatial region with a radius of 3 average fault lengths, as given by the Wells and Coppersmith scaling +relationship. PyCSEP provides csep.core.regions.generate_aftershock_region() to create an aftershock region +based on the magnitude and epicenter of an event.

+

We use csep.utils.comcat.get_event_by_id(), which wraps the ComCat API provided by the USGS, to obtain the event information +for the M7.1 Ridgecrest mainshock.

+
m71_event_id = 'ci38457511'
+event = comcat.get_event_by_id(m71_event_id)
+m71_epoch = time_utils.datetime_to_utc_epoch(event.time)
+
+# build aftershock region
+aftershock_region = regions.generate_aftershock_region(event.magnitude, event.longitude, event.latitude)
+
+# apply new aftershock region and magnitude of completeness
+catalog = catalog.filter_spatial(aftershock_region).apply_mct(event.magnitude, m71_epoch)
+print(catalog)
+
+
+
Name: None
+
+Start Date: 2019-07-06 03:22:35.630000+00:00
+End Date: 2019-09-08 14:07:23.350000+00:00
+
+Latitude: (35.448, 36.1823333)
+Longitude: (-117.8875, -117.2788333)
+
+Min Mw: 3.5
+Max Mw: 5.5
+
+Event Count: 234
+
+
+
+
+

Write catalog

+

Use csep.core.catalogs.AbstractBaseCatalog.write_ascii() to write the catalog into the comma-separated value (CSV) format.

+
catalog.write_ascii('2019-11-11-comcat.csv')
+
+
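If needed, the written file can be loaded back into a catalog object (a brief sketch; the 'csep-csv' type is assumed here for files produced by write_ascii()):

# Reload the CSV written above
catalog = csep.load_catalog('2019-11-11-comcat.csv', type='csep-csv')
+print(catalog)
+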
+

Total running time of the script: (0 minutes 18.551 seconds)

+ +

Gallery generated by Sphinx-Gallery

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/tutorials/catalog_forecast_evaluation.html b/tutorials/catalog_forecast_evaluation.html new file mode 100644 index 00000000..e9865cc9 --- /dev/null +++ b/tutorials/catalog_forecast_evaluation.html @@ -0,0 +1,371 @@ + + + + + + + + + Catalog-based Forecast Evaluation — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + +
+

Catalog-based Forecast Evaluation

+

This example shows how to evaluate a catalog-based forecast using the Number test, the simplest of the +evaluations.

+
+
Overview:
  1. Define forecast properties (time horizon, spatial region, etc).
  2. Access catalog from ComCat
  3. Filter catalog to be consistent with the forecast properties
  4. Apply catalog-based number test to catalog
  5. Visualize results for catalog-based forecast
+
+
+
+

Load required libraries

+

Most of the core functionality can be imported from the top-level csep package. Utilities are available from the +csep.utils subpackage.

+
import csep
+from csep.core import regions, catalog_evaluations
+from csep.utils import datasets, time_utils
+
+
+
+
+

Define start and end times of forecast

+

Forecasts should define a time horizon in which they are valid. The choice is flexible for catalog-based forecasts, because +the catalogs can be filtered to accommodate multiple end-times. Conceptually, these should be separate forecasts.

+
start_time = time_utils.strptime_to_utc_datetime("1992-06-28 11:57:35.0")
+end_time = time_utils.strptime_to_utc_datetime("1992-07-28 11:57:35.0")
+
+
+
+
+

Define spatial and magnitude regions

+

Before we can compute the bin-wise rates we need to define a spatial region and a set of magnitude bin edges. The magnitude +bin edges are the inclusive lower bound of each bin, except for the last bin, which is treated as extending to infinity. We can +bind these to the forecast object. This can also be done by passing them as keyword arguments +into csep.load_catalog_forecast().

+
# Magnitude bins properties
+min_mw = 4.95
+max_mw = 8.95
+dmw = 0.1
+
+# Create space and magnitude regions. The forecast is already filtered in space and magnitude
+magnitudes = regions.magnitude_bins(min_mw, max_mw, dmw)
+region = regions.california_relm_region()
+
+# Bind region information to the forecast (this will be used for binning of the catalogs)
+space_magnitude_region = regions.create_space_magnitude_region(region, magnitudes)
+
+
+
+
+

Load catalog forecast

+

To reduce the file size of this example, we’ve already filtered the catalogs to the appropriate magnitudes and +spatial locations. The original forecast was computed for 1 year following the start date, so we still need to filter the +catalog in time. We can do this by passing a list of filtering arguments to the forecast or updating the class.

+

By default, the forecast loads catalogs on-demand, so the filters are applied as the catalog loads. On-demand means that +until we loop over the forecast in some capacity, none of the catalogs are actually loaded.

+

Finer-grained control and optimizations can be achieved by creating a csep.core.forecasts.CatalogForecast directly.

+
forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname,
+                                      start_time = start_time, end_time = end_time,
+                                      region = space_magnitude_region,
+                                      apply_filters = True)
+
+# Assign filters to forecast
+forecast.filters = [f'origin_time >= {forecast.start_epoch}', f'origin_time < {forecast.end_epoch}']
+
+
+
+
+

Obtain evaluation catalog from ComCat

+

The csep.core.forecasts.CatalogForecast provides a method to compute the expected number of events in spatial cells. This +requires a region with magnitude information.

+

We need to filter the ComCat catalog to be consistent with the forecast. This can be done either through the ComCat API +or using catalog filtering strings. Here we’ll use the ComCat API to make the data access quicker for this example. We +still need to filter the observed catalog in space though.

+
# Obtain Comcat catalog and filter to region.
+comcat_catalog = csep.query_comcat(start_time, end_time, min_magnitude=forecast.min_magnitude)
+
+# Filter observed catalog using the same region as the forecast
+comcat_catalog = comcat_catalog.filter_spatial(forecast.region)
+print(comcat_catalog)
+
+# Plot the catalog
+comcat_catalog.plot()
+
+
+catalog forecast evaluation
Fetched ComCat catalog in 0.31020069122314453 seconds.
+
+Downloaded catalog from ComCat with following parameters
+Start Date: 1992-06-28 12:00:45+00:00
+End Date: 1992-07-24 18:14:36.250000+00:00
+Min Latitude: 33.901 and Max Latitude: 36.705
+Min Longitude: -118.067 and Max Longitude: -116.285
+Min Magnitude: 4.95
+Found 19 events in the ComCat catalog.
+
+        Name: None
+
+        Start Date: 1992-06-28 12:00:45+00:00
+        End Date: 1992-07-24 18:14:36.250000+00:00
+
+        Latitude: (33.901, 36.705)
+        Longitude: (-118.067, -116.285)
+
+        Min Mw: 4.95
+        Max Mw: 6.3
+
+        Event Count: 19
+
+
+<GeoAxes: >
+
+
+
+
+

Perform number test

+

We can perform the Number test on the catalog-based forecast using the observed catalog we obtained from ComCat.

+ +
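The test itself is a single call to catalog_evaluations.number_test(), using the forecast and evaluation catalog prepared above:

number_test_result = catalog_evaluations.number_test(forecast, comcat_catalog)
+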
Processed 1 catalogs in 0.0012161731719970703 seconds
+Processed 2 catalogs in 0.0016498565673828125 seconds
+Processed 3 catalogs in 0.0020482540130615234 seconds
+Processed 4 catalogs in 0.0023813247680664062 seconds
+Processed 5 catalogs in 0.002633333206176758 seconds
+Processed 6 catalogs in 0.0029969215393066406 seconds
+Processed 7 catalogs in 0.003269672393798828 seconds
+Processed 8 catalogs in 0.003643035888671875 seconds
+Processed 9 catalogs in 0.004563808441162109 seconds
+Processed 10 catalogs in 0.004910945892333984 seconds
+Processed 20 catalogs in 0.008572578430175781 seconds
+Processed 30 catalogs in 0.01243734359741211 seconds
+Processed 40 catalogs in 0.017767667770385742 seconds
+Processed 50 catalogs in 0.022856950759887695 seconds
+Processed 60 catalogs in 0.026311874389648438 seconds
+Processed 70 catalogs in 0.029581785202026367 seconds
+Processed 80 catalogs in 0.03316640853881836 seconds
+Processed 90 catalogs in 0.036905527114868164 seconds
+Processed 100 catalogs in 0.04017376899719238 seconds
+Processed 200 catalogs in 0.07233262062072754 seconds
+Processed 300 catalogs in 0.10814189910888672 seconds
+Processed 400 catalogs in 0.14277887344360352 seconds
+Processed 500 catalogs in 0.20580387115478516 seconds
+Processed 600 catalogs in 0.23948097229003906 seconds
+Processed 700 catalogs in 0.27393269538879395 seconds
+Processed 800 catalogs in 0.34024786949157715 seconds
+Processed 900 catalogs in 0.3736095428466797 seconds
+Processed 1000 catalogs in 0.40767502784729004 seconds
+Processed 2000 catalogs in 0.8836994171142578 seconds
+Processed 3000 catalogs in 1.3175408840179443 seconds
+Processed 4000 catalogs in 1.7736868858337402 seconds
+Processed 5000 catalogs in 2.218679904937744 seconds
+Processed 6000 catalogs in 2.6744518280029297 seconds
+Processed 7000 catalogs in 3.1464571952819824 seconds
+Processed 8000 catalogs in 3.5643150806427 seconds
+Processed 9000 catalogs in 4.060769557952881 seconds
+Processed 10000 catalogs in 4.497692346572876 seconds
+
+
+
+
+

Plot number test result

+

We can create a simple visualization of the number test from the evaluation result class.

+
ax = number_test_result.plot(show=True)
+
+
+Number Test

Total running time of the script: (0 minutes 10.215 seconds)

+ +

Gallery generated by Sphinx-Gallery

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/tutorials/gridded_forecast_evaluation.html b/tutorials/gridded_forecast_evaluation.html new file mode 100644 index 00000000..4f81e373 --- /dev/null +++ b/tutorials/gridded_forecast_evaluation.html @@ -0,0 +1,401 @@ + + + + + + + + + Grid-based Forecast Evaluation — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + +
+

Grid-based Forecast Evaluation

+

This example demonstrates how to evaluate a grid-based, time-independent forecast. Grid-based +forecasts assume that the event count in each bin follows a Poisson distribution. Therefore, Poisson-based evaluations +should be used to evaluate grid-based forecasts.

+
+
Overview:
  1. Define forecast properties (time horizon, spatial region, etc).
  2. Obtain evaluation catalog
  3. Apply Poissonian evaluations for grid-based forecasts
  4. Store evaluation results using JSON format
  5. Visualize evaluation results
+
+
+
+

Load required libraries

+

Most of the core functionality can be imported from the top-level csep package. Utilities are available from the +csep.utils subpackage.

+
import csep
+from csep.core import poisson_evaluations as poisson
+from csep.utils import datasets, time_utils, plots
+
+# Needed to show plots from the terminal
+import matplotlib.pyplot as plt
+
+
+
+
+

Define forecast properties

+

We choose a Time-independent Forecast to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note that +the start and end dates should be chosen based on the creation of the forecast. This is important for time-independent forecasts +because they can be rescaled to any arbitrary time period.

+
from csep.utils.stats import get_Kagan_I1_score
+
+start_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0')
+end_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0')
+
+
+
+
+

Load forecast

+

For this example, we provide the example forecast data set along with the main repository. The filepath is relative +to the root directory of the package. You can specify any file location for your forecasts.

+ +
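A sketch of the load call (the Helmstetter aftershock example data set ships with the package in csep.utils.datasets):

forecast = csep.load_gridded_forecast(datasets.helmstetter_aftershock_fname,
+                                      start_date=start_date,
+                                      end_date=end_date,
+                                      name='helmstetter_aftershock')
+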
+
+

Load evaluation catalog

+

We will download the evaluation catalog from ComCat (this step requires an internet connection). We can use the ComCat API +to filter the catalog in both time and magnitude. See the catalog filtering example for more information on how to +filter the catalog in space and time manually.

+
print("Querying comcat catalog")
+catalog = csep.query_comcat(forecast.start_time, forecast.end_time, min_magnitude=forecast.min_magnitude)
+print(catalog)
+
+
+
Querying comcat catalog
+Fetched ComCat catalog in 6.506331920623779 seconds.
+
+Downloaded catalog from ComCat with following parameters
+Start Date: 2007-02-26 12:19:54.530000+00:00
+End Date: 2011-02-18 17:47:35.770000+00:00
+Min Latitude: 31.9788333 and Max Latitude: 41.1444
+Min Longitude: -125.0161667 and Max Longitude: -114.8398
+Min Magnitude: 4.96
+Found 34 events in the ComCat catalog.
+
+        Name: None
+
+        Start Date: 2007-02-26 12:19:54.530000+00:00
+        End Date: 2011-02-18 17:47:35.770000+00:00
+
+        Latitude: (31.9788333, 41.1444)
+        Longitude: (-125.0161667, -114.8398)
+
+        Min Mw: 4.96
+        Max Mw: 7.2
+
+        Event Count: 34
+
+
+
+
+

Filter evaluation catalog in space

+

We need to remove events in the evaluation catalog outside the valid region specified by the forecast.

+ +
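This is a single spatial filter against the forecast region:

catalog = catalog.filter_spatial(forecast.region)
+print(catalog)
+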
Name: None
+
+Start Date: 2007-02-26 12:19:54.530000+00:00
+End Date: 2011-02-18 17:47:35.770000+00:00
+
+Latitude: (31.9788333, 41.1155)
+Longitude: (-125.0161667, -115.0481667)
+
+Min Mw: 4.96
+Max Mw: 7.2
+
+Event Count: 32
+
+
+
+
+

Compute Poisson spatial test

+

Simply call the csep.core.poisson_evaluations.spatial_test() function to evaluate the forecast using the specified +evaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose +option prints the status of the simulations to the standard output.

+
spatial_test_result = poisson.spatial_test(forecast, catalog)
+
+
+
+
+

Store evaluation results

+

PyCSEP provides easy ways of storing objects to a JSON format using csep.write_json(). The evaluations can be read +back into the program for plotting using csep.load_evaluation_result().

+
csep.write_json(spatial_test_result, 'example_spatial_test.json')
+
+
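Reading the result back follows the same pattern (a brief sketch; the filename matches the one written above):

# Reload the stored evaluation result for later plotting
stored_result = csep.load_evaluation_result('example_spatial_test.json')
+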
+
+
+

Plot spatial test results

+

We provide the function csep.utils.plots.plot_poisson_consistency_test() to visualize the evaluation results from +consistency tests.

+
ax = plots.plot_poisson_consistency_test(spatial_test_result,
+                                        plot_args={'xlabel': 'Spatial likelihood'})
+plt.show()
+
+
+Poisson S-Test
+
+

Plot ROC Curves

+

We can also plot Receiver Operating Characteristic (ROC) curves based on the forecast and the testing catalog. +In the figure below, the True Positive Rate is the normalized cumulative forecast rate, after sorting cells in decreasing order of rate. +The False Positive Rate is the normalized cumulative area. +The dashed line is the ROC curve for a uniform forecast, meaning the likelihood for an earthquake to occur at any position is the same. +The further the ROC curve of a forecast is from that of the uniform forecast, the more specific the forecast is. +When comparing the forecast ROC curve against a catalog, one can evaluate whether the forecast is more or less specific (or smooth) at different levels of seismic rate.

+
+
Note: This figure just shows an example of plotting a concentration ROC curve for a gridded forecast against an observed catalog.

If “linear=True” the diagram is represented using a linear x-axis. +If “linear=False” the diagram is represented using a logarithmic x-axis.

+
+
+
print("Plotting concentration ROC curve")
+_= plots.plot_concentration_ROC_diagram(forecast, catalog, linear=True)
+
+
+Concentration ROC Curve
Plotting concentration ROC curve
+
+
+
+
Plot ROC and Molchan curves using the alarm-based approach
+
+

In this script, we generate ROC diagrams and Molchan diagrams using the alarm-based approach to evaluate the predictive +performance of models. This method exploits contingency table analysis to evaluate the predictive capabilities of +forecasting models. By analysing the contingency table data, we determine the ROC curve and Molchan trajectory and +estimate the Area Skill Score to assess the accuracy and reliability of the prediction models. The generated graphs +visually represent the prediction performance.

+
# Note: If "linear=True" the diagram is represented using a linear x-axis.
+#       If "linear=False" the diagram is represented using a logarithmic x-axis.
+
+print("Plotting ROC curve from the contingency table")
+# Set linear=True to obtain a linear x-axis, False to obtain a logarithmic x-axis.
+_ = plots.plot_ROC_diagram(forecast, catalog, linear=True)
+
+print("Plotting Molchan curve from the contingency table and the Area Skill Score")
+# Set linear=True to obtain a linear x-axis, False to obtain a logarithmic x-axis.
+_ = plots.plot_Molchan_diagram(forecast, catalog, linear=True)
+
+
+
  • ROC curve from the contingency table
  • Molchan diagram from the contingency table
+
Plotting ROC curve from the contingency table
+Plotting Molchan curve from the contingency table and the Area Skill Score
+
+
+
+
+

Calculate Kagan’s I_1 score

+

We can also compute Kagan’s I_1 score for a gridded forecast +(see Kagan, Y. Y. (2009), Testing long-term earthquake forecasts: likelihood methods and error diagrams, Geophys. J. Int., 177, 532–542).

+
I_1 = get_Kagan_I1_score(forecast, catalog)
+print("I_1score is: ", I_1)
+
+
+
I_1score is:  [2.31435371]
+
+
+

Total running time of the script: (0 minutes 20.346 seconds)

+ +

Gallery generated by Sphinx-Gallery

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/tutorials/index.html b/tutorials/index.html new file mode 100644 index 00000000..1231464d --- /dev/null +++ b/tutorials/index.html @@ -0,0 +1,196 @@ + + + + + + + + + Tutorials — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Tutorials

+
+

Catalogs operations

+
Catalogs operations
+
+

Catalog-based Forecast Evaluation

+
Catalog-based Forecast Evaluation
+
+

Grid-based Forecast Evaluation

+
Grid-based Forecast Evaluation
+
+

Plot customizations

+
Plot customizations
+
+

Plotting gridded forecast

+
Plotting gridded forecast
+
+

Quadtree Grid-based Forecast Evaluation

+
Quadtree Grid-based Forecast Evaluation
+
+

Working with catalog-based forecasts

+
Working with catalog-based forecasts
+
+
+

Gallery generated by Sphinx-Gallery

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/tutorials/plot_customizations.html b/tutorials/plot_customizations.html new file mode 100644 index 00000000..bd36fdde --- /dev/null +++ b/tutorials/plot_customizations.html @@ -0,0 +1,369 @@ + + + + + + + + + Plot customizations — pyCSEP v0.6.3 documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + +
+

Plot customizations

+

This example shows how to include some advanced options in the spatial visualization +of Gridded Forecasts and Evaluation Results.

+
+
Overview:
  1. Define optional plotting arguments
  2. Set extent of maps
  3. Visualize selected magnitude bins
  4. Plot global maps
  5. Plot multiple Evaluation Results
+
+
+
+

Example 1: Spatial dataset plot arguments

+

Load required libraries

+
import csep
+import cartopy
+import numpy
+from csep.utils import datasets, plots
+
+import matplotlib.pyplot as plt
+
+
+

Load a Grid Forecast from the datasets

+
forecast = csep.load_gridded_forecast(datasets.hires_ssm_italy_fname,
+                                      name='Werner, et al (2010) Italy')
+
+
+

Selecting plotting arguments

+

Create a dictionary containing the plot arguments

+
args_dict = {'title': 'Italy 10 year forecast',
+             'grid_labels': True,
+             'borders': True,
+             'feature_lw': 0.5,
+             'basemap': 'ESRI_imagery',
+             'cmap': 'rainbow',
+             'alpha_exp': 0.8,
+             'projection': cartopy.crs.Mercator()}
+
+
+

These arguments are, in order:

+
  • Assign a title
  • Set labels to the geographic axes
  • Draw country borders
  • Set a linewidth of 0.5 for country borders
  • Select ESRI Imagery as a basemap
  • Assign 'rainbow' as the colormap; possible values come from the matplotlib.cm library
  • Set 0.8 as the exponent of the transparency function (default is 0 for constant alpha, whereas 1 is linear)
  • Pass a cartopy.crs.Projection() object as the map projection
+

The complete description of plot arguments can be found in csep.utils.plots.plot_spatial_dataset()

+

Plotting the dataset

+

The map extent can be defined; otherwise, the extent of the data is used. The dictionary defined above must be passed as the plot_args argument.

+
ax = forecast.plot(extent=[3, 22, 35, 48],
+                   show=True,
+                   plot_args=args_dict)
+
+
+Italy 10 year forecast
+
+

Example 2: Plot a global forecast and a selected magnitude bin range

+

Load a Global Forecast from the datasets

+

A downsampled version of the GEAR1 forecast can be found in datasets.

+
forecast = csep.load_gridded_forecast(datasets.gear1_downsampled_fname,
+                                      name='GEAR1 Forecast (downsampled)')
+
+
+

Filter by magnitudes

+

We get the rate of events in the magnitude range 5.95 <= M_w <= 7.5

+ +
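A sketch of the magnitude selection (low_bound and upper_bound are taken from the text above and reused in the plot label below; the indexing assumes the forecast data array is ordered as [spatial cell, magnitude bin]):

low_bound = 5.95
+upper_bound = 7.5
+mw_bins = forecast.get_magnitudes()
+mw_ind = numpy.where(numpy.logical_and(mw_bins >= low_bound, mw_bins <= upper_bound))[0]
+
+# Keep only the rates of the selected magnitude bins
+rates_mw = forecast.data[:, mw_ind]
+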

We get the total rate between these magnitudes

+
rate_sum = rates_mw.sum(axis=1)
+
+
+

The data is stored in a 1D array, so it must be projected onto the region's 2D Cartesian grid.

+
rate_sum = forecast.region.get_cartesian(rate_sum)
+
+
+

Define plot arguments

+

We define the arguments and a global projection, centered at $lon=-180$

+
plot_args = {'figsize': (10,6), 'coastline':True, 'feature_color':'black',
+             'projection': cartopy.crs.Robinson(central_longitude=180.0),
+             'title': forecast.name, 'grid_labels': False,
+             'cmap': 'magma',
+             'clabel': r'$\log_{10}\lambda\left(M_w \in [{%.2f},\,{%.2f}]\right)$ per '
+                       r'${%.1f}^\circ\times {%.1f}^\circ $ per forecast period' %
+                       (low_bound, upper_bound, forecast.region.dh, forecast.region.dh)}
+
+
+

Plotting the dataset
+To plot a global forecast, we must set the option set_global=True, which is required by cartopy to handle
+the extent of the plot internally.

+ +GEAR1 Forecast (downsampled)
+
+

Example 3: Plot a catalog

+

Load a Catalog from ComCat

+
start_time = csep.utils.time_utils.strptime_to_utc_datetime('1995-01-01 00:00:00.0')
+end_time = csep.utils.time_utils.strptime_to_utc_datetime('2015-01-01 00:00:00.0')
+min_mag = 3.95
+catalog = csep.query_comcat(start_time, end_time, min_magnitude=min_mag, verbose=False)
+
+# **Define plotting arguments**
+plot_args = {'basemap': 'ESRI_terrain',
+             'markersize': 2,
+             'markercolor': 'red',
+             'alpha': 0.3,
+             'mag_scale': 7,
+             'legend': True,
+             'legend_loc': 3,
+             'mag_ticks': [4.0, 5.0, 6.0, 7.0]}
+
+
+
Fetched ComCat catalog in 20.1857488155365 seconds.
+
+
+

These arguments are, in order:

  • Assign the ESRI_terrain web service as the basemap
  • Set a minimum marker size of 2 with red color
  • Set a transparency of 0.3
  • Set mag_scale, used to exponentially scale the marker size with respect to magnitude. Recommended values are 1-8
  • Set legend to True with location 3 (lower-left corner)
  • Set a list of magnitude ticks to display in the legend

The complete description of the plot arguments can be found in csep.utils.plots.plot_catalog()

+
# **Plot the catalog**
+ax = catalog.plot(show=False, plot_args=plot_args)
+
+
[Figure: plot customizations]
+
+

Example 4: Plot multiple evaluation results

+

Load L-test results from example .json files (see Grid-based Forecast Evaluation for information on calculating and storing evaluation results)

+
L_results = [csep.load_evaluation_result(i) for i in datasets.l_test_examples]
+args = {'figsize': (6,5),
+        'title': r'$\mathcal{L}-\mathrm{test}$',
+        'title_fontsize': 18,
+        'xlabel': 'Log-likelihood',
+        'xticks_fontsize': 9,
+        'ylabel_fontsize': 9,
+        'linewidth': 0.8,
+        'capsize': 3,
+        'hbars':True,
+        'tight_layout': True}
+
+
+

A description of the plot arguments can be found in plot_poisson_consistency_test(). We set one_sided_lower=True, as is usual for an L-test, where the model is rejected if the observed likelihood is located within the lower tail of the simulated distribution.

+
ax = plots.plot_poisson_consistency_test(L_results, one_sided_lower=True, plot_args=args)
+
+# Needed to show plots if running as script
+plt.show()
+
+
[Figure: $\mathcal{L}-\mathrm{test}$]

Total running time of the script: (0 minutes 46.068 seconds)


Gallery generated by Sphinx-Gallery

\ No newline at end of file
diff --git a/tutorials/plot_gridded_forecast.html b/tutorials/plot_gridded_forecast.html new file mode 100644 index 00000000..5cf28b3d --- /dev/null +++ b/tutorials/plot_gridded_forecast.html

Plotting gridded forecast — pyCSEP v0.6.3 documentation

Plotting gridded forecast

+

This example shows you how to load a gridded forecast stored in the default ASCII format.

+
+

Load required libraries

+

Most of the core functionality can be imported from the top-level csep package. Utilities are available from the csep.utils subpackage.

+
import csep
+from csep.utils import datasets, time_utils
+
+
+
+
+

Define forecast properties

+

We choose a time-independent forecast to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note, the start and end date should be chosen based on the creation of the forecast. This is important for time-independent forecasts because they can be rescaled to any arbitrary time period.

+
start_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0')
+end_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0')
+
+
+
+
+

Load forecast

+

For this example, we provide the example forecast data set along with the main repository. The filepath is relative to the root directory of the package. You can specify any file location for your forecasts.

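The loading call is not shown on this page. A minimal sketch, assuming the Helmstetter mainshock example file shipped in csep.utils.datasets (the name helmstetter_mainshock_fname is inferred from the figure label below):

# Load the example forecast shipped with the repository
# (dataset name is an assumption inferred from the figure label)
forecast = csep.load_gridded_forecast(datasets.helmstetter_mainshock_fname,
                                      start_date=start_date,
                                      end_date=end_date,
                                      name='helmstetter_mainshock')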
+
+

Plot forecast

+

The forecast object provides csep.core.forecasts.GriddedForecast.plot() to plot a gridded forecast. This function returns a matplotlib axes, so more specific attributes can be set on the figure, as sketched after the figure below.

+
ax = forecast.plot(show=True)
+
+
[Figure: helmstetter_mainshock]
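Because plot() returns a matplotlib Axes, further customization can be applied after plotting. A minimal sketch (the title text and output filename are illustrative, not part of the original example):

import matplotlib.pyplot as plt

ax = forecast.plot(show=False)
ax.set_title('Helmstetter mainshock forecast')  # illustrative title
ax.figure.savefig('forecast_map.png', dpi=150)  # illustrative filename
plt.show()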

Total running time of the script: (0 minutes 0.938 seconds)


Gallery generated by Sphinx-Gallery

\ No newline at end of file
diff --git a/tutorials/quadtree_gridded_forecast_evaluation.html b/tutorials/quadtree_gridded_forecast_evaluation.html new file mode 100644 index 00000000..d934ac08 --- /dev/null +++ b/tutorials/quadtree_gridded_forecast_evaluation.html

Quadtree Grid-based Forecast Evaluation — pyCSEP v0.6.3 documentation

Quadtree Grid-based Forecast Evaluation

+

This example demonstrates how to create a quadtree-based single-resolution grid and a multi-resolution grid. The multi-resolution grid is created from an earthquake catalog, in which the seismic density determines the size of each grid cell. To create a multi-resolution grid, we select a threshold (\(N_{max}\)) as the maximum number of earthquakes allowed per cell. For a single-resolution grid, we simply select a zoom level (\(L\)). The number of cells in a single-resolution grid equals \(4^L\); the zoom level L=11 leads to about 4.2 million cells, the closest to a 0.1° x 0.1° grid.
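As a quick check of the cell-count formula, a minimal sketch:

# A single-resolution quadtree grid at zoom level L contains 4**L cells
for L in (6, 11):
    print(f'L={L}: {4 ** L} cells')  # L=6 -> 4096, L=11 -> 4194304 (~4.2 million)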

+

We use these grids to create and evaluate a time-independent forecast. Grid-based forecasts assume the variability of the forecasts is Poissonian. Therefore, Poisson-based evaluations should be used to evaluate grid-based forecasts defined using quadtree regions.

+
+
Overview:
  1. Define spatial grids
     • Multi-resolution grid
     • Single-resolution grid
  2. Load forecasts
     • Multi-resolution forecast
     • Single-resolution forecast
  3. Load evaluation catalog
  4. Apply Poissonian evaluations for both grid-based forecasts
  5. Visualize evaluation results
+
+
+
+

Load required libraries

+

Most of the core functionality can be imported from the top-level csep package. Utilities are available from the csep.utils subpackage.

+
import numpy
+import pandas
+from csep.core import poisson_evaluations as poisson
+from csep.utils import time_utils, plots
+from csep.core.regions import QuadtreeGrid2D
+from csep.core.forecasts import GriddedForecast
+from csep.utils.time_utils import decimal_year_to_utc_epoch
+from csep.core.catalogs import CSEPCatalog
+
+
+
+
+

Load Training Catalog for Multi-resolution grid

+

We define a multi-resolution quadtree using an earthquake catalog. We load a training catalog in CSEP and use that catalog to create a multi-resolution grid. Sometimes we do not have the catalog in the exact format required by pyCSEP, so we can read the catalog using Pandas and convert it into a format acceptable to PyCSEP. Then we instantiate an object of class CSEPCatalog by calling the function csep.core.catalogs.CSEPCatalog.from_dataframe()

+
dfcat = pandas.read_csv('cat_train_2013.csv')
+column_name_mapper = {
+    'lon': 'longitude',
+    'lat': 'latitude',
+    'mag': 'magnitude',
+    'index': 'id'
+    }
+
+# maps the column names to the dtype expected by the catalog class
+dfcat = dfcat.reset_index().rename(columns=column_name_mapper)
+
+# create the origin_times from decimal years
+dfcat['origin_time'] = dfcat.apply(lambda row: decimal_year_to_utc_epoch(row.year), axis=1)
+
+# create catalog from dataframe
+catalog_train = CSEPCatalog.from_dataframe(dfcat)
+print(catalog_train)
+
+
+
Name: None
+
+Start Date: 1976-01-01 00:00:00+00:00
+End Date: 2013-01-01 00:00:00+00:00
+
+Latitude: (-77.16000366, 87.01999664)
+Longitude: (-180.0, 180.0)
+
+Min Mw: 5.150024414
+Max Mw: 9.08350563
+
+Event Count: 28465
+
+
+
+
+

Define Multi-resolution Gridded Region

+

Now we define a threshold for the maximum number of earthquakes allowed per cell, i.e. Nmax, and call csep.core.regions.QuadtreeGrid2D.from_catalog() to create a multi-resolution grid. For simplicity we assume only a single magnitude bin, i.e. all earthquakes with magnitude greater than or equal to 5.95.

+
mbins = numpy.array([5.95])
+Nmax = 25
+r_multi = QuadtreeGrid2D.from_catalog(catalog_train, Nmax, magnitudes=mbins)
+print('Number of cells in Multi-resolution grid :', r_multi.num_nodes)
+
+
+
Number of cells in Multi-resolution grid : 3502
+
+
+
+
+

Define Single-resolution Gridded Region

+

Here, as an example, we define a single-resolution grid at zoom level L=6. For this purpose we call csep.core.regions.QuadtreeGrid2D.from_single_resolution() to create a single-resolution grid.

+
# For simplicity of example, we assume only single magnitude bin,
+# i.e. all the earthquakes greater than and equal to 5.95
+
+mbins = numpy.array([5.95])
+r_single = QuadtreeGrid2D.from_single_resolution(6, magnitudes=mbins)
+print('Number of cells in Single-Resolution grid :', r_single.num_nodes)
+
+
+
Number of cells in Single-Resolution grid : 4096
+
+
+
+
+

Load forecast of multi-resolution grid

+

An example time-independent forecast has been created for this grid and is provided in the example forecast data set along with the main repository. We load the time-independent global forecast, which has a time horizon of 1 year. The filepath is relative to the root directory of the package. You can specify any file location for your forecasts.

+
forecast_data = numpy.loadtxt('example_rate_zoom=EQ10L11.csv')
+#Reshape forecast as Nx1 array
+forecast_data = forecast_data.reshape(-1,1)
+
+forecast_multi_grid = GriddedForecast(data = forecast_data, region = r_multi, magnitudes = mbins, name = 'Example Multi-res Forecast')
+
+#The loaded forecast is for 1 year. The test catalog we will use to evaluate is for 6 years. So we can rescale the forecast.
+print(f"expected event count before scaling: {forecast_multi_grid.event_count}")
+forecast_multi_grid.scale(6)
+print(f"expected event count after scaling: {forecast_multi_grid.event_count}")
+
+
+
expected event count before scaling: 116.18568954606255
+expected event count after scaling: 697.1141372763753
+
+
+
+
+

Load forecast of single-resolution grid

+

We have already created a time-independent global forecast with a time horizon of 1 year and provided it with the repository. The filepath is relative to the root directory of the package. You can specify any file location for your forecasts.

+
forecast_data = numpy.loadtxt('example_rate_zoom=6.csv')
+#Reshape forecast as Nx1 array
+forecast_data = forecast_data.reshape(-1,1)
+
+forecast_single_grid = GriddedForecast(data = forecast_data, region = r_single,
+                                   magnitudes = mbins, name = 'Example Single-res Forecast')
+
+# The loaded forecast is for 1 year. The test catalog we will use is for 6 years. So we can rescale the forecast.
+print(f"expected event count before scaling: {forecast_single_grid.event_count}")
+forecast_single_grid.scale(6)
+print(f"expected event count after scaling: {forecast_single_grid.event_count}")
+
+
+
expected event count before scaling: 116.18568954606256
+expected event count after scaling: 697.1141372763753
+
+
+
+
+

Load evaluation catalog

+

We have a test catalog stored here. We can read the test catalog as a pandas data frame and convert it into a format that is acceptable to PyCSEP. Then we instantiate a catalog object.

+
dfcat = pandas.read_csv('cat_test.csv')
+
+column_name_mapper = {
+    'lon': 'longitude',
+    'lat': 'latitude',
+    'mag': 'magnitude'
+    }
+
+# maps the column names to the dtype expected by the catalog class
+dfcat = dfcat.reset_index().rename(columns=column_name_mapper)
+# create the origin_times from decimal years
+dfcat['origin_time'] = dfcat.apply(lambda row: decimal_year_to_utc_epoch(row.year), axis=1)
+
+# create catalog from dataframe
+catalog = CSEPCatalog.from_dataframe(dfcat)
+print(catalog)
+
+
+
Name: None
+
+Start Date: 2014-01-01 00:00:00+00:00
+End Date: 2019-01-01 00:00:00+00:00
+
+Latitude: (-63.26, 74.39)
+Longitude: (-179.23, 179.66)
+
+Min Mw: 5.95047692260089
+Max Mw: 8.27271203001144
+
+Event Count: 651
+
+
+
+
+

Compute Poisson spatial test and Number test

+

Simply call the csep.core.poisson_evaluations.spatial_test() and csep.core.poisson_evaluations.number_test() functions to evaluate the forecast using the specified evaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose option prints the status of the simulations to the standard output.

+

Note: before we use the evaluation catalog, we need to link the gridded region to the observed catalog. Since we have two different grids here, we do this separately for each grid.

+
#For Multi-resolution grid, linking region to catalog.
+catalog.region = forecast_multi_grid.region
+spatial_test_multi_res_result = poisson.spatial_test(forecast_multi_grid, catalog)
+number_test_multi_res_result = poisson.number_test(forecast_multi_grid, catalog)
+
+
+#For Single-resolution grid, linking region to catalog.
+catalog.region = forecast_single_grid.region
+spatial_test_single_res_result = poisson.spatial_test(forecast_single_grid, catalog)
+number_test_single_res_result = poisson.number_test(forecast_single_grid, catalog)
+
+
+
+
+

Plot spatial test results

+

We provide the function csep.utils.plots.plot_poisson_consistency_test() to visualize the evaluation results from consistency tests.

+
stest_result = [spatial_test_single_res_result, spatial_test_multi_res_result]
+ax_spatial = plots.plot_poisson_consistency_test(stest_result,
+                                        plot_args={'xlabel': 'Spatial likelihood'})
+
+ntest_result = [number_test_single_res_result, number_test_multi_res_result]
+ax_number = plots.plot_poisson_consistency_test(ntest_result,
+                                        plot_args={'xlabel': 'Number of Earthquakes'})
+
+
+
[Figures: Poisson S-Test; Poisson N-Test]

Total running time of the script: (0 minutes 1.811 seconds)


Gallery generated by Sphinx-Gallery

\ No newline at end of file
diff --git a/tutorials/working_with_catalog_forecasts.html b/tutorials/working_with_catalog_forecasts.html new file mode 100644 index 00000000..2bafee72 --- /dev/null +++ b/tutorials/working_with_catalog_forecasts.html

Working with catalog-based forecasts — pyCSEP v0.6.3 documentation

Working with catalog-based forecasts

+

This example shows some basic interactions with data-based forecasts. We will load in a forecast stored in the CSEP data format, and compute the expected rates on a 0.1° x 0.1° grid covering the state of California. We will plot the expected rates in the spatial cells.

+
+
Overview:
  1. Define forecast properties (time horizon, spatial region, etc).
  2. Compute the expected rates in space and magnitude bins
  3. Plot expected rates in the spatial cells
+
+
+
+

Load required libraries

+

Most of the core functionality can be imported from the top-level csep package. Utilities are available from the csep.utils subpackage.

+
import numpy
+
+import csep
+from csep.core import regions
+from csep.utils import datasets
+
+
+
+
+

Load data forecast

+

PyCSEP contains some basic forecasts that can be used to test the functionality of the package. This forecast has already been filtered to the California RELM region. The loading step is sketched below.

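The loading call is not shown on this page. A minimal sketch, assuming the UCERF3-Landers example file shipped in csep.utils.datasets (the name ucerf3_ascii_format_landers_fname is inferred from the figure label further down):

# Load a catalog-based forecast stored in the CSEP ASCII format
# (dataset name is an assumption inferred from the figure label)
forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname)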
+
+

Define spatial and magnitude regions

+

Before we can compute the bin-wise rates, we need to define a spatial region and a set of magnitude bin edges. The magnitude bin edges are the lower bound (inclusive), except for the last bin, which is treated as extending to infinity. We can bind these to the forecast object. This can also be done by passing them as keyword arguments into csep.load_catalog_forecast().

+
# Magnitude bins properties
+min_mw = 4.95
+max_mw = 8.95
+dmw = 0.1
+
+# Create space and magnitude regions
+magnitudes = regions.magnitude_bins(min_mw, max_mw, dmw)
+region = regions.california_relm_region()
+
+# Bind region information to the forecast (this will be used for binning of the catalogs)
+forecast.region = regions.create_space_magnitude_region(region, magnitudes)
+
+
+
+
+

Compute spatial event counts

+

The csep.core.forecasts.CatalogForecast provides a method to compute the expected number of events in spatial cells. This requires a region with magnitude information.

+
_ = forecast.get_expected_rates(verbose=True)
+
+
+
Processed 1 catalogs in 0.001 seconds
+Processed 2 catalogs in 0.002 seconds
+Processed 3 catalogs in 0.003 seconds
+Processed 4 catalogs in 0.003 seconds
+Processed 5 catalogs in 0.004 seconds
+Processed 6 catalogs in 0.005 seconds
+Processed 7 catalogs in 0.005 seconds
+Processed 8 catalogs in 0.006 seconds
+Processed 9 catalogs in 0.007 seconds
+Processed 10 catalogs in 0.008 seconds
+Processed 20 catalogs in 0.014 seconds
+Processed 30 catalogs in 0.021 seconds
+Processed 40 catalogs in 0.028 seconds
+Processed 50 catalogs in 0.035 seconds
+Processed 60 catalogs in 0.042 seconds
+Processed 70 catalogs in 0.048 seconds
+Processed 80 catalogs in 0.054 seconds
+Processed 90 catalogs in 0.061 seconds
+Processed 100 catalogs in 0.067 seconds
+Processed 200 catalogs in 0.129 seconds
+Processed 300 catalogs in 0.194 seconds
+Processed 400 catalogs in 0.258 seconds
+Processed 500 catalogs in 0.351 seconds
+Processed 600 catalogs in 0.414 seconds
+Processed 700 catalogs in 0.478 seconds
+Processed 800 catalogs in 0.574 seconds
+Processed 900 catalogs in 0.636 seconds
+Processed 1000 catalogs in 0.699 seconds
+Processed 2000 catalogs in 1.473 seconds
+Processed 3000 catalogs in 2.205 seconds
+Processed 4000 catalogs in 2.973 seconds
+Processed 5000 catalogs in 3.749 seconds
+Processed 6000 catalogs in 4.513 seconds
+Processed 7000 catalogs in 5.284 seconds
+Processed 8000 catalogs in 6.000 seconds
+Processed 9000 catalogs in 6.814 seconds
+Processed 10000 catalogs in 7.559 seconds
+
+
+
+
+

Plot expected event counts

+

We can plot the expected event counts the same way that we plot a csep.core.forecasts.GriddedForecast.

+
ax = forecast.expected_rates.plot(plot_args={'clim': [-3.5, 0]}, show=True)
+
+
[Figure: ucerf3-landers]

The holes in the image are due to under-sampling from the forecast.

+
+
+

Quick sanity check

+

The forecasts were filtered to the spatial region, so all events should be binned. We loop through each catalog in the forecast, count the number of events, and compare that with the expected rates. The expected rate is an average in each space-magnitude bin, so we have to multiply this value by the number of catalogs in the forecast. A minimal sketch is given below.

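The code for this check is not shown on this page. A minimal sketch, assuming the CatalogForecast is iterable over its catalogs and exposes n_cat, and that expected_rates behaves like a GriddedForecast with an event_count property:

# Total number of events across all synthetic catalogs
total_event_count = 0
for catalog in forecast:
    total_event_count += catalog.event_count

# expected_rates stores the mean rate per space-magnitude bin,
# so multiply by the number of catalogs before comparing
expected_event_count = forecast.expected_rates.event_count * forecast.n_cat
print(total_event_count, expected_event_count)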

Total running time of the script: (0 minutes 9.227 seconds)


Gallery generated by Sphinx-Gallery

\ No newline at end of file