T-test: deal with corner cases that raise RuntimeWarnings #225

Open
mherrmann3 opened this issue May 20, 2023 · 1 comment
Comments

mherrmann3 (Contributor) commented May 20, 2023

I encountered four corner cases that raise RuntimeWarnings in the T-test:

1. When only one target event has occurred, the variance (and thus the confidence intervals) cannot be determined

csep.poisson_evaluations._t_test_ndarray([0.01], [0.015], n_obs=1, n_f1=1.0, n_f2=1.5)

will raise:

RuntimeWarning: invalid value encountered in double_scalars
  first_term = (numpy.sum(numpy.power((X1 - X2), 2))) / (N - 1)
RuntimeWarning: invalid value encountered in double_scalars
  second_term = numpy.power(numpy.sum(X1 - X2), 2) / (numpy.power(N, 2) - N)
RuntimeWarning: invalid value encountered in double_scalars
  forecast_variance = first_term - second_term

and (correctly) return:

{'t_statistic': nan,
 't_critical': nan,
 'information_gain': 0.5,
 'ig_lower': nan,
 'ig_upper': nan}

Suggested solution to avoid the RuntimeWarnings: return exactly this result if N == 1 before calculating the remaining properties.
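A minimal sketch of that early exit (`_single_event_result` is a hypothetical helper name; `N` and `information_gain` refer to the function's internal names visible in the tracebacks above, and I'm assuming the information gain itself is computed before the variance terms, where it is still well defined for a single event):

```python
import numpy

def _single_event_result(information_gain):
    """Sketch: result for N == 1, where the sample variance is undefined."""
    return {'t_statistic': numpy.nan,
            't_critical': numpy.nan,
            'information_gain': information_gain,
            'ig_lower': numpy.nan,
            'ig_upper': numpy.nan}

# Hypothetical use inside _t_test_ndarray, after computing information_gain
# but before the variance terms that emit the RuntimeWarnings:
# if N == 1:
#     return _single_event_result(information_gain)
```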


2. When one (or both) models contain one or more zero (or negative) target event rates, the log-rates are undefined

csep.poisson_evaluations._t_test_ndarray([0, 0.01], [0.015, 0.016], n_obs=2, n_f1=2.0, n_f2=3.0)

will raise:

RuntimeWarning: divide by zero encountered in log
  X1 = numpy.log(target_event_rates1)  # Log of every element of Forecast 1
RuntimeWarning: invalid value encountered in double_scalars
  forecast_variance = first_term - second_term

and return:

{'t_statistic': nan,
 't_critical': 12.706204736432095,
 'information_gain': -inf,
 'ig_lower': nan,
 'ig_upper': nan}

(If any target event rate is negative, only the first RuntimeWarning is issued and 'information_gain' becomes nan.)

It's no surprise that zero (or even negative) rates cause issues in LL scores, but I believe pyCSEP does not generally check for, handle, or warn about such rates in LL-based tests, except in binomial_evaluations. See issue #226 for more on that.

Be aware that the exclusion approach used in binomial_evaluations could trigger another problematic case: if all target event rates are ≤ 0, the target event array will be empty (see case 3 below).

The preferred solution is to solve issue #226. Otherwise, a simpler solution to deal with rates ≤ 0 solely in this function: return a dict containing only np.nans if (target_event_rates1 <= 0).any() or (target_event_rates2 <= 0).any() at the beginning of the function, as sketched below. But this solution may be undesirable, since a single zero rate would invalidate the whole assessment.
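A sketch of that guard (`_all_rates_positive` is a hypothetical helper name):

```python
import numpy

def _all_rates_positive(target_event_rates1, target_event_rates2):
    """Sketch: logs are only defined for strictly positive rates."""
    rates1 = numpy.asarray(target_event_rates1)
    rates2 = numpy.asarray(target_event_rates2)
    return not ((rates1 <= 0).any() or (rates2 <= 0).any())

# Hypothetical guard at the beginning of _t_test_ndarray:
# if not _all_rates_positive(target_event_rates1, target_event_rates2):
#     return {key: numpy.nan for key in
#             ('t_statistic', 't_critical', 'information_gain',
#              'ig_lower', 'ig_upper')}
```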


3. When both models (or only one) have an empty array of target event rates, the variance is zero and the T-statistic is undefined (or the operation is impossible)

Subcase 3.1: both arrays are empty or one contains only a single rate:
csep.poisson_evaluations._t_test_ndarray([], [], n_obs=2, n_f1=2.0, n_f2=3.0)

or

csep.poisson_evaluations._t_test_ndarray([], [0.01], n_obs=2, n_f1=2.0, n_f2=3.0)

will raise:

RuntimeWarning: divide by zero encountered in double_scalars
  t_statistic = information_gain / (forecast_std / numpy.sqrt(N))

and return an unexpected result:

{'t_statistic': inf,
 't_critical': 12.706204736432095,
 'information_gain': 0.5,  # <-- weird to have an information gain without target events
 'ig_lower': 0.5,  # (also weird)
 'ig_upper': 0.5}  # (also weird)
Subcase 3.2: only one array is empty and the other has more than one entry
csep.poisson_evaluations._t_test_ndarray([], [0.01, 0.01], n_obs=2, n_f1=2.0, n_f2=3.0)

will cause:

ValueError: operands could not be broadcast together with shapes (0,) (2,)  

I believe neither subcase is currently a problem when performing the T-test via csep.poisson_evaluations.paired_t_test (because every target bin of a csep.core.forecasts.GriddedForecast object should have an associated rate, even if it is ≤ 0). But I used _t_test_ndarray in custom testing routines. Of course, it is a private-like method that is only supposed to be used internally by pyCSEP, so we may not care about this case; still, the result is quite unexpected and we may want to avoid it.

Again, be aware that these subcases (especially 3.2) may occur if employing an exclusion of forecast rates ≤ 0 as currently used in binomial_evaluations (see case 2 above and issue #226).

Suggested solution: return a dict containing only np.nans if not (target_event_rates1.size and target_event_rates2.size) at the beginning of the function.
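A sketch of that guard (`_both_nonempty` is a hypothetical helper name):

```python
import numpy

def _both_nonempty(target_event_rates1, target_event_rates2):
    """Sketch: the paired test needs at least one rate from each forecast."""
    return bool(numpy.asarray(target_event_rates1).size
                and numpy.asarray(target_event_rates2).size)

# Hypothetical guard at the beginning of _t_test_ndarray:
# if not _both_nonempty(target_event_rates1, target_event_rates2):
#     return {key: numpy.nan for key in
#             ('t_statistic', 't_critical', 'information_gain',
#              'ig_lower', 'ig_upper')}
```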


4. When, for more than one target event, all target event rates of each model are identical, the variance is zero and the T-statistic is undefined

csep.poisson_evaluations._t_test_ndarray([0.01, 0.01], [0.015, 0.015], n_obs=2, n_f1=2.0, n_f2=3.0)

will raise:

RuntimeWarning: divide by zero encountered in double_scalars
  t_statistic = information_gain / (forecast_std / numpy.sqrt(N))

and return:

{'t_statistic': inf,
 't_critical': 12.706204736432095,
 'information_gain': 0.09453489189183628,
 'ig_lower': 0.09453489189183628,
 'ig_upper': 0.09453489189183628}

This will happen, for instance, if all target events occur in the same bin.

Suggested solution: only calculate t_statistic if forecast_variance (or forecast_std) is nonzero; otherwise set it to np.nan. In that case, ig_lower and ig_upper should also be np.nan.
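A sketch of that guard (`_t_stats_or_nan` is a hypothetical helper name; the t-statistic is formed as in the traceback above, and I'm assuming the CI bounds are information_gain ± t_critical * forecast_std / sqrt(N), consistent with the degenerate outputs above):

```python
import numpy

def _t_stats_or_nan(information_gain, forecast_variance, N, t_critical):
    """Sketch: only form the t-statistic and CI for a positive variance."""
    if forecast_variance > 0:
        forecast_std = numpy.sqrt(forecast_variance)
        t_statistic = information_gain / (forecast_std / numpy.sqrt(N))
        half_width = t_critical * forecast_std / numpy.sqrt(N)
        return (t_statistic,
                information_gain - half_width,
                information_gain + half_width)
    return numpy.nan, numpy.nan, numpy.nan
```

Note that the `forecast_variance > 0` check also covers a nan variance (nan comparisons are False), so case 4 would then yield (nan, nan, nan) instead of (inf, information_gain, information_gain).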

mherrmann3 changed the title from "T-test: deal with corner cases that raise RuntimeWarnings (+ how to generally deal with forecast rates ≤ 0?)" to "T-test: deal with corner cases that raise RuntimeWarnings (+ how to deal with forecast rates ≤ 0 in any LL-based test?)" on May 20, 2023
mherrmann3 changed the title to "T-test: deal with corner cases that raise RuntimeWarnings" on May 21, 2023
pabloitu (Collaborator) commented
nice work here!! Yes, we gotta address this. Maybe before the release?
