Pin numpy below v2.0.0 (#1783)
### What kind of change does this PR introduce?

* Pins NumPy below v2.0.0 until `xclim` can be adjusted to the new
API/ABI
* Adds the NumPy development branch to the upstream repositories

### Does this PR introduce a breaking change?

Yes, `numpy` is now pinned.

### Other information:

https://numpy.org/devdocs/release/2.0.0-notes.html
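
For context, a minimal illustration (not part of this PR, and not taken from `xclim` itself) of the kind of alias removal listed in those release notes; the four names below are a small, hand-picked selection from the NumPy 2.0 migration guide:

```python
import numpy as np

# NumPy 1.x aliases removed in 2.0 and their replacements. On numpy>=2.0.0 the
# old spellings raise AttributeError, which is why downstream code needs updating.
removed_aliases = {"float_": "float64", "NaN": "nan", "unicode_": "str_", "infty": "inf"}

for old, new in removed_aliases.items():
    status = "still available" if hasattr(np, old) else f"removed, use np.{new}"
    print(f"np.{old}: {status}")
```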
Zeitsperre authored Jun 17, 2024
2 parents b7374de + 61ca7a8 commit bbd3267
Showing 6 changed files with 6 additions and 13 deletions.
2 changes: 2 additions & 0 deletions CHANGES.rst
@@ -24,6 +24,7 @@ Breaking changes
- ``interp_calendar`` : Use ``Dataset.interp_calendar`` or ``xarray.coding.calendar_ops.interp_calendar`` instead.
- ``days_in_year`` : Use ``xarray.coding.calendar_ops._days_in_year`` instead.
- ``datetime_to_decimal_year`` : Use ``xarray.coding.calendar_ops._datetime_to_decimal_year`` instead.
+* `numpy` has been pinned below v2.0.0 until `xclim` can be updated to support the latest version. (:pull:`1783`).

Internal changes
^^^^^^^^^^^^^^^^
@@ -36,6 +37,7 @@ Internal changes
Bug fixes
^^^^^^^^^
* ``xclim.indices.{cold|hot}_spell_total_length`` now properly uses the argument `window` to only count spells with at least `window` time steps. (:issue:`1765`, :pull:`1777`).
+* Addressed an error in ``xclim.ensembles._filters._concat_hist`` where remnants of a scenario selection were not being dropped properly. (:pull:`1780`).

v0.49.0 (2024-05-02)
--------------------
2 changes: 1 addition & 1 deletion environment.yml
@@ -13,7 +13,7 @@ dependencies:
- dask >=2.6.0
- jsonpickle
- numba
-- numpy >=1.20.0
+- numpy >=1.20.0,<2.0.0
- pandas >=2.2.0
- pint >=0.10,<0.24
- poppler >=0.67
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -40,7 +40,7 @@ dependencies = [
"dask[array]>=2.6",
"jsonpickle",
"numba",
"numpy>=1.20.0",
"numpy>=1.20.0,<2.0.0",
"pandas>=2.2",
"pint>=0.10,<0.24",
"pyarrow", # Strongly encouraged for pandas v2.2.0+
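
As a quick sanity check, a small sketch (assuming `packaging` is installed, as it already is for the test suite) that an environment resolved from either `environment.yml` or `pyproject.toml` honours the new constraint:

```python
from importlib.metadata import version

from packaging.version import Version

# With the pin in place, dependency resolution should never pull in NumPy 2.x.
installed = Version(version("numpy"))
assert Version("1.20.0") <= installed < Version("2.0.0"), f"unexpected numpy {installed}"
```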
1 change: 1 addition & 0 deletions requirements_upstream.txt
@@ -2,4 +2,5 @@ bottleneck @ git+https://github.com/pydata/bottleneck.git@master
cftime @ git+https://github.com/Unidata/cftime.git@master
flox @ git+https://github.com/xarray-contrib/flox.git@main
numba @ git+https://github.com/numba/numba.git@main
+numpy @ git+https://github.com/numpy/numpy.git@main
xarray @ git+https://github.com/pydata/xarray.git@main
10 changes: 0 additions & 10 deletions tests/test_partitioning.py
@@ -1,24 +1,14 @@
from __future__ import annotations

-import warnings

import numpy as np
import pytest
-import xarray as xr
-from packaging.version import Version

from xclim.ensembles import fractional_uncertainty, hawkins_sutton, lafferty_sriver
from xclim.ensembles._filters import _concat_hist, _model_in_all_scens, _single_member


-# FIXME: Investigate why _concat_hist() fails on xarray 2024.5.0
def test_hawkins_sutton_smoke(open_dataset):
"""Just a smoke test."""
if Version(xr.__version__) == Version("2024.5.0"):
pytest.skip("xarray 2024.5.0 does not support `_concat_hist()` here.")
if Version(xr.__version__) > Version("2024.5.0"):
warnings.warn("FIXME: Remove this warning if this test is passing.")

dims = {"run": "member", "scen": "scenario"}
da = (
open_dataset("uncertainty_partitioning/cmip5_pr_global_mon.nc")
2 changes: 1 addition & 1 deletion xclim/ensembles/_filters.py
@@ -50,7 +50,7 @@ def _concat_hist(da: xr.DataArray, **hist) -> xr.DataArray:
    ((dim, _),) = hist.items()

    # Select historical scenario and drop it from the data
-    h = da.sel(**hist).dropna("time", how="all")
+    h = da.sel(drop=True, **hist).dropna("time", how="all")
    ens = da.drop_sel(**hist)

    index = ens[dim]
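
To make the one-line fix above concrete, here is a standalone sketch (toy data, not from the `xclim` test suite) of what `drop=True` changes: without it, the selected scenario label lingers as a scalar coordinate, which is the remnant the changelog entry refers to.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    dims=("scen", "time"),
    coords={"scen": ["historical", "ssp245"], "time": [0, 1, 2]},
)

# Plain .sel() keeps the selected label around as a scalar ``scen`` coordinate...
assert "scen" in da.sel(scen="historical").coords

# ...while drop=True (as in the patched ``_concat_hist``) removes the remnant.
assert "scen" not in da.sel(scen="historical", drop=True).coords
```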
