# Drop Python 3.9 support (#16637)
Contributes to rapidsai/build-planning#88

Finishes the work of dropping Python 3.9 support.

This project stopped building and testing against Python 3.9 as of rapidsai/shared-workflows#235.
This PR updates configuration and docs to reflect that.

## Notes for Reviewers

### How I tested this

Checked that there were no remaining references to Python 3.9, using searches like these:

```shell
git grep -E '3\.9'
git grep '39'
git grep 'py39'
```

And ran similar searches for variations on Python 3.8 (e.g. `git grep -E '3\.8'`, `git grep '38'`, `git grep 'py38'`), to catch anything missed the last time a Python version was dropped.

Authors:
  - James Lamb (https://github.com/jameslamb)

Approvers:
  - Bradley Dice (https://github.com/bdice)
  - Lawrence Mitchell (https://github.com/wence-)

URL: #16637
jameslamb authored Aug 27, 2024
1 parent 115ddce commit efa9770
Showing 15 changed files with 47 additions and 39 deletions.

`README.md` (1 addition, 1 deletion)

```diff
@@ -89,7 +89,7 @@ conda install -c rapidsai -c conda-forge -c nvidia \
 We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD
 of our latest development branch.
 
-Note: cuDF is supported only on Linux, and with Python versions 3.9 and later.
+Note: cuDF is supported only on Linux, and with Python versions 3.10 and later.
 
 See the [RAPIDS installation guide](https://docs.rapids.ai/install) for more OS and version info.
 
```

`conda/environments/all_cuda-118_arch-x86_64.yaml` (1 addition, 1 deletion)

```diff
@@ -76,7 +76,7 @@ dependencies:
 - pytest-xdist
 - pytest<8
 - python-confluent-kafka>=1.9.0,<1.10.0a0
-- python>=3.9,<3.12
+- python>=3.10,<3.12
 - pytorch>=2.1.0
 - rapids-build-backend>=0.3.0,<0.4.0.dev0
 - rapids-dask-dependency==24.10.*,>=0.0.0a0
```

`conda/environments/all_cuda-125_arch-x86_64.yaml` (1 addition, 1 deletion)

```diff
@@ -74,7 +74,7 @@ dependencies:
 - pytest-xdist
 - pytest<8
 - python-confluent-kafka>=1.9.0,<1.10.0a0
-- python>=3.9,<3.12
+- python>=3.10,<3.12
 - pytorch>=2.1.0
 - rapids-build-backend>=0.3.0,<0.4.0.dev0
 - rapids-dask-dependency==24.10.*,>=0.0.0a0
```

`cpp/cmake/thirdparty/get_arrow.cmake` (1 addition, 1 deletion)

```diff
@@ -45,7 +45,7 @@ function(find_libarrow_in_python_wheel PYARROW_VERSION)
     APPEND
     initial_code_block
     [=[
-find_package(Python 3.9 REQUIRED COMPONENTS Interpreter)
+find_package(Python 3.10 REQUIRED COMPONENTS Interpreter)
 execute_process(
   COMMAND "${Python_EXECUTABLE}" -c "import pyarrow; print(pyarrow.get_library_dirs()[0])"
   OUTPUT_VARIABLE CUDF_PYARROW_WHEEL_DIR
```

`dependencies.yaml` (1 addition, 5 deletions)

```diff
@@ -584,10 +584,6 @@ dependencies:
     specific:
       - output_types: conda
         matrices:
-          - matrix:
-              py: "3.9"
-            packages:
-              - python=3.9
           - matrix:
               py: "3.10"
             packages:
@@ -598,7 +594,7 @@ dependencies:
               - python=3.11
           - matrix:
             packages:
-              - python>=3.9,<3.12
+              - python>=3.10,<3.12
   run_common:
     common:
       - output_types: [conda, requirements, pyproject]
```

`python/cudf/pyproject.toml` (1 addition, 2 deletions)

```diff
@@ -16,7 +16,7 @@ authors = [
     { name = "NVIDIA Corporation" },
 ]
 license = { text = "Apache 2.0" }
-requires-python = ">=3.9"
+requires-python = ">=3.10"
 dependencies = [
     "cachetools",
     "cubinlinker",
@@ -42,7 +42,6 @@ classifiers = [
     "Topic :: Scientific/Engineering",
     "License :: OSI Approved :: Apache Software License",
     "Programming Language :: Python",
-    "Programming Language :: Python :: 3.9",
     "Programming Language :: Python :: 3.10",
     "Programming Language :: Python :: 3.11",
 ]
```

`python/cudf_kafka/pyproject.toml` (1 addition, 1 deletion)

```diff
@@ -16,7 +16,7 @@ authors = [
     { name = "NVIDIA Corporation" },
 ]
 license = { text = "Apache 2.0" }
-requires-python = ">=3.9"
+requires-python = ">=3.10"
 dependencies = [
     "cudf==24.10.*,>=0.0.0a0",
 ] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
```

`python/cudf_polars/cudf_polars/containers/dataframe.py` (8 additions, 5 deletions)

```diff
@@ -105,7 +105,9 @@ def from_polars(cls, df: pl.DataFrame) -> Self:
         return cls(
             [
                 NamedColumn(column, h_col.name).copy_metadata(h_col)
-                for column, h_col in zip(d_table.columns(), df.iter_columns())
+                for column, h_col in zip(
+                    d_table.columns(), df.iter_columns(), strict=True
+                )
             ]
         )
 
@@ -134,8 +136,10 @@ def from_table(cls, table: plc.Table, names: Sequence[str]) -> Self:
         if table.num_columns() != len(names):
             raise ValueError("Mismatching name and table length.")
         return cls(
-            # TODO: strict=True when we drop py39
-            [NamedColumn(c, name) for c, name in zip(table.columns(), names)]
+            [
+                NamedColumn(c, name)
+                for c, name in zip(table.columns(), names, strict=True)
+            ]
         )
 
     def sorted_like(
@@ -165,8 +169,7 @@ def sorted_like(
         subset = self.column_names_set if subset is None else subset
         self.columns = [
             c.sorted_like(other) if c.name in subset else c
-            # TODO: strict=True when we drop py39
-            for c, other in zip(self.columns, like.columns)
+            for c, other in zip(self.columns, like.columns, strict=True)
         ]
         return self
 
```

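Reviewer aside: the deleted `# TODO: strict=True when we drop py39` comments above are resolved by passing `strict=True` directly, since `zip()` only accepts the `strict` keyword from Python 3.10 onward (PEP 618). A minimal sketch of the behavior difference, using illustrative names rather than project code:

```python
# zip() truncates to the shortest input by default; strict=True (Python 3.10+,
# PEP 618) raises ValueError on a length mismatch instead of dropping data.
names = ["a", "b", "c"]
values = [1, 2]  # one element short

assert list(zip(names, values)) == [("a", 1), ("b", 2)]  # "c" silently dropped

try:
    list(zip(names, values, strict=True))
except ValueError as exc:
    print(exc)  # zip() argument 2 is shorter than argument 1
```
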
`python/cudf_polars/cudf_polars/dsl/ir.py` (18 additions, 9 deletions)

```diff
@@ -310,7 +310,8 @@ def evaluate(self, *, cache: MutableMapping[int, DataFrame]) -> DataFrame:
                 *(
                     (piece.tbl, piece.column_names(include_children=False))
                     for piece in pieces
-                )
+                ),
+                strict=True,
             )
             df = DataFrame.from_table(
                 plc.concatenate.concatenate(list(tables)),
@@ -426,7 +427,8 @@ def evaluate(self, *, cache: MutableMapping[int, DataFrame]) -> DataFrame:
             pdf = pdf.select(self.projection)
         df = DataFrame.from_polars(pdf)
         assert all(
-            c.obj.type() == dtype for c, dtype in zip(df.columns, self.schema.values())
+            c.obj.type() == dtype
+            for c, dtype in zip(df.columns, self.schema.values(), strict=True)
         )
         if self.predicate is not None:
             (mask,) = broadcast(self.predicate.evaluate(df), target_length=df.num_rows)
@@ -600,9 +602,10 @@ def evaluate(self, *, cache: MutableMapping[int, DataFrame]) -> DataFrame:
         for i, table in enumerate(raw_tables):
             (column,) = table.columns()
             raw_columns.append(NamedColumn(column, f"tmp{i}"))
-        mapping = dict(zip(replacements, raw_columns))
+        mapping = dict(zip(replacements, raw_columns, strict=True))
         result_keys = [
-            NamedColumn(gk, k.name) for gk, k in zip(group_keys.columns(), keys)
+            NamedColumn(gk, k.name)
+            for gk, k in zip(group_keys.columns(), keys, strict=True)
         ]
         result_subs = DataFrame(raw_columns)
         results = [
@@ -752,7 +755,9 @@ def evaluate(self, *, cache: MutableMapping[int, DataFrame]) -> DataFrame:
             columns = plc.join.cross_join(left.table, right.table).columns()
             left_cols = [
                 NamedColumn(new, old.name).sorted_like(old)
-                for new, old in zip(columns[: left.num_columns], left.columns)
+                for new, old in zip(
+                    columns[: left.num_columns], left.columns, strict=True
+                )
             ]
             right_cols = [
                 NamedColumn(
@@ -761,7 +766,9 @@ def evaluate(self, *, cache: MutableMapping[int, DataFrame]) -> DataFrame:
                     if old.name not in left.column_names_set
                     else f"{old.name}{suffix}",
                 )
-                for new, old in zip(columns[left.num_columns :], right.columns)
+                for new, old in zip(
+                    columns[left.num_columns :], right.columns, strict=True
+                )
             ]
             return DataFrame([*left_cols, *right_cols])
         # TODO: Waiting on clarity based on https://github.com/pola-rs/polars/issues/17184
@@ -803,6 +810,7 @@ def evaluate(self, *, cache: MutableMapping[int, DataFrame]) -> DataFrame:
                     for left_col, right_col in zip(
                         left.select_columns(left_on.column_names_set),
                         right.select_columns(right_on.column_names_set),
+                        strict=True,
                     )
                 )
             )
@@ -909,7 +917,7 @@ def evaluate(self, *, cache: MutableMapping[int, DataFrame]) -> DataFrame:
         result = DataFrame(
             [
                 NamedColumn(c, old.name).sorted_like(old)
-                for c, old in zip(table.columns(), df.columns)
+                for c, old in zip(table.columns(), df.columns, strict=True)
             ]
         )
         if keys_sorted or self.stable:
@@ -974,7 +982,8 @@ def evaluate(self, *, cache: MutableMapping[int, DataFrame]) -> DataFrame:
             self.null_order,
         )
         columns = [
-            NamedColumn(c, old.name) for c, old in zip(table.columns(), df.columns)
+            NamedColumn(c, old.name)
+            for c, old in zip(table.columns(), df.columns, strict=True)
         ]
         # If a sort key is in the result table, set the sortedness property
         for k, i in enumerate(keys_in_result):
@@ -1089,7 +1098,7 @@ def evaluate(self, *, cache: MutableMapping[int, DataFrame]) -> DataFrame:
             # final tag is "swapping" which is useful for the
             # optimiser (it blocks some pushdown operations)
             old, new, _ = self.options
-            return df.rename_columns(dict(zip(old, new)))
+            return df.rename_columns(dict(zip(old, new, strict=True)))
         elif self.name == "explode":
             df = self.df.evaluate(cache=cache)
             ((to_explode,),) = self.options
```

`python/cudf_polars/cudf_polars/typing/__init__.py` (1 addition, 3 deletions)

```diff
@@ -13,9 +13,7 @@
 from polars.polars import _expr_nodes as pl_expr, _ir_nodes as pl_ir
 
 if TYPE_CHECKING:
-    from typing import Callable
-
-    from typing_extensions import TypeAlias
+    from typing import Callable, TypeAlias
 
     import polars as pl
 
```

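Reviewer aside: `TypeAlias` was added to the standard library's `typing` module in Python 3.10 (PEP 613), so the `typing_extensions` fallback import can go away. A minimal sketch, with an illustrative alias rather than this module's real definitions:

```python
# On Python 3.10+, TypeAlias is importable directly from the standard library.
from typing import TypeAlias

# Explicitly marks ColumnMapping as a type alias, not a module-level constant.
ColumnMapping: TypeAlias = dict[str, list[int]]
```
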
`python/cudf_polars/cudf_polars/utils/sorting.py` (1 addition, 1 deletion)

```diff
@@ -45,7 +45,7 @@ def sort_order(
     null_precedence = []
     if len(descending) != len(nulls_last) or len(descending) != num_keys:
         raise ValueError("Mismatching length of arguments in sort_order")
-    for asc, null_last in zip(column_order, nulls_last):
+    for asc, null_last in zip(column_order, nulls_last, strict=True):
         if (asc == plc.types.Order.ASCENDING) ^ (not null_last):
             null_precedence.append(plc.types.NullOrder.AFTER)
         elif (asc == plc.types.Order.ASCENDING) ^ null_last:
```

`python/cudf_polars/pyproject.toml` (9 additions, 3 deletions)

```diff
@@ -17,7 +17,7 @@ authors = [
     { name = "NVIDIA Corporation" },
 ]
 license = { text = "Apache 2.0" }
-requires-python = ">=3.9"
+requires-python = ">=3.10"
 dependencies = [
     "polars>=1.0,<1.3",
     "pylibcudf==24.10.*,>=0.0.0a0",
@@ -28,7 +28,6 @@ classifiers = [
     "Topic :: Scientific/Engineering",
     "License :: OSI Approved :: Apache Software License",
     "Programming Language :: Python",
-    "Programming Language :: Python :: 3.9",
     "Programming Language :: Python :: 3.10",
     "Programming Language :: Python :: 3.11",
 ]
@@ -62,7 +61,7 @@ exclude_also = [
 [tool.ruff]
 line-length = 88
 indent-width = 4
-target-version = "py39"
+target-version = "py310"
 fix = true
 
 [tool.ruff.lint]
@@ -115,6 +114,9 @@ ignore = [
     "TD003", # Missing issue link on the line following this TODO
     # tryceratops
     "TRY003", # Avoid specifying long messages outside the exception class
+    # pyupgrade
+    "UP035", # Import from `collections.abc` instead: `Callable`
+    "UP038", # Use `X | Y` in `isinstance` call instead of `(X, Y)`
     # Lints below are turned off because of conflicts with the ruff
     # formatter
     # See https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
@@ -137,6 +139,10 @@ fixable = ["ALL"]
 
 [tool.ruff.lint.per-file-ignores]
 "**/tests/**/*.py" = ["D"]
+"**/cudf_polars/typing/__init__.py" = [
+    # pyupgrade
+    "UP007", # Use `X | Y` for type annotations
+]
 
 [tool.ruff.lint.flake8-pytest-style]
 # https://docs.astral.sh/ruff/settings/#lintflake8-pytest-style
```

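Reviewer aside: bumping ruff's `target-version` to `py310` lets the pyupgrade rules assume the new baseline. For instance UP007, ignored above for `typing/__init__.py`, rewrites `Optional`/`Union` annotations into PEP 604 unions, which Python 3.10+ accepts at runtime. A minimal sketch with a hypothetical function, not project code:

```python
# PEP 604 union syntax (Python 3.10+). UP007 flags the equivalent
# typing.Optional[...] / typing.Union[...] spellings and suggests this form.
def first_or_none(values: list[int] | None) -> int | None:
    # Return the first element, or None when the list is missing or empty.
    return values[0] if values else None

print(first_or_none([3, 1, 2]))  # 3
print(first_or_none(None))       # None
```
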
`python/custreamz/pyproject.toml` (1 addition, 2 deletions)

```diff
@@ -17,7 +17,7 @@ authors = [
     { name = "NVIDIA Corporation" },
 ]
 license = { text = "Apache 2.0" }
-requires-python = ">=3.9"
+requires-python = ">=3.10"
 dependencies = [
     "confluent-kafka>=1.9.0,<1.10.0a0",
     "cudf==24.10.*,>=0.0.0a0",
@@ -31,7 +31,6 @@ classifiers = [
     "Topic :: Apache Kafka",
     "License :: OSI Approved :: Apache Software License",
     "Programming Language :: Python",
-    "Programming Language :: Python :: 3.9",
     "Programming Language :: Python :: 3.10",
     "Programming Language :: Python :: 3.11",
 ]
```

`python/dask_cudf/pyproject.toml` (1 addition, 2 deletions)

```diff
@@ -17,7 +17,7 @@ authors = [
     { name = "NVIDIA Corporation" },
 ]
 license = { text = "Apache 2.0" }
-requires-python = ">=3.9"
+requires-python = ">=3.10"
 dependencies = [
     "cudf==24.10.*,>=0.0.0a0",
     "cupy-cuda11x>=12.0.0",
@@ -32,7 +32,6 @@ classifiers = [
     "Topic :: Scientific/Engineering",
     "License :: OSI Approved :: Apache Software License",
     "Programming Language :: Python",
-    "Programming Language :: Python :: 3.9",
     "Programming Language :: Python :: 3.10",
     "Programming Language :: Python :: 3.11",
 ]
```

`python/pylibcudf/pyproject.toml` (1 addition, 2 deletions)

```diff
@@ -16,7 +16,7 @@ authors = [
     { name = "NVIDIA Corporation" },
 ]
 license = { text = "Apache 2.0" }
-requires-python = ">=3.9"
+requires-python = ">=3.10"
 dependencies = [
     "cuda-python>=11.7.1,<12.0a0",
     "libcudf==24.10.*,>=0.0.0a0",
@@ -32,7 +32,6 @@ classifiers = [
     "Topic :: Scientific/Engineering",
     "License :: OSI Approved :: Apache Software License",
     "Programming Language :: Python",
-    "Programming Language :: Python :: 3.9",
     "Programming Language :: Python :: 3.10",
     "Programming Language :: Python :: 3.11",
 ]
```
