Fix CI failing tests (#416)
* Use auto mode for asyncio tests

* Drop testing Python 3.8

* Set build.os

* Set build options for readthedocs

* Switch everywhere to 3.9
lukaszo authored Oct 31, 2023
1 parent ef21317 commit afe8e56
Showing 12 changed files with 25 additions and 57 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/ci.yml
@@ -9,7 +9,7 @@ jobs:
       fail-fast: true
       matrix:
         os: ["ubuntu-latest"]
-        python-version: ["3.8", "3.9", "3.10"]
+        python-version: ["3.9", "3.10"]

     steps:
       - name: Checkout source
@@ -50,7 +50,7 @@ jobs:
         uses: conda-incubator/setup-miniconda@v2
         with:
           miniconda-version: "latest"
-          python-version: "3.8"
+          python-version: "3.9"

       - name: Run import tests
         shell: bash -l {0}
4 changes: 2 additions & 2 deletions .github/workflows/release.yml
@@ -10,10 +10,10 @@ jobs:
       - name: Checkout source
         uses: actions/checkout@v2

-      - name: Set up Python 3.8
+      - name: Set up Python 3.9
         uses: actions/setup-python@v1
         with:
-          python-version: 3.8
+          python-version: 3.9

       - name: Install pypa/build
         run: python -m pip install build wheel
6 changes: 5 additions & 1 deletion .readthedocs.yml
@@ -6,7 +6,6 @@ sphinx:
 formats: all

 python:
-  version: "3.8"
   install:
     - method: pip
       path: .
@@ -16,3 +15,8 @@ python:

 submodules:
   include: all
+
+build:
+  os: ubuntu-22.04
+  tools:
+    python: "3"
38 changes: 0 additions & 38 deletions ci/environment-3.8.yml

This file was deleted.

4 changes: 2 additions & 2 deletions ci/scripts/test_imports.sh
@@ -3,9 +3,9 @@ set -o errexit


 test_import () {
-    echo "Create environment: python=3.8 $1"
+    echo "Create environment: python=3.9 $1"
     # Create an empty environment
-    conda create -q -y -n test-imports -c conda-forge python=3.8
+    conda create -q -y -n test-imports -c conda-forge python=3.9
     conda activate test-imports
     pip install -e .[$1]
     echo "python -c '$2'"
6 changes: 3 additions & 3 deletions dask_cloudprovider/aws/tests/test_ec2.py
@@ -48,7 +48,7 @@ async def cluster_rapids():
         # Deep Learning AMI (Ubuntu 18.04)
         ami="ami-0c7c7d78f752f8f17",
         # Python version must match local version and CUDA version must match AMI CUDA version
-        docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.8",
+        docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.9",
         instance_type="p3.2xlarge",
         bootstrap=False,
         filesystem_size=120,
@@ -65,7 +65,7 @@ async def cluster_rapids_packer():
         # Packer AMI
         ami="ami-04e5539cb82859e69",
         # Python version must match local version and CUDA version must match AMI CUDA version
-        docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.8",
+        docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.9",
         instance_type="p3.2xlarge",
         bootstrap=False,
         filesystem_size=120,
@@ -202,7 +202,7 @@ async def test_get_cloud_init_rapids():
         # Deep Learning AMI (Ubuntu 18.04)
         ami="ami-0c7c7d78f752f8f17",
         # Python version must match local version and CUDA version must match AMI CUDA version
-        docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.8",
+        docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.9",
         instance_type="p3.2xlarge",
         bootstrap=False,
         filesystem_size=120,
2 changes: 1 addition & 1 deletion dask_cloudprovider/azure/azurevm.py
@@ -433,7 +433,7 @@ class AzureVMCluster(VMCluster):
     ...     security_group="<security group>",
     ...     n_workers=1,
     ...     vm_size="Standard_NC12s_v3", # Or any NVIDIA GPU enabled size
-    ...     docker_image="rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.8",
+    ...     docker_image="rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.9",
     ...     worker_class="dask_cuda.CUDAWorker")
     >>> from dask.distributed import Client
     >>> client = Client(cluster)
2 changes: 1 addition & 1 deletion dask_cloudprovider/azure/tests/test_azurevm.py
@@ -87,7 +87,7 @@ async def test_create_rapids_cluster_sync():

     with AzureVMCluster(
         vm_size="Standard_NC12s_v3",
-        docker_image="rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.8",
+        docker_image="rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.9",
         worker_class="dask_cuda.CUDAWorker",
         worker_options={"rmm_pool_size": "15GB"},
     ) as cluster:
4 changes: 2 additions & 2 deletions dask_cloudprovider/gcp/tests/test_gcp.py
@@ -125,7 +125,7 @@ async def test_create_rapids_cluster():
         filesystem_size=50,
         ngpus=2,
         gpu_type="nvidia-tesla-t4",
-        docker_image="rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.8",
+        docker_image="rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.9",
         worker_class="dask_cuda.CUDAWorker",
         worker_options={"rmm_pool_size": "15GB"},
         asynchronous=True,
@@ -168,7 +168,7 @@ def test_create_rapids_cluster_sync():
         filesystem_size=50,
         ngpus=2,
         gpu_type="nvidia-tesla-t4",
-        docker_image="rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.8",
+        docker_image="rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.9",
         worker_class="dask_cuda.CUDAWorker",
         worker_options={"rmm_pool_size": "15GB"},
         asynchronous=False,
8 changes: 4 additions & 4 deletions doc/source/packer.rst
@@ -218,7 +218,7 @@ To launch `RAPIDS <https://rapids.ai/>`_ on AWS EC2 we can select a GPU instance
     cluster = EC2Cluster(
         ami="ami-0c7c7d78f752f8f17", # Deep Learning AMI (this ID varies by region so find yours in the AWS Console)
-        docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.8",
+        docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.9",
         instance_type="p3.2xlarge",
         bootstrap=False, # Docker is already installed on the Deep Learning AMI
         filesystem_size=120,
@@ -263,7 +263,7 @@ pull the RAPIDS Docker image. That way when a scheduler or worker VM is created
         {
           "type": "shell",
           "inline": [
-            "docker pull rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.8"
+            "docker pull rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.9"
           ]
         }
       ]
@@ -315,12 +315,12 @@ We can then run our code snippet again but this time it will take less than 5 minutes
     cluster = EC2Cluster(
         ami="ami-04e5539cb82859e69", # AMI ID provided by Packer
-        docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.8",
+        docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.9",
         instance_type="p3.2xlarge",
         bootstrap=False,
         filesystem_size=120,
     )
     cluster.scale(2)
     client = Client(cluster)
-    # Your cluster is ready to use
+    # Your cluster is ready to use
2 changes: 2 additions & 0 deletions pytest.ini
@@ -0,0 +1,2 @@
+[pytest]
+asyncio_mode = auto
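
For context, a minimal sketch (a hypothetical test file, not part of this commit) of what the new setting changes: with pytest-asyncio's asyncio_mode = auto, plain async def test functions are collected and awaited on an event loop without an explicit @pytest.mark.asyncio marker on each test.

# Hypothetical example, not from this diff: a test that relies on asyncio_mode = auto.
import asyncio


async def test_event_loop_runs():
    # Under asyncio_mode = auto, pytest-asyncio collects and awaits this
    # coroutine automatically; in strict mode it would need an explicit
    # @pytest.mark.asyncio marker to run.
    await asyncio.sleep(0)
    assert True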
2 changes: 1 addition & 1 deletion setup.py
@@ -37,5 +37,5 @@
     [console_scripts]
     dask-ecs=dask_cloudprovider.cli.ecs:go
     """,
-    python_requires=">=3.8",
+    python_requires=">=3.9",
 )
