Enable running all integration tests against Kind in the CI #95

Draft — tatiana wants to merge 9 commits into `main` from `issue-81-enable-integration-tests`
Conversation
tatiana commented on Nov 27, 2024:
```diff
@@ -1,7 +1,7 @@
 name: test
 on:
   push:
-    branches: [ "main" ]
+    branches: [ "main", "issue-81-enable-integration-tests" ]
```
Suggested change:

```diff
-    branches: [ "main", "issue-81-enable-integration-tests" ]
+    branches: [ "main" ]
```
This was referenced on Nov 28, 2024.

tatiana added a commit that referenced this pull request on Nov 29, 2024:
The Ray provider 0.2.1 allowed users to define a hard-coded configuration to materialize the Kubernetes cluster. This PR enables users to define a function that receives the Airflow context and generates the configuration dynamically from context properties. This request came from an Astronomer customer.

There is an example DAG file illustrating how to use this feature. It has a parent DAG that triggers two child DAGs, which leverage the just-introduced `@ray.task` callable configuration. The screenshots below show their success when following the [local development instructions](https://github.com/astronomer/astro-provider-ray/blob/main/docs/getting_started/local_development_setup.rst) with the Astro CLI.

Parent DAG:
<img width="1624" alt="Screenshot 2024-11-29 at 12 15 13" src="https://github.com/user-attachments/assets/586b4575-ee62-4344-bbd7-a1a6423360ce">

Child 1 DAG:
<img width="1624" alt="Screenshot 2024-11-29 at 12 15 56" src="https://github.com/user-attachments/assets/23d89288-c68a-498e-848a-743fb2684c4f">

Example logs illustrating that the RayCluster with dynamic configuration was created and used in Kubernetes, with its own IP address:

```
(...)
[2024-11-29T12:14:52.276+0000] {standard_task_runner.py:104} INFO - Running: ['airflow', 'tasks', 'run', 'ray_dynamic_config_child_1', 'process_data_with_ray', 'manual__2024-11-29T12:14:50.273712+00:00', '--job-id', '773', '--raw', '--subdir', 'DAGS_FOLDER/ray_dynamic_config.py', '--cfg-path', '/tmp/tmpkggwlv23']
[2024-11-29T12:14:52.278+0000] {logging_mixin.py:190} WARNING - /usr/local/lib/python3.12/site-packages/airflow/task/task_runner/standard_task_runner.py:70 DeprecationWarning: This process (pid=238) is multi-threaded, use of fork() may lead to deadlocks in the child.
(...)
[2024-11-29T12:14:52.745+0000] {decorators.py:94} INFO - Using the following config {'conn_id': 'ray_conn', 'runtime_env': {'working_dir': '/usr/local/airflow/dags/ray_scripts', 'pip': ['numpy']}, 'num_cpus': 1, 'num_gpus': 0, 'memory': 0, 'poll_interval': 5, 'ray_cluster_yaml': '/usr/local/airflow/dags/scripts/first-254.yaml', 'xcom_task_key': 'dashboard'}
(...)
[2024-11-29T12:14:55.430+0000] {hooks.py:474} INFO - ::group::Create Ray Cluster
[2024-11-29T12:14:55.430+0000] {hooks.py:475} INFO - Loading yaml content for Ray cluster CRD...
[2024-11-29T12:14:55.451+0000] {hooks.py:410} INFO - Creating new Ray cluster: first-254
[2024-11-29T12:14:55.456+0000] {hooks.py:494} INFO - ::endgroup::
(...)
[2024-11-29T12:14:55.663+0000] {hooks.py:498} INFO - ::group::Setup Load Balancer service
[2024-11-29T12:14:55.663+0000] {hooks.py:334} INFO - Attempt 1: Checking LoadBalancer status...
[2024-11-29T12:14:55.669+0000] {hooks.py:278} ERROR - Error getting service first-254-head-svc: (404) Reason: Not Found HTTP response headers: HTTPHeaderDict({'Audit-Id': '81b07ac4-db3b-48a6-b336-f52ae93bee55', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '955e8bb0-08b1-4d45-a768-e49387a9767c', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'd5240328-288d-4366-b094-d8fd793c7431', 'Date': 'Fri, 29 Nov 2024 12:14:55 GMT', 'Content-Length': '212'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"services \"first-254-head-svc\" not found","reason":"NotFound","details":{"name":"first-254-head-svc","kind":"services"},"code":404}
[2024-11-29T12:14:55.669+0000] {hooks.py:355} INFO - LoadBalancer service is not available yet...
[2024-11-29T12:15:35.670+0000] {hooks.py:334} INFO - Attempt 2: Checking LoadBalancer status...
[2024-11-29T12:15:35.688+0000] {hooks.py:348} INFO - LoadBalancer is ready.
[2024-11-29T12:15:35.688+0000] {hooks.py:441} INFO - {'ip': '172.18.255.1', 'hostname': None, 'ports': [{'name': 'client', 'port': 10001}, {'name': 'dashboard', 'port': 8265}, {'name': 'gcs', 'port': 6379}, {'name': 'metrics', 'port': 8080}, {'name': 'serve', 'port': 8000}], 'working_address': '172.18.255.1'}
(...)
[2024-11-29T12:15:38.345+0000] {triggers.py:124} INFO - ::group:: Trigger 1/2: Checking the job status
[2024-11-29T12:15:38.345+0000] {triggers.py:125} INFO - Polling for job raysubmit_paxAkyLiKxEHPmwG every 5 seconds...
(...)
[2024-11-29T12:15:38.354+0000] {hooks.py:156} INFO - Dashboard URL is: http://172.18.255.1:8265
[2024-11-29T12:15:38.361+0000] {hooks.py:208} INFO - Job raysubmit_paxAkyLiKxEHPmwG status: PENDING
[2024-11-29T12:15:38.361+0000] {triggers.py:100} INFO - Status of job raysubmit_paxAkyLiKxEHPmwG is: PENDING
[2024-11-29T12:15:38.361+0000] {triggers.py:108} INFO - ::group::raysubmit_paxAkyLiKxEHPmwG logs
[2024-11-29T12:15:43.416+0000] {hooks.py:208} INFO - Job raysubmit_paxAkyLiKxEHPmwG status: RUNNING
[2024-11-29T12:15:43.416+0000] {triggers.py:100} INFO - Status of job raysubmit_paxAkyLiKxEHPmwG is: RUNNING
[2024-11-29T12:15:43.417+0000] {triggers.py:112} INFO - 2024-11-29 04:15:40,813 INFO worker.py:1429 -- Using address 10.244.0.140:6379 set in the environment variable RAY_ADDRESS
[2024-11-29T12:15:43.417+0000] {triggers.py:112} INFO - 2024-11-29 04:15:40,814 INFO worker.py:1564 -- Connecting to existing Ray cluster at address: 10.244.0.140:6379...
[2024-11-29T12:15:43.417+0000] {triggers.py:112} INFO - 2024-11-29 04:15:40,820 INFO worker.py:1740 -- Connected to Ray cluster. View the dashboard at 10.244.0.140:8265
[2024-11-29T12:15:48.430+0000] {hooks.py:208} INFO - Job raysubmit_paxAkyLiKxEHPmwG status: SUCCEEDED
[2024-11-29T12:15:48.430+0000] {triggers.py:112} INFO - Mean of this population is 12.0
[2024-11-29T12:15:48.430+0000] {triggers.py:112} INFO - (autoscaler +5s) Tip: use `ray status` to view detailed cluster status. To disable these messages, set RAY_SCHEDULER_EVENTS=0.
[2024-11-29T12:15:48.430+0000] {triggers.py:112} INFO - (autoscaler +5s) Adding 1 node(s) of type small-group.
[2024-11-29T12:15:49.448+0000] {triggers.py:113} INFO - ::endgroup::
[2024-11-29T12:15:49.448+0000] {triggers.py:144} INFO - ::endgroup::
[2024-11-29T12:15:49.448+0000] {triggers.py:145} INFO - ::group:: Trigger 2/2: Job reached a terminal state
[2024-11-29T12:15:49.448+0000] {triggers.py:146} INFO - Status of completed job raysubmit_paxAkyLiKxEHPmwG is: SUCCEEDED
(...)
```

Child 2 DAG:
<img width="1624" alt="Screenshot 2024-11-29 at 12 17 20" src="https://github.com/user-attachments/assets/5f0320a2-3bce-49a9-8580-a584d1f894dc">

Kubernetes RayClusters spun up:
<img width="758" alt="Screenshot 2024-11-29 at 12 15 37" src="https://github.com/user-attachments/assets/aabcce4a-ce4a-47db-b9cf-e87cf68f2316">

**Limitations**

The example DAGs are not currently being executed in the CI, but there is a dedicated ticket for this work: #95

**References**

This PR was inspired by: #67
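To illustrate the "callable configuration" idea described in the commit message, here is a hypothetical sketch of such a function. The dictionary keys mirror the config printed in the log above; the function name, the `params` lookup, and the wiring into `@ray.task` are illustrative assumptions, not the provider's actual API.

```python
def generate_ray_config(context: dict) -> dict:
    """Build a per-run Ray cluster configuration from the Airflow context.

    Hypothetical sketch: a unique cluster name per run would let parallel
    DAG runs materialize their own RayCluster instead of sharing one
    hard-coded configuration.
    """
    cluster_name = context["params"].get("cluster_name", "first-254")
    return {
        "conn_id": "ray_conn",
        "runtime_env": {
            "working_dir": "/usr/local/airflow/dags/ray_scripts",
            "pip": ["numpy"],
        },
        "num_cpus": 1,
        "num_gpus": 0,
        "memory": 0,
        "poll_interval": 5,
        # The YAML path now depends on the run's parameters:
        "ray_cluster_yaml": f"/usr/local/airflow/dags/scripts/{cluster_name}.yaml",
        "xcom_task_key": "dashboard",
    }


# Simulated Airflow context, as a decorated task might receive it:
config = generate_ray_config({"params": {"cluster_name": "first-254"}})
print(config["ray_cluster_yaml"])  # /usr/local/airflow/dags/scripts/first-254.yaml
```

Passing a callable rather than a literal dict is what lets the configuration vary with context properties such as run ID or DAG params.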
The provider test suite was not being run in its entirety in the CI. Although there are four DAGs in the Ray provider's example DAGs, the CI was only running one of them (see `astro-provider-ray/tests/test_dag_example.py`, lines 63 to 64 in b6fe51b).
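One way to cover all example DAGs instead of the single hard-coded one is to discover the DAG files and parametrize the test over them. A minimal sketch of the discovery step (the directory layout and file names are illustrative assumptions, not the repository's actual structure):

```python
import tempfile
from pathlib import Path


def discover_dag_files(dags_dir: Path) -> list[Path]:
    """Return every Python DAG file under the example-DAGs folder, sorted.

    Files starting with an underscore (e.g. __init__.py) are skipped.
    """
    return sorted(p for p in dags_dir.glob("*.py") if not p.name.startswith("_"))


# Demo with a throwaway directory standing in for the example_dags folder;
# in the real suite this list would feed a pytest.mark.parametrize so each
# DAG gets its own test case instead of one DAG being hard-coded.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for name in ["ray_taskflow_example.py", "ray_dynamic_config.py", "__init__.py"]:
        (root / name).write_text("# dag stub\n")
    names = [p.name for p in discover_dag_files(root)]
    print(names)  # ['ray_dynamic_config.py', 'ray_taskflow_example.py']
```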
Additionally, the one DAG that was already being run in the CI takes approximately 18m 45s, since it relies on spinning up a Kubernetes cluster with GKE on Google Cloud.
This PR enables us to run all the example DAGs against a Kind cluster spun up in the CI, giving us more confidence that our changes are backwards compatible.
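For reference, a Kind-based setup can be expressed directly in the workflow. The following is a hypothetical sketch of such a CI job; the job name, action versions, KubeRay install step, and test command are assumptions for illustration, not the PR's actual workflow:

```yaml
jobs:
  run-integration-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Spin up a throwaway Kubernetes cluster inside the runner,
      # replacing the remote GKE cluster the suite relied on before.
      - name: Create Kind cluster
        uses: helm/kind-action@v1
      # Install the KubeRay operator so RayCluster resources can be created.
      - name: Install KubeRay operator
        run: |
          helm repo add kuberay https://ray-project.github.io/kuberay-helm/
          helm install kuberay-operator kuberay/kuberay-operator
      # Run the example-DAG tests against the local cluster.
      - name: Run integration tests
        run: pytest tests/test_dag_example.py
```

Because the cluster lives and dies with the runner, there is nothing to clean up in Google Cloud and no cloud credentials are needed in CI.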