Update airflow resources and split out dbt tests and alerts #433
Conversation
airflow_variables_dev.json
Outdated
},
"state": {
    "stellar-etl": {
Added new stellar-etl and dbt resources. This makes it easy to adjust resources for one service without adjusting them for every DAG/task.
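As a rough illustration, this is how a DAG might look up a per-service block from such an Airflow Variable; the variable name ("resources"), key layout, and helper function are assumptions for this sketch, not the repo's actual code.

```python
# Hedged sketch: resolving per-service resource configs from an Airflow Variable.
# The variable name and key layout are assumptions mirroring the JSON above.
from airflow.models import Variable


def get_resources(service: str = "default") -> dict:
    """Return the resource block for a service, falling back to the default block."""
    config = Variable.get("resources", deserialize_json=True)
    return config.get(service, config["default"])


# dbt and stellar-etl tasks can then be sized independently:
dbt_resources = get_resources("dbt")
etl_resources = get_resources("stellar-etl")
```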
dags/dbt_singular_tests_dag.py
Outdated
Technically this could have been added to the existing dbt eho/state DAG. I opted to separate it out because it is purely for running anomaly tests and other tests, which felt different enough to warrant a standalone DAG.
# DAG task graph
start_tests >> singular_tests >> singular_tests_elementary_alerts
This looks good. But curious, why were we using an empty operator?
Also, a different DAG is okay, but we could create independent tasks in the same DAG as well, correct?
Explained in #433 (comment)
elementary_alerts
singular_tests_elementary_alerts
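For context on the question above, here is a minimal sketch of how this task graph could be assembled. Only the task IDs come from the diff; the operators, images, and commands are assumptions (the EmptyOperator is simply a no-op entry point for the graph).

```python
# Hedged sketch of the singular-tests DAG graph; task IDs match the diff,
# everything else (operators, image, commands) is illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

with DAG(
    dag_id="dbt_singular_tests",
    start_date=datetime(2024, 6, 25),
    schedule="*/30 * * * *",  # placeholder schedule
    catchup=False,
) as dag:
    # No-op anchor so the graph has a single, obvious entry point.
    start_tests = EmptyOperator(task_id="start_tests")

    singular_tests = KubernetesPodOperator(
        task_id="singular_tests",
        name="singular_tests",
        image="dbt-image:latest",  # placeholder image
        cmds=["dbt", "test", "--select", "test_type:singular"],
    )

    singular_tests_elementary_alerts = KubernetesPodOperator(
        task_id="singular_tests_elementary_alerts",
        name="singular_tests_elementary_alerts",
        image="dbt-image:latest",  # placeholder image
        cmds=["edr", "monitor"],  # Elementary alert run
    )

    # DAG task graph
    start_tests >> singular_tests >> singular_tests_elementary_alerts
```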
"cpu": "0.5", | ||
"ephemeral-storage": "1Gi", | ||
"memory": "5Gi" | ||
"memory": "1Gi" |
For my own knowledge, are the historical resource usage metrics tracked somewhere?
I don't believe they are. In theory you could find them in the GCP logs.
Looks good to me. Left some questions inline for my understanding.
dags/dbt_data_quality_alerts_dag.py
Outdated
@@ -18,7 +18,7 @@
    default_args=get_default_dag_args(),
    start_date=datetime(2024, 6, 25, 0, 0),
    description="This DAG runs dbt tests and Elementary alerts at a half-hourly cadence",
-   schedule="*/15,*/45 * * * *",  # Runs every 15th minute and every 45th minute
+   schedule="15,45 * * * *",  # Runs every 15th minute and every 45th minute
Is there a reason we're running at the 15th and 45th minutes instead of */30 * * * *?
Yes. The other dbt DAGs run on */30 with task run times of around 10 mins. If we were to run the quality alerts at the same time, we would have a slightly larger gap before we are notified. Running at a 15-minute offset to those runs hopefully alerts us faster.
Alternatively, I don't see the harm in running every 15 mins like it has been lol
Edit: Updated to just run every 15 mins
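For reference, the final schedule would look roughly like this in the DAG definition. The surrounding arguments are copied from the diff above, the DAG id is taken from the filename, and the import path is an assumption.

```python
# Sketch of the updated schedule after the edit above ("just run every 15 mins").
from datetime import datetime

from airflow import DAG
from stellar_etl_airflow.default import get_default_dag_args  # import path is an assumption

dag = DAG(
    "dbt_data_quality_alerts",
    default_args=get_default_dag_args(),
    start_date=datetime(2024, 6, 25, 0, 0),
    description="This DAG runs dbt tests and Elementary alerts at a 15-minute cadence",
    schedule="*/15 * * * *",  # runs at minutes 0, 15, 30, and 45 of every hour
)
```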
If we're no longer separating resources out between dbt, default, and stellar-etl, is it worth us storing configs beyond the default?
I think it's worth it. I was going to delete all the resources and just make everything run with the default settings, but I have a theory that dbt and stellar-etl have different resource requirements. So in the case that we do need to bump resources up or down for one service, we wouldn't necessarily OOM or over-allocate the other service.
👌 sounds good to me. Your theory sounds plausible
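As a sketch of that benefit: each service block maps to its own Kubernetes resource requirements, so resizing one block cannot OOM or over-allocate the other. The helper and the example values below are illustrative, not the repo's actual configuration.

```python
# Hedged sketch: separate dbt and stellar-etl blocks resolve to independent
# Kubernetes resource requirements. Values below are illustrative only.
from kubernetes.client import models as k8s


def to_k8s_resources(block: dict) -> k8s.V1ResourceRequirements:
    """Map a block like {"cpu": "0.5", "memory": "1Gi"} to pod requests/limits."""
    return k8s.V1ResourceRequirements(requests=block, limits=block)


# Bumping dbt's memory here leaves stellar-etl's sizing untouched, and vice versa.
dbt_resources = to_k8s_resources({"cpu": "1", "ephemeral-storage": "1Gi", "memory": "2Gi"})
etl_resources = to_k8s_resources({"cpu": "0.5", "ephemeral-storage": "1Gi", "memory": "1Gi"})
```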
PR Checklist
PR Structure
Thoroughness
What
Reducing Airflow resources to cut costs, and splitting the dbt tests out of the Elementary alert DAG.
Why
Known limitations
N/A