Feature:
- Improved the performance of aql.load_file by supporting database-specific (native) load methods. This is now the default behaviour. Previously, the Astro SDK Python would always use Pandas to load files to SQL databases, which passed the data through the worker node and slowed the transfer. #557, #481
  Introduced new arguments to aql.load_file (a usage sketch follows this list):
  - use_native_support: use native data transfer if available on the destination (defaults to use_native_support=True)
  - native_support_kwargs: keyword arguments passed to the method involved in the native support flow
  - enable_native_fallback: can be used to fall back to the default transfer (defaults to enable_native_fallback=True)
  Now, there are three modes:
  - Native: the default; uses a BigQuery Load Job in the case of BigQuery and COPY INTO with an external stage in the case of Snowflake.
  - Pandas: how datasets were previously loaded. To enable this mode, use the argument use_native_support=False in aql.load_file.
  - Hybrid: attempts to use the native strategy to load a file to the database and, if the native strategy fails, falls back to Pandas with relevant log warnings.
- Allow users to specify the table schema (column types) into which a file is being loaded by using table.columns. If this table attribute is not set, the Astro SDK still tries to infer the schema by using Pandas (the previous behaviour). #532
- Implement a fallback mechanism for when native support fails: revert to the default option, with a log warning describing the native-support problem. #557
- Add an example DAG for dynamic task mapping with the Astro SDK. #377, airflow-2.3.0
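A minimal sketch of the new load_file arguments above, combined with an explicit schema via table.columns. The bucket, connection id, column definitions and the native_support_kwargs entry are illustrative only, and import paths may differ slightly between releases:

    from datetime import datetime

    import sqlalchemy
    from airflow import DAG

    from astro import sql as aql
    from astro.files import File
    from astro.sql.table import Table

    with DAG("load_file_native_example", start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
        aql.load_file(
            input_file=File("gs://my-bucket/orders.csv"),  # hypothetical source file
            output_table=Table(
                name="orders",
                conn_id="bigquery_default",  # hypothetical connection
                # Optional explicit schema; if omitted, the SDK infers column types via Pandas.
                columns=[
                    sqlalchemy.Column("order_id", sqlalchemy.Integer, primary_key=True),
                    sqlalchemy.Column("amount", sqlalchemy.Numeric),
                ],
            ),
            use_native_support=True,  # default: native mode (a BigQuery Load Job here)
            native_support_kwargs={"skip_leading_rows": 1},  # illustrative option forwarded to the native load
            enable_native_fallback=True,  # default: fall back to the Pandas path if the native load fails
        )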
Community:
- Allow running tests on PRs from forks + label #179
Breaking Change:
- The aql.dataframe argument identifiers_as_lower (which was a boolean, with the default set to False) was replaced by the argument columns_names_capitalization (a string with possible values ["upper", "lower", "original"]; the default is "lower"). #564
- aql.load_file previously changed the capitalization of all column titles to uppercase by default; now it makes them lowercase by default. The old behaviour can be achieved by using the argument columns_names_capitalization="upper" (a short sketch follows this list). #564
- aql.load_file attempts to load files to BigQuery and Snowflake by using native methods, which may have prerequisites to work. To disable this mode, use the argument use_native_support=False in aql.load_file. #557, #481
- aql.dataframe will raise an exception if the default Airflow XCom backend is being used. To solve this, either use an external XCom backend, such as S3 or GCS, or set the configuration AIRFLOW__ASTRO_SDK__DATAFRAME_ALLOW_UNSAFE_STORAGE=True. #444
- Change the declaration of the default Astro SDK temporary schema from AIRFLOW__ASTRO__SQL_SCHEMA to AIRFLOW__ASTRO_SDK__SQL_SCHEMA. #503
- Renamed aql.truncate to aql.drop_table. #554
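A short sketch of adapting to the renames above; the function body and the decorator usage are illustrative:

    from astro import sql as aql

    # Previously: @aql.dataframe(identifiers_as_lower=False)
    # Now: pick one of "upper", "lower" (the default) or "original".
    @aql.dataframe(columns_names_capitalization="original")
    def describe(df):
        return df.describe()

    # To keep the old uppercase column titles when loading files:
    #   aql.load_file(..., columns_names_capitalization="upper")

    # Configuration changes (set wherever you configure Airflow):
    #   AIRFLOW__ASTRO__SQL_SCHEMA  ->  AIRFLOW__ASTRO_SDK__SQL_SCHEMA
    #   AIRFLOW__ASTRO_SDK__DATAFRAME_ALLOW_UNSAFE_STORAGE=True  # only if you keep the default XCom backend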
Bug fix:
- Add missing Airflow task terminal states to CleanupOperator #525
- Allow chaining aql.drop_table (previously truncate) tasks using the Task Flow API syntax. #554, #515
Enhancement:
- Improved the performance of aql.load_file for files from AWS S3 to Google BigQuery by up to 94%. #429, #568
- Improved the performance of aql.load_file for files from Google Cloud Storage to Google BigQuery by up to 93%. #429, #562
- Improved the performance of aql.load_file for files from AWS S3/Google Cloud Storage to Snowflake by up to 76%. #430, #544
- Improved the performance of aql.load_file for files from GCS to Postgres in K8s by up to 93%. #428, #531
- Fix Sphinx docs sidebar #472
- Get configurations via the Airflow Configuration manager. #503
- Add CI job to check for dead links #526
Feature:
Internals:
Enhancement:
- Fail the LoadFileOperator when input_file does not exist #467
- Create scripts to launch benchmark testing on Google Cloud #432
- Bump Google Provider for the google extra #294
Feature:
Breaking Change:
- The aql.merge interface changed: the argument merge_table changed to target_table, target_columns and merge_column were combined into the columns argument, merge_keys changed to target_conflict_columns, and conflict_strategy changed to if_conflicts (see the sketch after this list). More details can be found in #422, #466
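A minimal sketch of the renamed aql.merge arguments; the table names, connection id and column names are illustrative, and the call is meant to run inside a DAG:

    from astro import sql as aql
    from astro.sql.table import Table

    merge_orders = aql.merge(
        target_table=Table(name="orders", conn_id="snowflake_default"),  # was merge_table
        source_table=Table(name="orders_staging", conn_id="snowflake_default"),
        columns=["order_id", "amount"],  # replaces target_columns + merge_column
        target_conflict_columns=["order_id"],  # was merge_keys
        if_conflicts="update",  # was conflict_strategy
    )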
Enhancement:
- Document (new) load_file benchmark datasets #449
- Made improvements to the benchmark scripts and configurations #458, #434, #461, #460, #437, #462
- Performance evaluation for loading datasets with Astro Python SDK 0.9.2 into BigQuery #437
Bug fix:
- Change export_file to return File object #454.
Bug fix:
- Table unable to have Airflow templated names #413
Enhancements:
- Introduction of the user-facing Table, Metadata and File classes
Breaking changes:
- The operator save_file became export_file
- The tasks load_file, export_file (previously save_file) and run_raw_sql should be used with Table, Metadata and File instances
- The decorators dataframe, run_raw_sql and transform should be used with Table and Metadata instances
- The operators aggregate_check, boolean_check, render and stats_check were temporarily removed
- The class TempTable was removed. It is possible to declare temporary tables by using Table(temp=True). All temporary table names are prefixed with _tmp_. If the user names a Table, it is no longer temporary, unless the user enforces it to be. (See the sketch after this list.)
- The only mandatory property of a Table instance is conn_id. If no metadata is given, the library will try to extract the schema and other information from the connection object. If that is missing, it will default to the AIRFLOW__ASTRO__SQL_SCHEMA environment variable.
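A brief sketch of the user-facing classes and the temporary-table behaviour described above; the names and connection ids are illustrative, and import paths may differ between releases:

    from astro.files import File
    from astro.sql.table import Metadata, Table

    # Temporary table: no name given, so the SDK generates one prefixed with _tmp_.
    scratch = Table(conn_id="postgres_default", temp=True)

    # Named, persistent table with explicit metadata; without Metadata, the schema is taken
    # from the connection or from the AIRFLOW__ASTRO__SQL_SCHEMA environment variable.
    reporting = Table(
        name="daily_report",
        conn_id="postgres_default",
        metadata=Metadata(schema="analytics"),
    )

    # File instances replace raw path strings in load_file and export_file.
    source = File("gs://my-bucket/report.parquet")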
Internals:
- Major refactor introducing the Database, File, FileType and FileLocation concepts.
Enhancements:
- Add support for Airflow 2.3 #367.
Breaking change:
- We have renamed the artifacts we release to astro-sdk-python from astro-projects. 0.8.4 is the last version for which we have published both astro-sdk-python and astro-projects.
Bug fix:
- Do not attempt to create a schema if it already exists #329.
Bug fix:
- Support dataframes from different databases in dataframe operator #325
Enhancements:
- Add integration testcase for SqlDecoratedOperator to test execution of Raw SQL #316
Bug fix:
- Snowflake transform without input_table #319
Feature:
- load_file support for nested NDJSON files #257
Breaking change:
- aql.dataframe switches the capitalization to lowercase by default. This behaviour can be changed by using identifiers_as_lower #154
Documentation:
- Fix commands in README.md #242
- Add scripts to auto-generate Sphinx documentation
Enhancements:
- Improve type hints coverage
- Improve Amazon S3 example DAG, so it does not rely on pre-populated data #293
- Add example DAG to load/export from BigQuery #265
- Fix usages of mutable default args #267
- Enable DeepSource validation #299
- Improve code quality and coverage
Bug fixes:
- Support gcpbigquery connections #294
- Support params argument in aql.render to override SQL Jinja template values #254
- Fix aql.dataframe when table arg is absent #259
Others:
- Refactor integration tests, so they can run across all supported databases #229, #234, #235, #236, #206, #217
Feature:
- load_file to a Pandas dataframe, without SQL database dependencies #77
Documentation:
- Simplify README #101
- Add Release Guidelines #160
- Add Code of Conduct #101
- Add Contribution Guidelines #101
Enhancements:
- Add SQLite example #149
- Allow customization of task_id when using dataframe #126
- Use standard AWS environment variables, as opposed to AIRFLOW__ASTRO__CONN_AWS_DEFAULT #175
Bug fixes:
- Fix merge XComArg support #183
- Fixes to load_file:
- Fixes to render:
- Fix transform, so it works with SQLite #159
Others:
Features:
- Support SQLite #86
- Support users who can't create schemas #121
- Ability to install optional dependencies (amazon, google, snowflake) #82
Enhancements:
- Change render so it creates a DAG as opposed to a TaskGroup #143
- Allow users to specify a custom version of snowflake_sqlalchemy #127
Bug fixes:
Others: