In this project, we create a streaming ETL job in AWS Glue to integrate Iceberg with a streaming use case and create an in-place updatable data lake on Amazon S3.
After the data is ingested into Amazon S3, you can query it with Amazon Athena.
This project can be deployed with AWS CDK Python. The `cdk.json` file tells the CDK Toolkit how to execute your app.
This project is set up like a standard Python project. The initialization process also creates a virtualenv within this project, stored under the `.venv` directory. To create the virtualenv it assumes that there is a `python3` (or `python` for Windows) executable in your path with access to the `venv` package. If for any reason the automatic creation of the virtualenv fails, you can create the virtualenv manually.
To manually create a virtualenv on MacOS and Linux:
```
$ python3 -m venv .venv
```
After the init process completes and the virtualenv is created, you can use the following step to activate your virtualenv.
```
$ source .venv/bin/activate
```
If you are on a Windows platform, you would activate the virtualenv like this:

```
% .venv\Scripts\activate.bat
```
Once the virtualenv is activated, you can install the required dependencies.
```
(.venv) $ pip install -r requirements.txt
```
In case of AWS Glue 3.0, before synthesizing the CloudFormation template, you first need to set up the Apache Iceberg connector for AWS Glue so that Apache Iceberg can be used with AWS Glue jobs. (For more information, see References (2).)
Then you should properly configure the CDK context file, `cdk.context.json`.
For example:
{ "kinesis_stream_name": "iceberg-demo-stream", "glue_assets_s3_bucket_name": "aws-glue-assets-123456789012-atq4q5u", "glue_job_script_file_name": "spark_iceberg_writes_with_dataframe.py", "glue_job_name": "streaming_data_from_kds_into_iceberg_table", "glue_job_input_arguments": { "--catalog": "job_catalog", "--database_name": "iceberg_demo_db", "--table_name": "iceberg_demo_table", "--primary_key": "name", "--kinesis_table_name": "iceberg_demo_kinesis_stream_table", "--starting_position_of_kinesis_iterator": "LATEST", "--iceberg_s3_path": "s3://glue-iceberg-demo-atq4q5u/iceberg_demo_db", "--lock_table_name": "iceberg_lock", "--aws_region": "us-east-1", "--window_size": "100 seconds", "--extra-jars": "s3://aws-glue-assets-123456789012-atq4q5u/extra-jars/aws-sdk-java-2.17.224.jar", "--user-jars-first": "true" }, "glue_connections_name": "iceberg-connection", "glue_kinesis_table": { "database_name": "iceberg_demo_db", "table_name": "iceberg_demo_kinesis_stream_table", "columns": [ { "name": "name", "type": "string" }, { "name": "age", "type": "int" }, { "name": "m_time", "type": "string" } ] } }
ℹ️ The `--primary_key` option should be set to the Iceberg table's primary column name.
At this point you can now synthesize the CloudFormation template for this code.
```
(.venv) $ export CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
(.venv) $ export CDK_DEFAULT_REGION=$(aws configure get region)
(.venv) $ cdk synth --all
```
To add additional dependencies, for example other CDK libraries, just add them to your `setup.py` file and rerun the `pip install -r requirements.txt` command.
- Set up the Apache Iceberg connector for AWS Glue to use Apache Iceberg with AWS Glue jobs.
- Create an S3 bucket for the Apache Iceberg table

  ```
  (.venv) $ cdk deploy IcebergS3Path
  ```
- Create a Kinesis data stream

  ```
  (.venv) $ cdk deploy KinesisStreamAsGlueStreamingJobDataSource
  ```
- Define a schema for the streaming data

  ```
  (.venv) $ cdk deploy GlueSchemaOnKinesisStream
  ```
  Running the `cdk deploy GlueSchemaOnKinesisStream` command is equivalent to creating a schema manually in the AWS Glue Data Catalog with the following steps:

  (1) On the AWS Glue console, choose Data Catalog.
  (2) Choose Databases, and click Add database.
  (3) Create a database with the name `iceberg_demo_db`.
  (4) On the Data Catalog menu, choose Tables, and click Add Table.
  (5) For the table name, enter `iceberg_demo_kinesis_stream_table`.
  (6) Select `iceberg_demo_db` as a database.
  (7) Choose Kinesis as the type of source.
  (8) Enter the name of the stream.
  (9) For the classification, choose JSON.
  (10) Define the schema according to the following table.

  | Column name | Data type | Example |
  |-------------|-----------|---------|
  | name | string | "Ricky" |
  | age | int | 23 |
  | m_time | string | "2023-06-13 07:24:26" |

  (11) Choose Finish.
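  For reference, here is a minimal CDK Python sketch (not the stack code in this repository) of how such a Kinesis-backed Glue Data Catalog table could be defined with the L1 `CfnTable` construct; the construct IDs and the `streamARN` value are illustrative assumptions.

  ```python
  from aws_cdk import Stack, aws_glue as glue
  from constructs import Construct

  class GlueSchemaOnKinesisStreamSketch(Stack):
      """Hedged sketch: a Glue database plus a catalog table backed by a Kinesis stream."""

      def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
          super().__init__(scope, construct_id, **kwargs)

          database = glue.CfnDatabase(self, "IcebergDemoDB",
              catalog_id=self.account,
              database_input=glue.CfnDatabase.DatabaseInputProperty(name="iceberg_demo_db"))

          table = glue.CfnTable(self, "KinesisStreamTable",
              catalog_id=self.account,
              database_name="iceberg_demo_db",
              table_input=glue.CfnTable.TableInputProperty(
                  name="iceberg_demo_kinesis_stream_table",
                  table_type="EXTERNAL_TABLE",
                  parameters={"classification": "json"},
                  storage_descriptor=glue.CfnTable.StorageDescriptorProperty(
                      columns=[
                          glue.CfnTable.ColumnProperty(name="name", type="string"),
                          glue.CfnTable.ColumnProperty(name="age", type="int"),
                          glue.CfnTable.ColumnProperty(name="m_time", type="string"),
                      ],
                      # The storage descriptor points at the Kinesis data stream.
                      location="iceberg-demo-stream",
                      parameters={
                          "typeOfData": "kinesis",
                          # Illustrative ARN; use the ARN of the stream created earlier.
                          "streamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/iceberg-demo-stream",
                      })))
          table.add_dependency(database)
  ```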
- Upload AWS SDK for Java 2.x jar file into S3

  ```
  (.venv) $ wget https://repo1.maven.org/maven2/software/amazon/awssdk/aws-sdk-java/2.17.224/aws-sdk-java-2.17.224.jar
  (.venv) $ aws s3 cp aws-sdk-java-2.17.224.jar s3://aws-glue-assets-123456789012-atq4q5u/extra-jars/aws-sdk-java-2.17.224.jar
  ```
  A Glue Streaming Job might fail because of the following error:

  ```
  py4j.protocol.Py4JJavaError: An error occurred while calling o135.start. : java.lang.NoSuchMethodError: software.amazon.awssdk.utils.SystemSetting.getStringValueFromEnvironmentVariable(Ljava/lang/String;)Ljava/util/Optional
  ```

  We can work around the problem by starting the Glue job with the additional parameters:

  ```
  --extra-jars s3://path/to/aws-sdk-for-java-v2.jar
  --user-jars-first true
  ```

  In order to do this, we might need to upload the AWS SDK for Java 2.x jar file into S3.
- Create Glue Streaming Job

  - (step 1) Select one of the Glue job scripts and upload it into S3

    List of Glue job scripts:

    | File name | Spark Writes |
    |-----------|--------------|
    | spark_iceberg_writes_with_dataframe.py | DataFrame append |
    | spark_iceberg_writes_with_sql_insert_overwrite.py | SQL insert overwrite |
    | spark_iceberg_writes_with_sql_merge_into.py | SQL merge into |

    ```
    (.venv) $ ls src/main/python/
     spark_iceberg_writes_with_dataframe.py
     spark_iceberg_writes_with_sql_insert_overwrite.py
     spark_iceberg_writes_with_sql_merge_into.py
    (.venv) $ aws s3 mb s3://aws-glue-assets-123456789012-atq4q5u --region us-east-1
    (.venv) $ aws s3 cp src/main/python/spark_iceberg_writes_with_dataframe.py s3://aws-glue-assets-123456789012-atq4q5u/scripts/
    ```
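    The scripts share the same structure: read micro-batches from the Kinesis table registered in the Glue Data Catalog and write them to the Iceberg table. Below is a minimal, hedged sketch of the `DataFrame append` variant (not the exact code in `src/main/python/`); it assumes the Iceberg Spark catalog (`job_catalog`) is configured through the Glue job's `--conf` parameters and the Iceberg connector, and the argument names follow `glue_job_input_arguments` in the `cdk.context.json` example above.

    ```python
    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, [
        "JOB_NAME", "catalog", "database_name", "table_name",
        "kinesis_table_name", "window_size", "iceberg_s3_path"])

    glue_context = GlueContext(SparkContext.getOrCreate())
    spark = glue_context.spark_session
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Streaming DataFrame backed by the Kinesis Data Streams table in the Glue Data Catalog.
    kinesis_df = glue_context.create_data_frame.from_catalog(
        database=args["database_name"],
        table_name=args["kinesis_table_name"],
        additional_options={"startingPosition": "LATEST"})

    iceberg_table = f"{args['catalog']}.{args['database_name']}.{args['table_name']}"

    def process_batch(data_frame, batch_id):
        if data_frame.count() > 0:
            # Sort within Spark partitions by the partition column before appending
            # (Iceberg requires this for partitioned tables; see the notes in References).
            data_frame.sortWithinPartitions("name").writeTo(iceberg_table).append()

    glue_context.forEachBatch(
        frame=kinesis_df,
        batch_function=process_batch,
        options={
            "windowSize": args["window_size"],
            # Illustrative checkpoint path under the Iceberg S3 path.
            "checkpointLocation": args["iceberg_s3_path"] + "/checkpoints/"})

    job.commit()
    ```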
  - (step 2) Provision the Glue Streaming Job

    ```
    (.venv) $ cdk deploy GlueStreamingSinkToIcebergJobRole \
                GrantLFPermissionsOnGlueJobRole \
                GlueStreamingSinkToIceberg
    ```
- Make sure the Glue job can access the Kinesis Data Streams table in the Glue Catalog database; otherwise, grant the Glue job the required permissions.

  We can check the currently granted permissions by running the following command:

  ```
  (.venv) $ aws lakeformation list-permissions | jq -r '.PrincipalResourcePermissions[] | select(.Principal.DataLakePrincipalIdentifier | endswith(":role/GlueStreamingJobRole-Iceberg"))'
  ```

  If none are found, we need to manually grant the Glue job the required permissions by running the following command:

  ```
  (.venv) $ aws lakeformation grant-permissions \
              --principal DataLakePrincipalIdentifier=arn:aws:iam::{account-id}:role/GlueStreamingJobRole-Iceberg \
              --permissions SELECT DESCRIBE ALTER INSERT DELETE \
              --resource '{ "Table": {"DatabaseName": "iceberg_demo_db", "TableWildcard": {}} }'
  ```
- Create a table with partitioned data in Amazon Athena

  Go to Athena on the AWS Management console.

  - (step 1) Create a database

    In order to create a new database called `iceberg_demo_db`, enter the following statement in the Athena query editor and click the Run button to execute the query.

    ```
    CREATE DATABASE IF NOT EXISTS iceberg_demo_db
    ```
  - (step 2) Create a table

    Copy the following query into the Athena query editor, replace the S3 bucket name in the `LOCATION` clause with your own bucket name, and execute the query to create a new table.

    ```
    CREATE TABLE iceberg_demo_db.iceberg_demo_table (
      name string,
      age int,
      m_time timestamp
    )
    PARTITIONED BY (`name`)
    LOCATION 's3://glue-iceberg-demo-atq4q5u/iceberg_demo_db/iceberg_demo_table'
    TBLPROPERTIES (
      'table_type'='iceberg'
    );
    ```
    If the query is successful, a table named `iceberg_demo_table` is created and displayed on the left panel under the Tables section.

    If you get an error, check if (a) you have updated the `LOCATION` to the correct S3 bucket name, (b) you have `iceberg_demo_db` selected under the Database dropdown, and (c) you have `AwsDataCatalog` selected as the Data source.

    ℹ️ If you fail to create the table, give Athena users access permissions on `iceberg_demo_db` through AWS Lake Formation, or grant anyone using Athena access to `iceberg_demo_db` by running the following commands:

    ```
    (.venv) $ aws lakeformation grant-permissions \
                --principal DataLakePrincipalIdentifier=arn:aws:iam::{account-id}:user/example-user-id \
                --permissions CREATE_TABLE DESCRIBE ALTER DROP \
                --resource '{ "Database": { "Name": "iceberg_demo_db" } }'
    (.venv) $ aws lakeformation grant-permissions \
                --principal DataLakePrincipalIdentifier=arn:aws:iam::{account-id}:user/example-user-id \
                --permissions SELECT DESCRIBE ALTER INSERT DELETE DROP \
                --resource '{ "Table": {"DatabaseName": "iceberg_demo_db", "TableWildcard": {}} }'
    ```
- Run the Glue job to load data from Kinesis Data Streams into S3

  ```
  (.venv) $ aws glue start-job-run --job-name streaming_data_from_kds_into_iceberg_table
  ```
- Generate streaming data

  We can synthetically generate data in JSON format using a simple Python application.

  ```
  (.venv) $ python src/utils/gen_fake_kinesis_stream_data.py \
              --region-name us-east-1 \
              --stream-name your-stream-name \
              --console \
              --max-count 10
  ```
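  The generator script ships with the repository. Purely for illustration, a minimal boto3-based sketch of such a producer might look like the following; the argument names and record fields are assumptions based on the command above and the sample records below, not the actual contents of `src/utils/gen_fake_kinesis_stream_data.py`.

  ```python
  import argparse
  import datetime
  import json
  import random

  import boto3

  # Hedged sketch of a fake-data producer; the real generator in this repo may differ.
  NAMES = ["Arica", "Fernando", "Gonzalo", "Micheal", "Takisha"]

  def gen_record():
      return {
          "name": random.choice(NAMES),
          "age": random.randint(10, 70),
          "m_time": datetime.datetime(2023, random.randint(1, 12), random.randint(1, 28),
                                      random.randint(0, 23), random.randint(0, 59),
                                      random.randint(0, 59)).strftime("%Y-%m-%d %H:%M:%S"),
      }

  if __name__ == "__main__":
      parser = argparse.ArgumentParser()
      parser.add_argument("--region-name", default="us-east-1")
      parser.add_argument("--stream-name", required=True)
      parser.add_argument("--console", action="store_true", help="print records to stdout")
      parser.add_argument("--max-count", type=int, default=10)
      args = parser.parse_args()

      kinesis = boto3.client("kinesis", region_name=args.region_name)
      for _ in range(args.max_count):
          record = gen_record()
          if args.console:
              print(json.dumps(record))
          # Partition by "name" so updates to the same key land on the same shard.
          kinesis.put_record(StreamName=args.stream_name,
                             Data=json.dumps(record).encode("utf-8"),
                             PartitionKey=record["name"])
  ```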
  Synthetic data example, ordered by `name` and `m_time`:

  ```
  {"name": "Arica", "age": 48, "m_time": "2023-04-11 19:13:21"}
  {"name": "Arica", "age": 32, "m_time": "2023-10-20 17:24:17"}
  {"name": "Arica", "age": 45, "m_time": "2023-12-26 01:20:49"}
  {"name": "Fernando", "age": 16, "m_time": "2023-05-22 00:13:55"}
  {"name": "Gonzalo", "age": 37, "m_time": "2023-01-11 06:18:26"}
  {"name": "Gonzalo", "age": 60, "m_time": "2023-01-25 16:54:26"}
  {"name": "Micheal", "age": 45, "m_time": "2023-04-07 06:18:17"}
  {"name": "Micheal", "age": 44, "m_time": "2023-12-14 09:02:57"}
  {"name": "Takisha", "age": 48, "m_time": "2023-12-20 16:44:13"}
  {"name": "Takisha", "age": 24, "m_time": "2023-12-30 12:38:23"}
  ```
  Spark Writes using `DataFrame append` inserts all records into the Iceberg table.

  ```
  {"name": "Arica", "age": 48, "m_time": "2023-04-11 19:13:21"}
  {"name": "Arica", "age": 32, "m_time": "2023-10-20 17:24:17"}
  {"name": "Arica", "age": 45, "m_time": "2023-12-26 01:20:49"}
  {"name": "Fernando", "age": 16, "m_time": "2023-05-22 00:13:55"}
  {"name": "Gonzalo", "age": 37, "m_time": "2023-01-11 06:18:26"}
  {"name": "Gonzalo", "age": 60, "m_time": "2023-01-25 16:54:26"}
  {"name": "Micheal", "age": 45, "m_time": "2023-04-07 06:18:17"}
  {"name": "Micheal", "age": 44, "m_time": "2023-12-14 09:02:57"}
  {"name": "Takisha", "age": 48, "m_time": "2023-12-20 16:44:13"}
  {"name": "Takisha", "age": 24, "m_time": "2023-12-30 12:38:23"}
  ```
  Spark Writes using `SQL insert overwrite` or `SQL merge into` inserts only the last updated records into the Iceberg table.

  ```
  {"name": "Arica", "age": 45, "m_time": "2023-12-26 01:20:49"}
  {"name": "Fernando", "age": 16, "m_time": "2023-05-22 00:13:55"}
  {"name": "Gonzalo", "age": 60, "m_time": "2023-01-25 16:54:26"}
  {"name": "Micheal", "age": 44, "m_time": "2023-12-14 09:02:57"}
  {"name": "Takisha", "age": 24, "m_time": "2023-12-30 12:38:23"}
  ```
- Check streaming data in S3

  After 3 to 5 minutes, you can see that the streaming data has been delivered from Kinesis Data Streams to S3.
- Run a test query

  Enter the following SQL statement and execute the query.

  ```
  SELECT COUNT(*) FROM iceberg_demo_db.iceberg_demo_table;
  ```
- Stop the Glue job by replacing the job name in the command below.

  ```
  (.venv) $ JOB_RUN_IDS=$(aws glue get-job-runs \
              --job-name streaming_data_from_kds_into_iceberg_table | jq -r '.JobRuns[] | select(.JobRunState=="RUNNING") | .Id' \
              | xargs)
  (.venv) $ aws glue batch-stop-job-run \
              --job-name streaming_data_from_kds_into_iceberg_table \
              --job-run-ids $JOB_RUN_IDS
  ```
- Delete the CloudFormation stack by running the command below.

  ```
  (.venv) $ cdk destroy --all
  ```
- `cdk ls`      list all stacks in the app
- `cdk synth`   emits the synthesized CloudFormation template
- `cdk deploy`  deploy this stack to your default AWS account/region
- `cdk diff`    compare deployed stack with current state
- `cdk docs`    open CDK documentation
- (1) AWS Glue versions: The AWS Glue version determines the versions of Apache Spark and Python that AWS Glue supports.
- (2) Use the AWS Glue connector to read and write Apache Iceberg tables with ACID transactions and perform time travel (2022-06-21)
- (3) Streaming Data into Apache Iceberg Tables Using AWS Kinesis and AWS Glue (2022-09-26)
- (4) Amazon Athena Using Iceberg tables
- (5) Streaming ETL jobs in AWS Glue
- (6) AWS Glue job parameters
- (7) Crafting serverless streaming ETL jobs with AWS Glue
- (8) Apache Iceberg - Spark Writes with SQL (v0.14.0)
- (9) Apache Iceberg - Spark Structured Streaming (v0.14.0)
- (10) Apache Iceberg - Writing against partitioned table (v0.14.0)
  - Iceberg supports `append` and `complete` output modes:
    - `append`: appends the rows of every micro-batch to the table
    - `complete`: replaces the table contents every micro-batch
  - Iceberg requires the data to be sorted according to the partition spec per task (i.e., per Spark partition) prior to writing against a partitioned table (see the sketch below).
    Otherwise, you might encounter the following error:

    ```
    pyspark.sql.utils.AnalysisException: Complete output mode not supported when there are no streaming aggregations on streaming DataFrame/Datasets;
    ```
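    A minimal sketch of one way to satisfy the sorting requirement inside a `foreachBatch` handler; the table, column, and checkpoint names follow the demo configuration, and `streaming_df` stands for the Kinesis-backed streaming DataFrame created earlier.

    ```python
    # Hedged sketch: sort each micro-batch within its Spark partitions by the
    # partition column ("name") before appending to the partitioned Iceberg table.
    def append_sorted(batch_df, batch_id):
        (batch_df
            .sortWithinPartitions("name")
            .writeTo("job_catalog.iceberg_demo_db.iceberg_demo_table")
            .append())

    query = (streaming_df.writeStream
        .foreachBatch(append_sorted)
        .option("checkpointLocation", "s3://glue-iceberg-demo-atq4q5u/checkpoints/")
        .start())
    ```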
- (11) Apache Iceberg - Maintenance for streaming tables (v0.14.0)
- (12) awsglue python package: The awsglue Python package contains the Python portion of the AWS Glue library. This library extends PySpark to support serverless ETL on AWS.
- (13) AWS Glue Notebook Samples - sample iPython notebook files which show you how to use open data lake formats: Apache Hudi, Delta Lake, and Apache Iceberg on AWS Glue Interactive Sessions and AWS Glue Studio Notebook.
- Granting database or table permissions error using AWS CDK

  - Error message:

    ```
    AWS::LakeFormation::PrincipalPermissions | CfnPrincipalPermissions Resource handler returned message: "Resource does not exist or requester is not authorized to access requested permissions. (Service: LakeFormation, Status Code: 400, Request ID: f4d5e58b-29b6-4889-9666-7e38420c9035)" (RequestToken: 4a4bb1d6-b051-032f-dd12-5951d7b4d2a9, HandlerErrorCode: AccessDenied)
    ```

  - Solution:

    The role assumed by cdk is not a data lake administrator (e.g., `cdk-hnb659fds-deploy-role-12345678912-us-east-1`), so deploying `PrincipalPermissions` fails with an error such as `Resource does not exist or requester is not authorized to access requested permissions.`

    In order to solve the error, promote the cdk execution role to a data lake administrator, as in the sketch below.
    For example, see https://github.com/aws-samples/data-lake-as-code/blob/mainline/lib/stacks/datalake-stack.ts#L68
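    Purely as an illustration, a hedged CDK Python sketch of registering the CDK CloudFormation execution role as a Lake Formation data lake administrator might look like the following; the role name pattern is the default CDK bootstrap convention and is an assumption to adjust for your qualifier, account, and region.

    ```python
    from aws_cdk import Stack, aws_lakeformation as lakeformation
    from constructs import Construct

    class DataLakeAdminSketchStack(Stack):
        """Hedged sketch: make the CDK CloudFormation execution role a Lake Formation admin."""

        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Assumed default CDK bootstrap role name pattern; not taken from this repo.
            cfn_exec_role_arn = (
                f"arn:aws:iam::{self.account}:role/"
                f"cdk-hnb659fds-cfn-exec-role-{self.account}-{self.region}")

            lakeformation.CfnDataLakeSettings(self, "DataLakeSettings",
                admins=[
                    lakeformation.CfnDataLakeSettings.DataLakePrincipalProperty(
                        data_lake_principal_identifier=cfn_exec_role_arn)
                ])
    ```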
  - Reference:

    https://github.com/aws-samples/data-lake-as-code - Data Lake as Code
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.