Write the 2nd Lambda function that matches the data through Exp_ID #35
Problems and current solutions
Setting up the second Lambda function and attaching the metadata
Step 1: Create an S3 Bucket
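If you prefer doing this step from code instead of the console, a minimal boto3 sketch (bucket name and region taken from the ARNs further below; adjust to your own):

```python
import boto3

s3 = boto3.client("s3", region_name="eu-central-1")

# Buckets outside us-east-1 need an explicit LocationConstraint
s3.create_bucket(
    Bucket="s3-metadata-obelix",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)
```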
Step 2: Create a DynamoDB Table
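A rough sketch of the table creation in boto3, assuming Exp_ID (the attribute the function matches on) is the partition key; the table name is a placeholder:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-central-1")

# On-demand billing keeps the setup simple; switch to provisioned if needed
dynamodb.create_table(
    TableName="<db-table-name>",
    AttributeDefinitions=[{"AttributeName": "Exp_ID", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "Exp_ID", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```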
Step 3: Create an IAM Role for Lambda
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<s3-bucket-name>/*",
        "arn:aws:s3:::<s3-bucket-name>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:*"
      ],
      "Resource": "arn:aws:dynamodb:eu-central-1:058264498638:table/<db-table-name>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:ListTables",
        "dynamodb:Scan",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:eu-central-1:058264498638:table/<db-table-name>/*"
    }
  ]
}
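The role can also be created from code rather than the console; a sketch, assuming the role name used in the bucket policy below, a policy name of my choosing, and that the JSON above is saved locally as lambda_policy.json:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="lambda_import_csv",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the permissions policy above as an inline policy
with open("lambda_policy.json") as f:
    iam.put_role_policy(
        RoleName="lambda_import_csv",
        PolicyName="lambda_import_csv_policy",
        PolicyDocument=f.read(),
    )
```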
Step 4: Create the Lambda Function
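For orientation, a minimal skeleton of the handler's S3/CSV reading part (this is a sketch, not the attached implementation; the Exp_ID matching and DynamoDB write are sketched at the end of this comment):

```python
import csv
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # The S3 trigger passes the bucket and key of the uploaded metadata CSV
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Read the CSV straight from S3 into a list of dicts, one per metadata row
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(body.splitlines()))

    print(f"Read {len(rows)} metadata rows from s3://{bucket}/{key}")
    return {"statusCode": 200, "rowsProcessed": len(rows)}
```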
Step 5: Set up the S3 Trigger
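The trigger can be wired up with boto3 as well; a sketch, with the function name and ARN as placeholders:

```python
import boto3

lambda_client = boto3.client("lambda", region_name="eu-central-1")
s3 = boto3.client("s3", region_name="eu-central-1")

# Allow the bucket to invoke the function
lambda_client.add_permission(
    FunctionName="<lambda-function-name>",
    StatementId="s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::s3-metadata-obelix",
)

# Fire the function whenever a .csv object is created in the bucket
s3.put_bucket_notification_configuration(
    Bucket="s3-metadata-obelix",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "<lambda-function-arn>",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": ".csv"}]}},
        }]
    },
)
```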
Step 6: Configure the S3 Bucket Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::058264498638:role/lambda_import_csv"
      },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::s3-metadata-obelix",
        "arn:aws:s3:::s3-metadata-obelix/*"
      ]
    }
  ]
}
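Applying the policy from code is a one-liner once the JSON above is saved locally (the bucket_policy.json filename is an assumption):

```python
import boto3

s3 = boto3.client("s3")

# Attach the bucket policy shown above to the metadata bucket
with open("bucket_policy.json") as f:
    s3.put_bucket_policy(Bucket="s3-metadata-obelix", Policy=f.read())
```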
Step 7: Test the Function
{
  "Records": [
    {
      "s3": {
        "bucket": {
          "name": "s3-metadata-obelix"
        },
        "object": {
          "key": "metadata_all.csv"
        }
      }
    }
  ]
}

Attachment: S3-to-dynamodb-sns-metadata-f37bb411-26a1-4833-909c-ee93bf358218.zip
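If you prefer testing from your own machine instead of the Lambda console, a sketch that invokes the function with the same test event (the function name is a placeholder):

```python
import json
import boto3

lambda_client = boto3.client("lambda", region_name="eu-central-1")

# Test event mirroring what the S3 trigger would send
event = {
    "Records": [
        {"s3": {"bucket": {"name": "s3-metadata-obelix"},
                "object": {"key": "metadata_all.csv"}}}
    ]
}

response = lambda_client.invoke(
    FunctionName="<lambda-function-name>",
    Payload=json.dumps(event).encode("utf-8"),
)
print(response["Payload"].read().decode("utf-8"))
```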
To test querying the DynamoDB table from your own environment you need boto3; with it you can query by any attribute or key. I've attached an example Python script, ObelixNestedQuery.zip, that runs simple queries. Make sure to set the table variable to your own DynamoDB table name and to have the right credentials in place so you can query from your IDE (for VS Code, configuring them through the AWS CLI is one option, among others).
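For a flavour of what such queries look like (not necessarily what the attached script does), a minimal boto3 sketch; the table name, the Exp_ID value, and the nested "meta.operator" attribute are placeholders I made up:

```python
import boto3
from boto3.dynamodb.conditions import Key, Attr

# Point this at your own table
table = boto3.resource("dynamodb", region_name="eu-central-1").Table("<db-table-name>")

# Query by the Exp_ID key ...
resp = table.query(KeyConditionExpression=Key("Exp_ID").eq("EXP_001"))
print(resp["Items"])

# ... or scan on any other attribute, including nested metadata under "meta"
resp = table.scan(FilterExpression=Attr("meta.operator").eq("some-value"))
print(resp["Items"])
```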
This function should take the incoming metadata, match it against the Exp_ID that is manually entered in the metadata, and append all of the new information under a dictionary called "meta". This way we can separate data and metadata when querying, which is especially helpful when we don't know the names of the attributes that will be used in the queries.
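A rough sketch of that matching step, assuming Exp_ID is the table's partition key (the attached zip contains the actual implementation):

```python
import boto3

table = boto3.resource("dynamodb").Table("<db-table-name>")

def attach_metadata(rows):
    """Attach each metadata row to the existing item with the same Exp_ID.

    All metadata columns except Exp_ID are nested under a single "meta" map,
    so data attributes and metadata attributes stay separated in the table.
    """
    for row in rows:
        exp_id = row.pop("Exp_ID")  # key used to match data and metadata
        table.update_item(
            Key={"Exp_ID": exp_id},
            UpdateExpression="SET #m = :meta",
            ExpressionAttributeNames={"#m": "meta"},
            ExpressionAttributeValues={":meta": row},
        )
```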
- [x] CSV read test from S3
- [ ] Write test to DynamoDB