This repository has been archived by the owner on Aug 31, 2021. It is now read-only.

Job fails on Scaling Throttle from DDB #60

Open
maevesechrist opened this issue Apr 24, 2020 · 0 comments
Labels
bug Something isn't working

Comments

@maevesechrist

Running a job against an on-demand table with a GSI fails with this error:

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1192 in stage 1.0 failed 4 times, most recent failure: Lost task 1192.3 in stage 1.0 (TID 3214, ip-10-0-165-179.us-west-2.compute.internal, executor 176): com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is automatically scaling your index so please try again shortly. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ThrottlingException;

A throttling error caused by DynamoDB auto-scaling shouldn't fail the job outright; the connector should back off and retry the write until the index has finished scaling.
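As a rough illustration of the retry behavior being requested, here is a minimal exponential-backoff sketch. The exception class, helper name, and backoff parameters are all hypothetical stand-ins (the real AWS SDK type in the trace is com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException with error code ThrottlingException); this is not the connector's actual code.

```java
import java.util.concurrent.Callable;

public class RetryWithBackoff {
    // Stand-in for the AWS SDK's throttling exception (Status Code 400,
    // Error Code ThrottlingException) seen in the stack trace above.
    static class ThrottlingException extends RuntimeException {
        ThrottlingException(String msg) { super(msg); }
    }

    // Hypothetical helper: retry the call on throttling, with exponential
    // backoff, instead of letting the exception fail the Spark task.
    static <T> T callWithRetries(Callable<T> call, int maxAttempts, long baseDelayMs)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();
            } catch (ThrottlingException e) {
                if (attempt >= maxAttempts) {
                    throw e; // give up only after exhausting retries
                }
                // Back off base * 2^(attempt-1) ms while the GSI scales up.
                Thread.sleep(baseDelayMs << (attempt - 1));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate a write that is throttled twice while the index
        // scales, then succeeds on the third attempt.
        final int[] calls = {0};
        String result = callWithRetries(() -> {
            if (++calls[0] < 3) {
                throw new ThrottlingException(
                    "Throughput exceeds the current capacity");
            }
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

With this shape, a transient scaling throttle only fails the task after maxAttempts consecutive throttles, rather than on the first one.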
