This repository has been archived by the owner on Aug 31, 2021. It is now read-only.

Exceeding throughput when table has GSIs #61

Open
maevesechrist opened this issue Apr 27, 2020 · 5 comments

@maevesechrist

I have a table with two GSIs. The table and the GSIs all have a read and write throughput of 20,000. When I do not specify the throughput explicitly, writing to the table fails with the following error:

Caused by: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API
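
(For context, the write path that triggers this, plus the workaround of pinning the connector's rate limiter to an explicit rate, might look like the sketch below. The `dynamodb` implicits and the `throughput` write option are taken from the connector's README; the table name, input path, and chosen value are placeholders to verify against the version in use.)

```scala
import com.audienceproject.spark.dynamodb.implicits._
import org.apache.spark.sql.SparkSession

object WriteRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dynamodb-gsi-write")
      .getOrCreate()

    val df = spark.read.parquet("s3://my-bucket/input/") // hypothetical input

    // Without an explicit throughput, the connector derives its write rate
    // from the table's own provisioned capacity, which (per this issue) does
    // not account for the capacity of the GSIs.
    df.write.dynamodb("MyTable")

    // Possible workaround (assumption): cap the write rate explicitly so it
    // stays within the most constrained index's provisioned throughput.
    df.write
      .option("throughput", "20000")
      .dynamodb("MyTable")
  }
}
```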
@cosmincatalin
Contributor

Hi, isn’t this a duplicate of #60?

@maevesechrist
Author

This is using a provisioned throughput table. #60 is using an on-demand table.
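
(A quick way to confirm which billing mode a table is in, using the same AWS SDK for Java v1 that appears in the stack trace; the table name is a placeholder:)

```scala
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder

object CheckBillingMode extends App {
  val client = AmazonDynamoDBClientBuilder.defaultClient()
  val table  = client.describeTable("MyTable").getTable

  // BillingModeSummary can be null for tables created as PROVISIONED that
  // were never switched, so treat null as PROVISIONED.
  val mode = Option(table.getBillingModeSummary)
    .map(_.getBillingMode)
    .getOrElse("PROVISIONED")

  println(s"Billing mode: $mode") // PROVISIONED or PAY_PER_REQUEST
}
```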

@cosmincatalin
Contributor

👍

@jacobfi
Contributor

jacobfi commented Apr 28, 2020

Hi
Could you please paste a longer stack trace?
Thank you

@maevesechrist
Author

Caused by: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: 4HIJTQE5F2G4L7FIKG9EV6FI97VV4KQNSO5AEMVJF66Q9ASUAAJG)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:4230)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:4197)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeBatchWriteItem(AmazonDynamoDBClient.java:693)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.batchWriteItem(AmazonDynamoDBClient.java:660)
	at com.amazonaws.services.dynamodbv2.document.internal.BatchWriteItemImpl.doBatchWriteItem(BatchWriteItemImpl.java:113)
	at com.amazonaws.services.dynamodbv2.document.internal.BatchWriteItemImpl.batchWriteItemUnprocessed(BatchWriteItemImpl.java:65)
	at com.amazonaws.services.dynamodbv2.document.DynamoDB.batchWriteItemUnprocessed(DynamoDB.java:189)
	at com.audienceproject.spark.dynamodb.connector.TableConnector.handleBatchWriteResponse(TableConnector.scala:185)
	at com.audienceproject.spark.dynamodb.connector.TableConnector.putItems(TableConnector.scala:143)
	at com.audienceproject.spark.dynamodb.datasource.DynamoBatchWriter.flush(DynamoBatchWriter.scala:56)
	at com.audienceproject.spark.dynamodb.datasource.DynamoBatchWriter.write(DynamoBatchWriter.scala:43)
	at com.audienceproject.spark.dynamodb.datasource.DynamoBatchWriter.write(DynamoBatchWriter.scala:31)
	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:118)
	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:116)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:146)
	at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:67)
	at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:66)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:121)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
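
(The failing frame is the connector's retry of unprocessed batch items in TableConnector.handleBatchWriteResponse. For illustration only, a generic version of that retry loop with exponential backoff, sketched against the AWS SDK v1 document API and not the connector's actual implementation, might look like:)

```scala
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.document.{DynamoDB, TableWriteItems}

object BackoffWrite {
  private val dynamoDB = new DynamoDB(AmazonDynamoDBClientBuilder.defaultClient())

  /** Writes a batch and retries unprocessed items with exponential backoff. */
  def writeWithBackoff(items: TableWriteItems, maxRetries: Int = 8): Unit = {
    var outcome = dynamoDB.batchWriteItem(items)
    var attempt = 0
    // BatchWriteItem returns throttled requests as "unprocessed items" rather
    // than always failing outright; resubmitting them immediately tends to
    // re-trigger ProvisionedThroughputExceededException, hence the growing sleep.
    while (!outcome.getUnprocessedItems.isEmpty && attempt < maxRetries) {
      attempt += 1
      Thread.sleep(math.min(100L << attempt, 10000L))
      outcome = dynamoDB.batchWriteItemUnprocessed(outcome.getUnprocessedItems)
    }
    if (!outcome.getUnprocessedItems.isEmpty)
      throw new RuntimeException("Unprocessed items remain after retries")
  }
}
```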
