What is the bug?
When creating a skipping index with a VALUE_SET column, Flint throws a "Request Entity Too Large" error: the bulk write exceeds the 104857600-byte (100 MB) request payload limit.
[], URI [/_bulk?refresh=wait_for&timeout=1m], status line [HTTP/1.1 413 Request Entity Too Large]
{"Message":"Request size exceeded 104857600 bytes"}];
at org.opensearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:2208)
at org.opensearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1924)
at org.opensearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1877)
at org.opensearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1845)
at org.opensearch.client.RestHighLevelClient.bulk(RestHighLevelClient.java:364)
at org.opensearch.flint.core.storage.OpenSearchWriter.flush(OpenSearchWriter.java:61)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator.flush(WriterBasedJsonGenerator.java:967)
at org.apache.spark.sql.flint.json.FlintJacksonGenerator.flush(FlintJacksonGenerator.scala:258)
at org.apache.spark.sql.flint.FlintPartitionWriter.commit(FlintPartitionWriter.scala:70)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$1(WriteToDataSourceV2Exec.scala:453)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1550)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:480)
at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:381)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:138)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1516)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Suppressed: ParsingException[Failed to parse object: expecting field with name [error] but found [Message]]
at org.opensearch.common.xcontent.XContentParserUtils.ensureFieldName(XContentParserUtils.java:63)
at org.opensearch.OpenSearchException.failureFromXContent(OpenSearchException.java:642)
at org.opensearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:199)
at org.opensearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:2228)
at org.opensearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:2205)
... 20 more
Caused by: org.opensearch.client.ResponseException: method [POST], host [], URI [/_bulk?refresh=wait_for&timeout=1m], status line [HTTP/1.1 413 Request Entity Too Large]
{"Message":"Request size exceeded 104857600 bytes"}
at org.opensearch.client.RestClient.convertResponse(RestClient.java:375)
at org.opensearch.client.RestClient.performRequest(RestClient.java:345)
at org.opensearch.client.RestClient.performRequest(RestClient.java:320)
at org.opensearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1911)
... 19 more
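The failure happens in `OpenSearchWriter.flush`, which sends the whole buffered batch as a single `_bulk` request; a high-cardinality VALUE_SET column can inflate one batch past the 100 MB cap. A minimal sketch of a possible mitigation, assuming the writer can track buffered bytes and flush early (the class and method names below are hypothetical, not Flint's actual internals):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch: a size-capped bulk buffer that flushes before a batch
 * can exceed the server-side request payload limit.
 */
public class SizeCappedBulkBuffer {
    private final long maxBytes;                       // stay below the 104857600-byte cap
    private final List<String> pending = new ArrayList<>();
    private long pendingBytes = 0;
    private int flushCount = 0;                        // number of bulk requests issued

    public SizeCappedBulkBuffer(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    /** Buffer one JSON doc; flush first if it would push the batch over the cap. */
    public void add(String jsonDoc) {
        long docBytes = jsonDoc.getBytes(StandardCharsets.UTF_8).length;
        if (!pending.isEmpty() && pendingBytes + docBytes > maxBytes) {
            flush();
        }
        pending.add(jsonDoc);
        pendingBytes += docBytes;
    }

    /** Stand-in for the real _bulk call; here it only counts flushes. */
    public void flush() {
        if (pending.isEmpty()) return;
        flushCount++;
        pending.clear();
        pendingBytes = 0;
    }

    public int getFlushCount() {
        return flushCount;
    }

    public static void main(String[] args) {
        // Tiny 100-byte cap for demonstration; each doc below is 46 bytes,
        // so only two docs fit per batch and ten docs produce five requests.
        SizeCappedBulkBuffer buf = new SizeCappedBulkBuffer(100);
        for (int i = 0; i < 10; i++) {
            buf.add("{\"value_set\":\"" + "x".repeat(30) + "\"}");
        }
        buf.flush(); // final partial batch
        System.out.println("bulk requests: " + buf.getFlushCount());
    }
}
```

The same effect could likely be achieved without code changes by lowering the writer's batch size so each `_bulk` payload stays well under the limit, though the exact configuration knob depends on the Flint version in use.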
penghuo changed the title from "[BUG] Value_Set index too large, Flint Request size exceeded 104857600 bytes" to "[BUG] Value_Set doc too large, Flint Request size exceeded 104857600 bytes" on Dec 18, 2023.