Hi
I'm trying to write Avro messages to Parquet files on GCS. These Parquet files should then be queried by BigQuery.
At first we didn't notice any problem: Spark reads the same files without any issue and everything works perfectly (a sketch of this kind of write path is shown below).
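For reference, here is a minimal sketch of one way to do this kind of write, assuming parquet-avro's AvroParquetWriter (the schema, record, and bucket path are hypothetical placeholders, and the GCS connector must be on the classpath for gs:// paths):

```scala
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericRecord}
import org.apache.hadoop.fs.Path
import org.apache.parquet.avro.AvroParquetWriter
import org.apache.parquet.hadoop.metadata.CompressionCodecName

// Hypothetical Avro schema, for illustration only.
val schema = new Schema.Parser().parse(
  """{"type":"record","name":"Event","fields":[{"name":"id","type":"string"}]}""")

// Write Avro records to a Parquet file on GCS (hypothetical path).
val writer = AvroParquetWriter
  .builder[GenericRecord](new Path("gs://my-bucket/events/part-00000.parquet"))
  .withSchema(schema)
  .withCompressionCodec(CompressionCodecName.SNAPPY)
  .build()

val record = new GenericData.Record(schema)
record.put("id", "evt-1")
writer.write(record)
// close() flushes the last row group and writes the footer,
// including the per-page and per-column value counts.
writer.close()
```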
Querying through BigQuery, however, raises an error like this:
Error while reading table: , error message: Read less values than expected: Actual: 29333, Expected: 33827. Row group: 0, Column: , File:
After investigating with parquet-tools, I found that the Parquet metadata records the total number of unique values for each column. For example, parquet-tools shows:
page 0: DLE:BIT_PACKED RLE:BIT_PACKED [more]... CRC:[PAGE CORRUPT] VC:547
So the VC value indicates that the total number of unique values in the file is 547.
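To cross-check what parquet-tools reports, the value counts declared in the footer can also be read programmatically with parquet-mr's ParquetFileReader; a minimal sketch, with a hypothetical file path:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.hadoop.ParquetFileReader
import org.apache.parquet.hadoop.util.HadoopInputFile

// Open the file and print the value counts declared in the footer
// for every column chunk of every row group (hypothetical path).
val input = HadoopInputFile.fromPath(new Path("/tmp/part-00000.parquet"), new Configuration())
val reader = ParquetFileReader.open(input)
try {
  reader.getFooter.getBlocks.forEach { block =>
    println(s"row group: rows=${block.getRowCount}")
    block.getColumns.forEach { col =>
      // For a flat (non-repeated) column, the declared value count
      // should match the row group's row count.
      println(s"  ${col.getPath}: declared values=${col.getValueCount}")
    }
  }
} finally {
  reader.close()
}
```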
But when I run a Spark SQL query like SELECT COUNT(DISTINCT column) FROM ... I get 421, which means the number in the metadata is incorrect.
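For completeness, here is the same distinct-count check through the DataFrame API, assuming a SparkSession named spark and a hypothetical path and column name:

```scala
import org.apache.spark.sql.functions.countDistinct

// Equivalent distinct-count check via the DataFrame API (hypothetical path/column).
val df = spark.read.parquet("gs://my-bucket/events/")
df.select(countDistinct("column")).show()
```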
So what is not a problem for Spark to read is a blocking problem for BigQuery, because it relies on these values and finds them incorrect.
Do you have an idea of what could cause this?
Is there something that can be configured when writing the Parquet files?
Thanks