Is your feature request related to a problem?
Currently, any batch or streaming job on a Spark table incurs an expensive S3 listing to generate the input file list. A Hive table keeps its own partition information in the catalog, but MSCK is required to refresh this information manually and on a regular basis.
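For reference, the manual refresh mentioned above is the metastore repair command, which can be issued from Spark as sketched below (the table name is illustrative):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Re-scan the table's storage location and register any partitions missing
// from the Hive metastore; this must be run manually and on a schedule.
spark.sql("MSCK REPAIR TABLE my_hive_table")
```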
What solution would you like?
Actually, only the S3 listing for the skipping index is unavoidable. Any direct query or streaming refresh of a covering index or materialized view doesn't need to repeat the listing (unless strong consistency with the latest source files is required).
In this case, the source file list seen so far can be found in the file path column of the Flint skipping index. The challenge is just to figure out whether we can reconstruct FileStatus for Spark from that column.
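To make the idea concrete, below is a minimal Scala sketch of what reconstructing FileStatus from the skipping index could look like. The table name, the `file_path` column name, and the DataFrame access path are assumptions for illustration; since the skipping index only records paths, length, block size, and modification time are filled with placeholders and would need to be persisted in the index or fetched separately in a real implementation.

```scala
import org.apache.hadoop.fs.{FileStatus, Path}
import org.apache.spark.sql.SparkSession

object SkippingIndexFileList {

  /** Rebuild FileStatus objects from the file paths already recorded by the
    * skipping index, avoiding another S3 listing. */
  def reconstructFileStatuses(spark: SparkSession, skippingIndexTable: String): Seq[FileStatus] = {
    // Collect the distinct source file paths seen so far by the skipping index.
    // Assumption: the index is exposed as a table with a `file_path` column.
    val paths = spark.table(skippingIndexTable)
      .select("file_path")
      .distinct()
      .collect()
      .map(_.getString(0))

    // Length, block size and modification time are unknown from the index alone,
    // so placeholder values are used here.
    paths.map { p =>
      new FileStatus(
        /* length           */ 0L,
        /* isdir            */ false,
        /* blockReplication */ 1,
        /* blocksize        */ 128L * 1024 * 1024,
        /* modificationTime */ 0L,
        /* path             */ new Path(p))
    }.toSeq
  }
}
```

If placeholder metadata turns out to be insufficient (for example, for split calculation), an alternative would be to extend the skipping index schema so that file size and modification time are also persisted at refresh time.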