What is the bug?

In OpenSearch, each Flint index is stored as a single OpenSearch index, which is subject to a maximum document count of `Integer.MAX_VALUE` (2,147,483,647). This is particularly problematic for Flint covering indexes, which perform less aggregation than Flint skipping indexes and materialized views and can therefore exceed the limit easily. In a test with the `http_logs` dataset, a single OpenSearch index could hold only about 133 GB of data before hitting the limit, potentially leading to data loss or index failures.

How can one reproduce the bug?

Create a covering index from a large dataset.

What is the expected behavior?

Flint indices should be able to handle larger datasets without exceeding the maximum document count of a single OpenSearch index, or a mechanism should exist to split the data across multiple OpenSearch indices so that this limit is never hit.
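The ~133 GB ceiling follows directly from the document-count limit and the workload's average document size. A minimal sketch of that back-of-the-envelope calculation (the ~66-byte average document size is derived from the numbers reported above, not measured independently):

```python
# Estimate why a single OpenSearch index tops out near 133 GB for this
# workload, assuming the per-index document ceiling is Integer.MAX_VALUE
# as described in the report.

MAX_DOCS = 2**31 - 1  # Java Integer.MAX_VALUE = 2,147,483,647

# Observed capacity with the http_logs dataset before the limit was hit.
observed_capacity_bytes = 133 * 1024**3

# Implied average size of one indexed document for this dataset.
avg_doc_bytes = observed_capacity_bytes / MAX_DOCS

def max_index_bytes(avg_doc_size: float) -> float:
    """Upper bound on index size before the document-count limit is reached."""
    return MAX_DOCS * avg_doc_size

print(f"implied average doc size: {avg_doc_bytes:.1f} bytes")
print(f"capacity at that size:    {max_index_bytes(avg_doc_bytes) / 1024**3:.0f} GiB")
```

With ~66-byte documents, the document-count limit, not disk, is what caps the index, which is why a covering index (roughly one document per source row) hits it so much sooner than the more aggregated skipping indexes.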