Is your feature request related to a problem?
Currently there is only limited support for push down optimization in the Flint data source. For example, the value set column in a skipping index is actually an array and should use ARRAY_CONTAINS in the filtering condition. Because push down is not supported, the index query has to use = instead.
What solution would you like?
Support push down for more operators in the filtering condition
[TBD] Support array fields via field metadata
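To illustrate what the requested pushdown could emit, here is a minimal, hypothetical sketch that translates filter predicates into OpenSearch query DSL. The predicate classes and the field name are illustrative, not the actual Flint implementation; the key point is that an OpenSearch term query already matches any element of an array field, so an ARRAY_CONTAINS predicate on a value-set column can push down to the same term clause as an equality predicate.

```python
# Hypothetical sketch of filter pushdown translation; class and field
# names are illustrative, not the actual Flint implementation.
from dataclasses import dataclass


@dataclass
class EqualTo:
    column: str
    value: object


@dataclass
class ArrayContains:
    column: str
    value: object


def to_opensearch_dsl(predicate):
    """Translate a filter predicate into an OpenSearch query clause.

    For a value-set skipping index the indexed column is an array, so
    ARRAY_CONTAINS maps naturally to a `term` query, because OpenSearch
    matches a term against any element of an array field.
    """
    if isinstance(predicate, (EqualTo, ArrayContains)):
        return {"term": {predicate.column: predicate.value}}
    raise NotImplementedError(f"no pushdown for {type(predicate).__name__}")


# Both predicates push down to the same term clause.
print(to_opensearch_dsl(ArrayContains("file_path_set", "s3://bucket/a.parquet")))
# → {'term': {'file_path_set': 's3://bucket/a.parquet'}}
```

Without this translation, the query planner can only fall back to = on the array column, which is the limitation described above.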
This requirement is part of integrating OpenSearch as a data source in Spark, which includes:
OpenSearch data types
OpenSearch scalar and aggregate functions
OpenSearch DSL pushdown capabilities
One approach is to leverage the OpenSearch SQL plugin by integrating OpenSearch as a JDBC connector via the OpenSearch SQL JDBC driver. If this proves to be a viable long-term solution, future enhancements, such as high-performance communication via Apache Arrow, can be encapsulated within the JDBC driver, keeping the implementation centralized and efficient.
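If the JDBC route were taken, Spark could read OpenSearch through its generic JDBC data source. The sketch below only assembles the connection options; the URL scheme and driver class follow the OpenSearch SQL JDBC driver, while the host, port, and index name are placeholders. The actual read requires a live SparkSession with the driver jar on the classpath, so it is shown as a comment.

```python
# Illustrative sketch of wiring the OpenSearch SQL JDBC driver into
# Spark's generic JDBC data source. Host, port, and index are placeholders.

def opensearch_jdbc_options(host, port, table):
    """Build the option map for spark.read.format("jdbc")."""
    return {
        "url": f"jdbc:opensearch://{host}:{port}",
        "driver": "org.opensearch.jdbc.Driver",
        "dbtable": table,  # an OpenSearch index, exposed as a table
    }


options = opensearch_jdbc_options("localhost", 9200, "flint_skipping_index")

# With a live SparkSession and the driver jar on the classpath:
# df = spark.read.format("jdbc").options(**options).load()

print(options["url"])  # → jdbc:opensearch://localhost:9200
```

Keeping the connection details behind one option-building step like this is also where a future transport change (e.g. Arrow-based communication inside the driver) would stay invisible to Spark-side code.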