From 69a0c8121df99c4a4242314194626dd005fcb440 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]"
Date: Thu, 5 Oct 2023 23:32:39 +0000
Subject: [PATCH] Add documentation (#2229)

Signed-off-by: Vamsi Manohar
(cherry picked from commit 45da40f6bec9eb46056b557d1fc557265e5790fa)
Signed-off-by: github-actions[bot]
---
 docs/user/interfaces/asyncqueryinterface.rst  | 22 ++++++++++------
 .../ppl/admin/connectors/s3glue_connector.rst | 26 +++++++++++++------
 2 files changed, 32 insertions(+), 16 deletions(-)

diff --git a/docs/user/interfaces/asyncqueryinterface.rst b/docs/user/interfaces/asyncqueryinterface.rst
index afcade2303..a9fc77264c 100644
--- a/docs/user/interfaces/asyncqueryinterface.rst
+++ b/docs/user/interfaces/asyncqueryinterface.rst
@@ -14,23 +14,29 @@ Async Query Interface Endpoints
 Introduction
 ============
 
-For supporting `S3Glue <../ppl/admin/connectors/s3glue_connector.rst>`_ and Cloudwatch datasources connectors, we have introduced a new execution engine on top of Spark.
+To support the `S3Glue <../ppl/admin/connectors/s3glue_connector.rst>`_ datasource connector, we have introduced a new execution engine on top of Spark.
 All queries to be executed on the Spark execution engine can only be submitted via the Async Query APIs. The sections below list the new APIs introduced.
 
-Configuration required for Async Query APIs
-======================================
-Currently, we only support AWS emr serverless as SPARK execution engine. The details of execution engine should be configured under
-``plugins.query.executionengine.spark.config`` cluster setting. The value should be a stringified json comprising of ``applicationId``, ``executionRoleARN``,``region``.
+Required Spark Execution Engine Config for Async Query APIs
+===========================================================
+Currently, we only support AWS EMR Serverless as the Spark execution engine. The details of the execution engine should be configured under
+the ``plugins.query.executionengine.spark.config`` cluster setting. The value should be a stringified JSON comprising ``applicationId``, ``executionRoleARN``, ``region`` and ``sparkSubmitParameter``.
 Sample Setting Value ::
 
-    plugins.query.executionengine.spark.config: '{"applicationId":"xxxxx", "executionRoleARN":"arn:aws:iam::***********:role/emr-job-execution-role","region":"eu-west-1"}'
-
-
+    plugins.query.executionengine.spark.config:
+    '{  "applicationId": "xxxxx",
+        "executionRoleARN": "arn:aws:iam::***********:role/emr-job-execution-role",
+        "region": "eu-west-1",
+        "sparkSubmitParameter": "--conf spark.dynamicAllocation.enabled=false"
+    }'
 If this setting is not configured during bootstrap, the Async Query APIs will be disabled, and a cluster restart is required to enable them again.
 We use the default AWS credentials chain to make calls to the EMR Serverless application; make sure the default credentials
 have pass-role permissions for the emr-job-execution-role mentioned in the engine configuration.
 
+* ``applicationId``, ``executionRoleARN`` and ``region`` are required parameters.
+* ``sparkSubmitParameter`` is an optional parameter. It can take the form ``--conf A=1 --conf B=2 ...``.
+
 Async Query Creation API
 ======================================
diff --git a/docs/user/ppl/admin/connectors/s3glue_connector.rst b/docs/user/ppl/admin/connectors/s3glue_connector.rst
index ef27cf572a..9f5e1b4425 100644
--- a/docs/user/ppl/admin/connectors/s3glue_connector.rst
+++ b/docs/user/ppl/admin/connectors/s3glue_connector.rst
@@ -20,10 +20,11 @@ This page covers s3Glue datasource configuration and also how to query and s3Glu
 Required resources for s3 Glue Connector
 ===================================
 
-* S3: This is where the data lies.
-* Spark Execution Engine: Query Execution happens on spark.
-* Glue Metadata store: Glue takes care of table metadata.
-* Opensearch: Index for s3 data lies in opensearch and also acts as temporary buffer for query results.
+* ``EMRServerless Spark Execution Engine Config Setting``: Since s3Glue queries are executed on the Spark execution engine, this configuration is required.
+  More details: `ExecutionEngine Config <../../../interfaces/asyncqueryinterface.rst#id2>`_
+* ``S3``: This is where the data lies.
+* ``Glue`` Metadata store: Glue takes care of table metadata.
+* ``Opensearch IndexStore``: The index for s3 data lies in OpenSearch, which also acts as a temporary buffer for query results.
 
 We currently only support EMR Serverless as the Spark execution engine and Glue as the metadata store. We will add more support in the future.
 
@@ -31,6 +32,7 @@ Glue Connector Properties in DataSource Configuration
 ========================================================
 
 Glue Connector Properties.
 
+* ``resultIndex`` is a new parameter specific to the glue connector. It stores the results of queries executed on the datasource. If unavailable, it defaults to ``.query_execution_result``.
 * ``glue.auth.type`` [Required]
     * This parameter provides the authentication type information required for the execution engine to connect to glue.
     * The S3 Glue connector currently only supports ``iam_role`` authentication, and the below parameters are required.
@@ -71,11 +73,19 @@ Glue datasource configuration::
         "glue.indexstore.opensearch.uri": "http://adsasdf.amazonopensearch.com:9200",
         "glue.indexstore.opensearch.auth" :"awssigv4",
         "glue.indexstore.opensearch.auth.region" :"awssigv4",
-    }
+    },
+    "resultIndex": "query_execution_result"
 }]
 
-Sample s3Glue datasource queries
-================================
-
+Sample s3Glue datasource query APIs
+===================================
+
+Sample Queries
+
+* Select Query: ``select * from mys3.default.http_logs limit 1``
+* Create Covering Index Query: ``create index clientip_year on my_glue.default.http_logs (clientip, year) WITH (auto_refresh=true)``
+* Create Skipping Index Query: ``create skipping index on mys3.default.http_logs (status VALUE_SET)``
+These queries work only on top of async queries. Documentation: `Async Query APIs <../../../interfaces/asyncqueryinterface.rst>`_
+Documentation for Index Queries: https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md
\ No newline at end of file
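In practice, the Glue datasource JSON shown in the patch is registered through the SQL plugin's datasource REST API. The following is only a minimal sketch: the ``_plugins/_query/_datasources`` endpoint, the ``my_glue`` name, the ``s3glue`` connector value, the local cluster URL, and the region value are illustrative assumptions rather than content of this patch, and the remaining ``glue.auth.*`` properties required for ``iam_role`` authentication are omitted ::

    # Hypothetical registration call; names, URLs and property values are placeholders.
    # "resultIndex" sits alongside "properties", matching the structure added in the patch.
    curl -XPOST "http://localhost:9200/_plugins/_query/_datasources" \
      -H 'Content-Type: application/json' \
      -d '{
            "name": "my_glue",
            "connector": "s3glue",
            "properties": {
                "glue.auth.type": "iam_role",
                "glue.indexstore.opensearch.uri": "http://localhost:9200",
                "glue.indexstore.opensearch.auth": "awssigv4",
                "glue.indexstore.opensearch.auth.region": "eu-west-1"
            },
            "resultIndex": "query_execution_result"
          }'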
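The sample queries at the end of the patch are submitted through the async query creation API rather than the plain SQL endpoint. As a rough usage sketch, assuming the ``_plugins/_async_query`` endpoint and a request body of ``datasource``, ``lang`` and ``query`` fields (the exact contract is defined in asyncqueryinterface.rst, which this excerpt truncates, so treat these as assumptions) ::

    # Submit the sample select query above as an async query against the "mys3" datasource.
    curl -XPOST "http://localhost:9200/_plugins/_async_query" \
      -H 'Content-Type: application/json' \
      -d '{
            "datasource": "mys3",
            "lang": "sql",
            "query": "select * from mys3.default.http_logs limit 1"
          }'

    # The response carries a query id, which is then polled through the async query result API
    # described in the same asyncqueryinterface.rst document.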