diff --git a/website/docs/docs/build/sl-getting-started.md b/website/docs/docs/build/sl-getting-started.md
index c638470d0ff..c0bf59ae0c2 100644
--- a/website/docs/docs/build/sl-getting-started.md
+++ b/website/docs/docs/build/sl-getting-started.md
@@ -5,7 +5,7 @@ description: "Learn how to create your first semantic model and metric."
sidebar_label: Get started with MetricFlow
tags: [Metrics, Semantic Layer]
meta:
- api_name: dbt Semantic Layer API
+ api_name: dbt Semantic Layer APIs
---
import InstallMetricFlow from '/snippets/_sl-install-metricflow.md';
@@ -85,7 +85,7 @@ You can query your metrics in a JDBC-enabled tool or use existing first-class in
You must have a dbt Cloud Team or Enterprise [multi-tenant](/docs/cloud/about-cloud/regions-ip-addresses) deployment, hosted in North America. (Additional region support coming soon)
-- To learn how to use the JDBC API and what tools you can query it with, refer to the {frontMatter.meta.api_name}.
+- To learn how to use the JDBC or GraphQL API and what tools you can query it with, refer to the {frontMatter.meta.api_name}.
* To authenticate, you need to [generate a service token](/docs/dbt-cloud-apis/service-tokens) with Semantic Layer Only and Metadata Only permissions.
* Refer to the [SQL query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata) to query metrics using the API.
diff --git a/website/docs/docs/dbt-cloud-apis/sl-api-overview.md b/website/docs/docs/dbt-cloud-apis/sl-api-overview.md
index efe54cbd833..42416765904 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-api-overview.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-api-overview.md
@@ -1,7 +1,7 @@
---
-title: "Semantic Layer API"
+title: "Semantic Layer APIs"
id: sl-api-overview
-description: "Integrate and query using the Semantic Layer API."
+description: "Integrate and query metrics and dimensions in downstream tools using the Semantic Layer APIs"
tags: [Semantic Layer, API]
hide_table_of_contents: true
---
@@ -36,9 +36,8 @@ product="dbt Semantic Layer"
plan="dbt Cloud Team and Enterprise"
instance="hosted in North America"
/>
-
-
+
-
"}
```
-Each GQL request also comes with a dbt Cloud environmentId. The API uses both the service token in the header and environmentId for authentication.
+Each GQL request also requires a dbt Cloud `environmentId`. The API uses both the service token in the header and environmentId for authentication.
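
As a sketch, the pieces of a request can be assembled like this in Python. The host URL and helper name are placeholders, not part of the API; use the GraphQL API host from your dbt Cloud Semantic Layer settings:

```python
def build_sl_request(environment_id: int, token: str, gql: str) -> dict:
    """Assemble keyword arguments for a GraphQL POST (e.g. requests.post).

    The URL below is a placeholder. The service token goes in the
    Authorization header, and the environmentId travels with the query
    itself (here passed as a GraphQL variable).
    """
    return {
        "url": "https://example.semantic-layer.host/api/graphql",  # placeholder
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"query": gql, "variables": {"environmentId": environment_id}},
    }
```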
+
+### Metadata calls
+
+**Fetch data platform dialect**
+
+In some cases, it may be useful for your application to know the dialect or data platform used internally for the dbt Semantic Layer connection (for example, if you're building `where` filters from a user interface rather than user-inputted SQL).
-### Metric metadata calls
+The GraphQL API has an easy way to fetch this with the following query:
-Use the following example calls to provide you with an idea of the types of commands you can use:
+```graphql
+{
+ environmentInfo(environmentId: BigInt!) {
+ dialect
+ }
+}
+```
**Fetch available metrics**
```graphql
-metrics(environmentId: Int!): [Metric!]!
+metrics(environmentId: BigInt!): [Metric!]!
```
**Fetch available dimensions for metrics**
```graphql
dimensions(
-environmentId: Int!
-metrics: [String!]!
+ environmentId: BigInt!
+ metrics: [MetricInput!]!
): [Dimension!]!
```
-**Fetch available time granularities given metrics**
+**Fetch available granularities given metrics**
+
+Note: This call for `queryableGranularities` returns only the queryable granularities for metric time, the primary time dimension across all selected metrics.
```graphql
queryableGranularities(
-environmentId: Int!
-metrics: [String!]!
+ environmentId: BigInt!
+ metrics: [MetricInput!]!
): [TimeGranularity!]!
```
-**Fetch available metrics given a set of a dimensions**
+You can also get queryable granularities for all other dimensions using the `dimensions` call:
+
+```graphql
+{
+ dimensions(environmentId: BigInt!, metrics:[{name:"order_total"}]) {
+ name
+ queryableGranularities # --> ["DAY", "WEEK", "MONTH", "QUARTER", "YEAR"]
+ }
+}
+```
+
+You can also optionally access it from the metrics endpoint:
+
+```graphql
+{
+ metrics(environmentId: BigInt!) {
+ name
+ dimensions {
+ name
+ queryableGranularities
+ }
+ }
+}
+```
+
+**Fetch measures**
+
+```graphql
+{
+ measures(environmentId: BigInt!, metrics: [{name:"order_total"}]) {
+ name
+ aggTimeDimension
+ }
+}
+```
+
+`aggTimeDimension` tells you the name of the dimension that maps to `metric_time` for a given measure. You can also query `measures` from the `metrics` endpoint, which allows you to see what dimensions map to `metric_time` for a given metric:
+
+```graphql
+{
+ metrics(environmentId: BigInt!) {
+ measures {
+ name
+ aggTimeDimension
+ }
+ }
+}
+```
+
+**Fetch available metrics given a set of dimensions**
```graphql
metricsForDimensions(
-environmentId: Int!
-dimensions: [String!]!
+ environmentId: BigInt!
+ dimensions: [GroupByInput!]!
): [Metric!]!
```
-**Fetch dimension values for metrics and a given dimension**
+**Create Dimension Values query**
```graphql
-dimensionValues(
-environmentId: Int!
-metrics: [String!]!
-dimension: String!
-```
-### Metric value query parameters
+mutation createDimensionValuesQuery(
+ environmentId: BigInt!
+ metrics: [MetricInput!]
+ groupBy: [GroupByInput!]!
+): CreateDimensionValuesQueryResult!
+
+```
-The mutation is `createQuery`. The parameters are as follows:
+**Create Metric query**
```graphql
createQuery(
-environmentId: Int!
-metrics: [String!]!
-dimensions: [String!] = null
-limit: Int = null
-startTime: String = null
-endTime: String = null
-where: String = null
-order: [String!] = null
-): String
+ environmentId: BigInt!
+ metrics: [MetricInput!]!
+ groupBy: [GroupByInput!] = null
+ limit: Int = null
+ where: [WhereInput!] = null
+  orderBy: [OrderByInput!] = null
+): CreateQueryResult
+```
+
+```graphql
+MetricInput {
+ name: String!
+}
+
+GroupByInput {
+ name: String!
+ grain: TimeGranularity = null
+}
+
+WhereInput {
+ sql: String!
+}
+
+OrderByInput { # -- pass exactly one of metric or groupBy
+ metric: MetricInput = null
+ groupBy: GroupByInput = null
+ descending: Boolean! = false
+}
+```
+
+**Fetch query result**
+
+```graphql
+query(
+ environmentId: BigInt!
+ queryId: String!
+): QueryResult!
+```
+
+**Metric Types**
+
+```graphql
+Metric {
+ name: String!
+ description: String
+ type: MetricType!
+ typeParams: MetricTypeParams!
+ filter: WhereFilter
+ dimensions: [Dimension!]!
+ queryableGranularities: [TimeGranularity!]!
+}
+```
+
+```
+MetricType = [SIMPLE, RATIO, CUMULATIVE, DERIVED]
+```
+
+**Metric Type parameters**
+
+```graphql
+MetricTypeParams {
+ measure: MetricInputMeasure
+ inputMeasures: [MetricInputMeasure!]!
+ numerator: MetricInput
+ denominator: MetricInput
+ expr: String
+ window: MetricTimeWindow
+ grainToDate: TimeGranularity
+ metrics: [MetricInput!]
+}
+```
+
+
+**Dimension Types**
+
+```graphql
+Dimension {
+ name: String!
+ description: String
+ type: DimensionType!
+ typeParams: DimensionTypeParams
+ isPartition: Boolean!
+ expr: String
+ queryableGranularities: [TimeGranularity!]!
+}
+```
+
+```
+DimensionType = [CATEGORICAL, TIME]
+```
+
+### Create Query examples
+
+The following section provides query examples for the GraphQL API, such as how to query metrics, dimensions, where filters, and more.
+
+**Query two metrics grouped by time**
+
+```graphql
+mutation {
+ createQuery(
+ environmentId: BigInt!
+ metrics: [{name: "food_order_amount"}]
+    groupBy: [{name: "metric_time"}, {name: "customer__customer_type"}]
+ ) {
+ queryId
+ }
+}
+```
+
+**Query with a time grain**
+
+```graphql
+mutation {
+ createQuery(
+ environmentId: BigInt!
+ metrics: [{name: "order_total"}]
+ groupBy: [{name: "metric_time", grain: "month"}]
+ ) {
+ queryId
+ }
+}
+```
+
+Note that when a time grain is applied to a time dimension, the output column name is always the dimension name appended with a double underscore and the granularity level: `{time_dimension_name}__{DAY|WEEK|MONTH|QUARTER|YEAR}`. Even if no granularity is specified, a granularity is still appended and defaults to the lowest available (usually daily for most data sources). We encourage you to specify a granularity when using time dimensions so that the output data has no unexpected results.
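
As an illustration only (this helper is not part of the API), the naming convention can be expressed as:

```python
def time_dimension_column(name: str, grain: str = "day") -> str:
    """Illustrative helper: output columns for time dimensions are named
    {time_dimension_name}__{grain}, defaulting to the lowest grain."""
    return f"{name}__{grain.lower()}"
```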
+
+**Query two metrics with a categorical dimension**
+
+```graphql
+mutation {
+ createQuery(
+ environmentId: BigInt!
+ metrics: [{name: "food_order_amount"}, {name: "order_gross_profit"}]
+    groupBy: [{name: "metric_time", grain: "month"}, {name: "customer__customer_type"}]
+ ) {
+ queryId
+ }
+}
+```
+
+**Query with a where filter**
+
+The `where` filter takes a list argument (or a string for a single input). Depending on the object you are filtering on, there are a couple of parameters:
+
+- `Dimension()` — Used for any categorical or time dimension. If used for a time dimension, granularity is required. For example, `Dimension('metric_time').grain('week')` or `Dimension('customer__country')`.
+
+- `Entity()` — Used for entities like primary and foreign keys, such as `Entity('order_id')`.
+
+Note: If you prefer a more strongly typed `where` clause, you can optionally use `TimeDimension()` to separate out categorical dimensions from time ones. The `TimeDimension` input takes the time dimension name and also requires granularity. For example, `TimeDimension('metric_time', 'MONTH')`.
+
+```graphql
+mutation {
+ createQuery(
+ environmentId: BigInt!
+ metrics:[{name: "order_total"}]
+ groupBy:[{name: "customer__customer_type"}, {name: "metric_time", grain: "month"}]
+ where:[{sql: "{{ Dimension('customer__customer_type') }} = 'new'"}, {sql:"{{ Dimension('metric_time').grain('month') }} > '2022-10-01'"}]
+ ) {
+ queryId
+ }
+}
+```
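
If you're assembling filters programmatically, a minimal sketch (the helper name is hypothetical) is to wrap each templated expression as a `WhereInput` object:

```python
def where_inputs(*conditions: str) -> list:
    """Hypothetical helper: wrap raw filter expressions as WhereInput
    objects for createQuery. The templated Dimension()/TimeDimension()
    calls are resolved by the Semantic Layer, not by Python."""
    return [{"sql": sql} for sql in conditions]

filters = where_inputs(
    "{{ Dimension('customer__customer_type') }} = 'new'",
    "{{ TimeDimension('metric_time', 'MONTH') }} > '2022-10-01'",
)
```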
+
+**Query with Order**
+
+```graphql
+mutation {
+ createQuery(
+ environmentId: BigInt!
+ metrics: [{name: "order_total"}]
+ groupBy: [{name: "metric_time", grain: "month"}]
+ orderBy: [{metric: {name: "order_total"}}, {groupBy: {name: "metric_time", grain: "month"}, descending:true}]
+ ) {
+ queryId
+ }
+}
+```
+
+
+**Query with Limit**
+
+```graphql
+mutation {
+ createQuery(
+ environmentId: BigInt!
+ metrics: [{name:"food_order_amount"}, {name: "order_gross_profit"}]
+    groupBy: [{name: "metric_time", grain: "month"}, {name: "customer__customer_type"}]
+ limit: 10
+ ) {
+ queryId
+ }
+}
+```
+
+**Query with Explain**
+
+This takes the same inputs as the `createQuery` mutation.
+
+```graphql
+mutation {
+ compileSql(
+ environmentId: BigInt!
+    metrics: [{name: "food_order_amount"}, {name: "order_gross_profit"}]
+    groupBy: [{name: "metric_time", grain: "month"}, {name: "customer__customer_type"}]
+ ) {
+ sql
+ }
+}
+```
+
+### Output format and pagination
+
+**Output format**
+
+By default, the output is in Arrow format. You can switch to JSON format using the following parameter. However, due to performance limitations, we recommend using the JSON parameter only for testing and validation. The JSON received is a base64 encoded string. To access it, decode it with a base64 decoder. The JSON is created from pandas, which means you can convert it back to a dataframe using `pandas.read_json(json, orient="table")`. Or you can work with the data directly using `json["data"]` and find the table schema using `json["schema"]["fields"]`. Alternatively, you can pass `encoded:false` to the `jsonResult` field to get a raw JSON string directly.
+
+
+```graphql
+{
+  query(environmentId: BigInt!, queryId: String!, pageNum: Int! = 1) {
+ sql
+ status
+ error
+ totalPages
+ arrowResult
+ jsonResult(orient: PandasJsonOrient! = TABLE, encoded: Boolean! = true)
+ }
+}
```
+The results default to the `table` orientation, but you can change it to any [pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_json.html)-supported orient value.
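
For example, a base64-encoded `jsonResult` can be decoded back into a dataframe like this (a sketch assuming the default `TABLE` orient):

```python
import base64
import io

import pandas as pd

def decode_json_result(encoded: str) -> pd.DataFrame:
    """Decode a base64-encoded jsonResult payload into a DataFrame,
    assuming the default TABLE orient."""
    raw = base64.b64decode(encoded).decode("utf-8")
    return pd.read_json(io.StringIO(raw), orient="table")
```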
+
+**Pagination**
+
+By default, we return 1024 rows per page. If your result set exceeds this, request subsequent pages by incrementing the `pageNum` option.
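
A pagination loop can be sketched as follows, where `run_query` stands in for whatever function issues the GraphQL `query` call for a given `pageNum`:

```python
def collect_pages(run_query, total_pages: int) -> list:
    """Fetch every page of a result set by walking pageNum from 1 to
    totalPages (each page holds up to 1024 rows)."""
    return [run_query(page_num) for page_num in range(1, total_pages + 1)]
```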
+
+### Run a Python query
+
+The `arrowResult` in the GraphQL query response is a byte dump, which isn't visually useful. You can convert this byte data into an Arrow table using any Arrow-supported language. Refer to the following Python example explaining how to query and decode the arrow result:
+
+
+```python
+import base64
+import pyarrow as pa
+import requests
+
+headers = {"Authorization":"Bearer "}
+query_result_request = """
+{
+ query(environmentId: 70, queryId: "12345678") {
+ sql
+ status
+ error
+ arrowResult
+ }
+}
+"""
+
+gql_response = requests.post(
+ "http://localhost:8000/graphql",
+ json={"query": query_result_request},
+ headers=headers,
+)
+
+"""
+gql_response.json() =>
+{
+ "data": {
+ "query": {
+ "sql": "SELECT\n ordered_at AS metric_time__day\n , SUM(order_total) AS order_total\nFROM semantic_layer.orders orders_src_1\nGROUP BY\n ordered_at",
+ "status": "SUCCESSFUL",
+ "error": null,
+ "arrowResult": "arrow-byte-data"
+ }
+ }
+}
+"""
+
+def to_arrow_table(byte_string: str) -> pa.Table:
+ """Get a raw base64 string and convert to an Arrow Table."""
+    with pa.ipc.open_stream(base64.b64decode(byte_string)) as reader:
+ return pa.Table.from_batches(reader, reader.schema)
+
+
+arrow_table = to_arrow_table(gql_response.json()["data"]["query"]["arrowResult"])
+
+# Perform whatever functionality is available, like convert to a pandas table.
+print(arrow_table.to_pandas())
+"""
+order_total ordered_at
+ 3 2023-08-07
+ 112 2023-08-08
+ 12 2023-08-09
+ 5123 2023-08-10
+"""
+```
\ No newline at end of file
diff --git a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
index fa6fe2c3a1b..b084dedc305 100644
--- a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
+++ b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
@@ -5,7 +5,7 @@ description: "Discover the diverse range of partners that seamlessly integrate w
tags: [Semantic Layer]
sidebar_label: "Available integrations"
meta:
- api_name: dbt Semantic Layer API
+ api_name: dbt Semantic Layer APIs
---
@@ -32,8 +32,8 @@ You can create custom integrations using different languages and tools. We suppo
## Related docs
-- {frontMatter.meta.api_name} to learn how to integrate with the JDBC to query your metrics in downstream tools.
-- [dbt Semantic Layer API query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata)
+- {frontMatter.meta.api_name} to learn how to integrate with JDBC and GraphQL to query your metrics in downstream tools.
+- [dbt Semantic Layer APIs query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata)
diff --git a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
index 8d073297f48..76753b41ffa 100644
--- a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
@@ -59,7 +59,7 @@ instance="hosted in North America"
icon="dbt-bit"/>
diff --git a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
index 542ab4896bb..3bbc11cea3f 100644
--- a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
@@ -5,7 +5,7 @@ description: "Use this guide to build and define metrics, set up the dbt Semanti
sidebar_label: "Get started with the dbt Semantic Layer"
tags: [Semantic Layer]
meta:
- api_name: dbt Semantic Layer API
+ api_name: dbt Semantic Layer APIs
---
@@ -92,10 +92,10 @@ You can query your metrics in a JDBC-enabled tool or use existing first-class in
You must have a dbt Cloud Team or Enterprise [multi-tenant](/docs/cloud/about-cloud/regions-ip-addresses) deployment, hosted in North America (Additional region support coming soon).
-- To learn how to use the JDBC API and what tools you can query it with, refer to the {frontMatter.meta.api_name}.
+- To learn how to use the JDBC or GraphQL API and what tools you can query it with, refer to the {frontMatter.meta.api_name}.
* To authenticate, you need to [generate a service token](/docs/dbt-cloud-apis/service-tokens) with Semantic Layer Only and Metadata Only permissions.
- * Refer to the [SQL query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata) to query metrics using the API.
+ * Refer to the [SQL query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata) to query metrics using the APIs.
- To learn more about the sophisticated integrations that connect to the dbt Semantic Layer, refer to [Available integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations) for more info.
diff --git a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md
index 39e93987b20..68037bfd0cd 100644
--- a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md
+++ b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md
@@ -114,7 +114,7 @@ For better analysis, it's best to have the context of the metrics close to where
These are recommendations on how to evolve a Semantic Layer integration and not a strict runbook.
**Stage 1 - The basic**
-* Supporting and using the new [JDBC](/docs/dbt-cloud-apis/sl-jdbc) is the first step. Refer to the [dbt Semantic Layer API](/docs/dbt-cloud-apis/sl-api-overview) for more technical details.
+* Supporting and using [JDBC](/docs/dbt-cloud-apis/sl-jdbc) or [GraphQL](/docs/dbt-cloud-apis/sl-graphql) is the first step. Refer to the [dbt Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview) for more technical details.
**Stage 2 - More discoverability and basic querying**
* Support listing metrics defined in the project
diff --git a/website/sidebars.js b/website/sidebars.js
index 2f3ebaed4e6..8b162f67af3 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -517,6 +517,7 @@ const sidebarSettings = {
link: { type: "doc", id: "docs/dbt-cloud-apis/sl-api-overview" },
items: [
"docs/dbt-cloud-apis/sl-jdbc",
+ "docs/dbt-cloud-apis/sl-graphql",
"docs/dbt-cloud-apis/sl-manifest",
],
},
diff --git a/website/snippets/_new-sl-changes.md b/website/snippets/_new-sl-changes.md
index fa7c7abf743..6eca327001a 100644
--- a/website/snippets/_new-sl-changes.md
+++ b/website/snippets/_new-sl-changes.md
@@ -3,6 +3,6 @@
The dbt Semantic Layer has been re-released with [significant improvements](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
-The new version is available in [public beta](/docs/dbt-versions/release-notes/Aug-2023/sl-revamp-beta#public-beta) and introduces [MetricFlow](/docs/build/about-metricflow), an essential component. It also includes new semantic elements, better governance, improved efficiency, easier data access, and new Semantic Layer API.
+The new version is available in [public beta](/docs/dbt-versions/release-notes/Aug-2023/sl-revamp-beta#public-beta) and introduces [MetricFlow](/docs/build/about-metricflow), an essential component. It also includes new semantic elements, better governance, improved efficiency, easier data access, and new dbt Semantic Layer APIs.
:::
diff --git a/website/snippets/_new-sl-setup.md b/website/snippets/_new-sl-setup.md
index 9f1fcef0fb6..b802db9c5ae 100644
--- a/website/snippets/_new-sl-setup.md
+++ b/website/snippets/_new-sl-setup.md
@@ -25,7 +25,7 @@ If you're using the legacy Semantic Layer, we **highly** recommend you [upgrade
5. Select the deployment environment you want for the Semantic Layer and click **Save**.
-6. After saving it, you'll be provided with the connection information that allows you to connect to downstream tools. If your tool supports JDBC, save the JDBC URL or individual components (like environment id and host).
+6. After saving it, you'll be provided with the connection information that allows you to connect to downstream tools. If your tool supports JDBC, save the JDBC URL or individual components (like environment id and host). If it uses the GraphQL API, save the GraphQL API host information instead.
diff --git a/website/snippets/_sl-partner-links.md b/website/snippets/_sl-partner-links.md
index 534fa3e29cf..e9cc6af3564 100644
--- a/website/snippets/_sl-partner-links.md
+++ b/website/snippets/_sl-partner-links.md
@@ -1,4 +1,4 @@
-
+
The dbt Semantic Layer integrations are capable of querying dbt metrics, importing definitions, surfacing the underlying data in partner tools, and more. These are the following tools that integrate with the dbt Semantic Layer:
1. **Mode** — To learn more about integrating with Mode, check out their [documentation](https://mode.com/help/articles/supported-databases/#dbt-semantic-layer) for more info.
diff --git a/website/snippets/_sl-plan-info.md b/website/snippets/_sl-plan-info.md
index db4aa0bfc25..5fba18de6bb 100644
--- a/website/snippets/_sl-plan-info.md
+++ b/website/snippets/_sl-plan-info.md
@@ -1 +1,2 @@
To define and query metrics with the {props.product}, you must be on a {props.plan} multi-tenant plan, {props.instance} (Additional region support coming soon).
The re-released dbt Semantic Layer is available on dbt v1.6 or higher. dbt Core users can use the MetricFlow CLI to define metrics in their local project, but won't be able to dynamically query them with integrated tools.
+
diff --git a/website/snippets/_sl-test-and-query-metrics.md b/website/snippets/_sl-test-and-query-metrics.md
index 323ba2d83ad..b250fac4f31 100644
--- a/website/snippets/_sl-test-and-query-metrics.md
+++ b/website/snippets/_sl-test-and-query-metrics.md
@@ -2,7 +2,7 @@
Support for testing or querying metrics in the dbt Cloud IDE is not available in the current beta but is coming soon.
-You can use the **Preview** or **Compile** buttons in the IDE to run semantic validations and make sure your metrics are defined. You can [dynamically query metrics](#connect-and-query-api) with integrated tools on a dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) plan using the [Semantic Layer API](/docs/dbt-cloud-apis/sl-api-overview).
+You can use the **Preview** or **Compile** buttons in the IDE to run semantic validations and make sure your metrics are defined. You can [dynamically query metrics](#connect-and-query-api) with integrated tools on a dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) plan using the [dbt Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview).
Currently, you can define and test metrics using the MetricFlow CLI. dbt Cloud IDE support is coming soon. Alternatively, you can test using SQL client tools like DataGrip, DBeaver, or RazorSQL.
@@ -28,4 +28,4 @@ MetricFlow needs a `semantic_manifest.json` in order to build a semantic graph.
5. Run `mf validate-configs` to run validation on your semantic models and metrics.
6. Commit and merge the code changes that contain the metric definitions.
-To streamline your metric querying process, you can connect to the [dbt Semantic Layer API](/docs/dbt-cloud-apis/sl-api-overview) to access your metrics programmatically. For SQL syntax, refer to [Querying the API for metric metadata](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata) to query metrics using the API.
+To streamline your metric querying process, you can connect to the [dbt Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview) to access your metrics programmatically. For SQL syntax, refer to [Querying the API for metric metadata](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata) to query metrics using the API.