From 290e0da4188543493e1003314d65ab368efa337c Mon Sep 17 00:00:00 2001 From: rpourzand Date: Fri, 18 Aug 2023 15:26:56 -0700 Subject: [PATCH 1/4] Update sl-partner-integration-guide.md - added a note on where filters/string when we release it - added a note about graphql so we can add it once shipped --- .../dbt-ecosystem/sl-partner-integration-guide.md | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md index decad95a516..615e5729ff8 100644 --- a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md +++ b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md @@ -21,7 +21,7 @@ This is an evolving guide that is meant to provide recommendations based on our To build a dbt Semantic Layer integration: -- Initially, we recommend building an integration with the [JDBC](/docs/dbt-cloud-apis/sl-jdbc) followed by enhancements of additional features. Refer to the dedicated [dbt Semantic Layer API](/docs/dbt-cloud-apis/sl-api-overview) for more technical integration details. +- We offer a [JDBC](/docs/dbt-cloud-apis/sl-jdbc) API (and will soon offer a GraphQL API). Refer to the dedicated [dbt Semantic Layer API](/docs/dbt-cloud-apis/sl-api-overview) for more technical integration details. - Familiarize yourself with the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) and [MetricFlow](/docs/build/about-metricflow)'s key concepts. There are two main objects: @@ -30,11 +30,9 @@ To build a dbt Semantic Layer integration: ### Connection parameters -The dbt Semantic Layer authenticates with `environmentId`, `SERVICE_TOKEN`, and `host`. +The dbt Semantic Layer APIs authenticate with `environmentId`, `SERVICE_TOKEN`, and `host`. -This applies to the dbt Semantic Layer APIs, which all currently use different host names. We recommend you provide users with separate input fields with these components (which dbt Cloud provides). - -For [JDBC](/docs/dbt-cloud-apis/sl-jdbc), you can construct the JDBC URL from these inputs. Or, you could request the full URL string. +We recommend you provide users with separate input fields with these components for authentication (dbt Cloud will surface these parameters for the user). ## Best practices on exposing metrics: @@ -114,8 +112,12 @@ For better analysis, it's best to have the context of the metrics close to where ### A note on transparency and using explain -For transparency and additional context, we recommend you have an easy way for the user to obtain the SQL that MetricFlow generates. You can do this by appending `explain=True` to any query. This is incredibly powerful because we want to be very transparent to the user about what we're doing and do not want to be a black box. This would be mostly a power user / technical user functionality. +For transparency and additional context, we recommend you have an easy way for the user to obtain the SQL that MetricFlow generates. You can do this by appending `explain=True` to any query. This is incredibly powerful because we want to be very transparent to the user about what we're doing and do not want to be a black box. This would be mostly beneficial to a technical user. + + +### A note on Where Filters +In the cases where our APIs support either a string or a filter list for the `where` clause, we always recommend that your application utilizes the filter list in order to gain maximum pushdown benefits. 
The `where` string may be more intuititive for users writing queries during testing, but it will not have the performance benefits of the filter list in a production environment. ### Example stages of an integration From 364c2d120764c19c018f600a067ddbe64680ea39 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 23 Aug 2023 14:15:20 +0100 Subject: [PATCH 2/4] Update sl-partner-integration-guide.md move the 'a note' h3s --- .../sl-partner-integration-guide.md | 37 +++++++++---------- 1 file changed, 17 insertions(+), 20 deletions(-) diff --git a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md index 615e5729ff8..31e00e41339 100644 --- a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md +++ b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md @@ -4,7 +4,6 @@ id: "sl-partner-integration-guide" description: Learn about partner integration guidelines, roadmap, and connectivity. --- - import NewChanges from '/snippets/_new-sl-changes.md'; @@ -37,7 +36,7 @@ We recommend you provide users with separate input fields with these components ## Best practices on exposing metrics: -Best practices for exposing metrics is summarized into five themes: +Best practices for exposing metrics are summarized into five themes: - [Governance](#governance-and-traceability) — Recommendations on how to establish guardrails for governed data work. - [Discoverability](#discoverability) — Recommendations on how to make user-friendly data interactions. @@ -49,7 +48,7 @@ Best practices for exposing metrics is summarized into five themes: When working with more governed data, it's essential to establish clear guardrails. Here are some recommendations: -- **Aggregations control** — Users shouldn't generally be allowed to modify aggregations unless they are performing post-processing calculations on data from the Semantic Layer (such as year over year analysis). +- **Aggregations control** — Users shouldn't generally be allowed to modify aggregations unless they are performing post-processing calculations on data from the Semantic Layer (such as year-over-year analysis). - **Time series alignment and using metric_time** — Make sure users view metrics across the correct time series. When displaying metric graphs, using a non-default time aggregation dimension might lead to misleading interpretations. While users can still group by other time dimensions, they should be careful not to create trend lines with incorrect time axes.

When looking at one or multiple metrics, users should use `metric_time` as the main time dimension to guarantee they are looking at the right time series for the metric(s).

As such, when building an application, we recommend exposing `metric_time` as a separate, "special" time dimension. This dimension always aligns with all metrics and is common across them. Users can still view and group by other time dimensions, but a clear delineation between the `metric_time` dimension and the other time dimensions prevents confusion about how metrics should be plotted.

Also, when a user requests a time granularity change for the main time series, the query that your application runs should use `metric_time`, as this will always give you the correct slice. Note that when looking at a single metric, the primary time dimension and `metric_time` are equivalent.

@@ -76,25 +75,25 @@ By implementing these recommendations, the data interaction process becomes more

We recommend organizing metrics and dimensions in ways that a non-technical user can understand the data model, without needing much context:

-- **Organizing Dimensions** — To help non-technical users understand the data model better, we recommend organizing dimensions based on the entity they originated from. For example, consider dimensions like `user__country` and `product__category`.

You can create groups by extracting `user` and `product` and then nest the respective dimensions under each group. This way, dimensions align with the entity or semantic model they belong to and makes them them more user-friendly and accessible. +- **Organizing Dimensions** — To help non-technical users understand the data model better, we recommend organizing dimensions based on the entity they originated from. For example, consider dimensions like `user__country` and `product__category`.

You can create groups by extracting `user` and `product` and then nest the respective dimensions under each group. This way, dimensions align with the entity or semantic model they belong to and make them more user-friendly and accessible. -- **Organizing Metrics** — The goal is to organize metrics into a hierarchy in our configurations, instead of presenting them in a long list.

This hierarchy helps you organize metrics based on a specific criteria, such as business unit or team. By providing this structured organization, users can find and navigate metrics more efficiently, enhancing their overall data analysis experience. +- **Organizing Metrics** — The goal is to organize metrics into a hierarchy in our configurations, instead of presenting them in a long list.

This hierarchy helps you organize metrics based on specific criteria, such as business unit or team. By providing this structured organization, users can find and navigate metrics more efficiently, enhancing their overall data analysis experience. ### Query flexibility Allow users to query either one metric alone without dimensions or multiple metrics with dimensions. -- Allow toggling between metrics / dimensions seamlessly. +- Allow toggling between metrics/dimensions seamlessly. - Be clear on exposing what dimensions are queryable with what metrics and hide things that don’t apply, and vice versa. - Only expose time granularities (monthly, daily, yearly) that match the available metrics. * For example, if a dbt model and its resulting semantic model have a monthly granularity, make sure querying data with a 'daily' granularity isn't available to the user. Our APIs have functionality that will help you surface the correct granularities -- We recommend that time granularity is treated as a general time dimension-specific concept and that it can be applied to more than just the primary aggregation (or `metric_time`). Consider a situation where a user wants to look at `sales` over time by `customer signup month`; in this situation, having the ability able to apply granularities to both time dimensions is crucial. Note: initially, as a starting point, it makes sense to only support `metric_time` or the primary time dimension, but we recommend expanding that as your solution evolves. +- We recommend that time granularity is treated as a general time dimension-specific concept and that it can be applied to more than just the primary aggregation (or `metric_time`). Consider a situation where a user wants to look at `sales` over time by `customer signup month`; in this situation, having the ability to apply granularities to both time dimensions is crucial. Note: Initially, as a starting point, it makes sense to only support `metric_time` or the primary time dimension, but we recommend expanding that as your solution evolves. - You should allow users to filter on date ranges and expose a calendar and nice presets for filtering these. - * For example: last 30 days, last week etc. + * For example, last 30 days, last week, and so on. ### Context and interpretation @@ -110,15 +109,6 @@ For better analysis, it's best to have the context of the metrics close to where - Allow for creating other metadata that’s useful for the metric. We can provide some of this information in our configuration (Display name, Default Granularity for View, Default Time range), but there may be other metadata that your tool wants to provide to make the metric richer. -### A note on transparency and using explain - -For transparency and additional context, we recommend you have an easy way for the user to obtain the SQL that MetricFlow generates. You can do this by appending `explain=True` to any query. This is incredibly powerful because we want to be very transparent to the user about what we're doing and do not want to be a black box. This would be mostly beneficial to a technical user. - - -### A note on Where Filters - -In the cases where our APIs support either a string or a filter list for the `where` clause, we always recommend that your application utilizes the filter list in order to gain maximum pushdown benefits. The `where` string may be more intuititive for users writing queries during testing, but it will not have the performance benefits of the filter list in a production environment. 
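
To make the string-versus-filter-list distinction concrete, here is a minimal sketch of the two `where` shapes an application might send. The payload layout and the metric and dimension names are illustrative assumptions rather than the exact request format; the `{{ Dimension(...) }}` and `{{ TimeDimension(...) }}` templating follows MetricFlow's filter syntax.

```python
# Illustrative sketch only: the payload shape and the metric/dimension
# names are assumptions, not the exact Semantic Layer API request format.

# A raw `where` string: convenient for ad hoc testing, but the engine
# receives one opaque expression, so it gets the least pushdown.
query_with_where_string = {
    "metrics": ["revenue"],
    "group_by": ["metric_time"],
    "where": (
        "{{ Dimension('customer__region') }} = 'EMEA' "
        "and {{ TimeDimension('metric_time', 'day') }} >= '2023-01-01'"
    ),
}

# A filter list: each predicate is passed separately, which is the form we
# recommend applications send in production for maximum pushdown.
query_with_filter_list = {
    "metrics": ["revenue"],
    "group_by": ["metric_time"],
    "where": [
        "{{ Dimension('customer__region') }} = 'EMEA'",
        "{{ TimeDimension('metric_time', 'day') }} >= '2023-01-01'",
    ],
}
```

Both forms express the same predicates; the filter list simply gives the Semantic Layer more structure to plan with.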
- ### Example stages of an integration These are recommendations on how to evolve a Semantic Layer integration and not a strict runbook. @@ -127,7 +117,7 @@ These are recommendations on how to evolve a Semantic Layer integration and not * Supporting and using the new [JDBC](/docs/dbt-cloud-apis/sl-jdbc) is the first step. Refer to the [dbt Semantic Layer API](/docs/dbt-cloud-apis/sl-api-overview) for more technical details. **Stage 2 - More discoverability and basic querying** -* Support listing metrics defined in project +* Support listing metrics defined in the project * Listing available dimensions based on one or many metrics * Querying defined metric values on their own or grouping by available dimensions * Display metadata from [Discovery API](/docs/dbt-cloud-apis/discovery-api) and other context @@ -135,7 +125,7 @@ These are recommendations on how to evolve a Semantic Layer integration and not **Stage 3 - More querying flexibility and better user experience (UX)** * More advanced filtering * Time filters with good presets/calendar UX - * Filtering metrics on pre-populated set of dimensions values + * Filtering metrics on a pre-populated set of dimension values * Make dimension values more user-friendly by organizing them effectively * Intelligent filtering of metrics based on available dimensions and vice versa @@ -146,7 +136,14 @@ These are recommendations on how to evolve a Semantic Layer integration and not * Querying dimensions without metrics and other more advanced querying functionality * Suggest metrics to users based on teams/identity, and so on. -
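
To make Stage 2 concrete, the sketch below shows the metadata queries that can back a metric and dimension picker. The `semantic_layer.metrics()` and `semantic_layer.dimensions()` calls follow the JDBC API documentation; `run_sql` is a hypothetical stand-in for however your application executes SQL over its JDBC connection.

```python
# Hedged sketch of Stage 2 discoverability queries. The semantic_layer.*
# metadata calls are from the JDBC API docs; run_sql is a hypothetical
# callable that executes a SQL string over your JDBC connection.
from typing import Callable

def list_metrics(run_sql: Callable[[str], list]) -> list:
    """Return every metric defined in the dbt project."""
    return run_sql("select * from {{ semantic_layer.metrics() }}")

def list_dimensions(run_sql: Callable[[str], list], metrics: list[str]) -> list:
    """Return the dimensions that are queryable for the given metrics."""
    metric_list = ", ".join("'" + m + "'" for m in metrics)
    return run_sql(
        "select * from {{ semantic_layer.dimensions(metrics=[" + metric_list + "]) }}"
    )
```

Feeding `list_dimensions` the user's currently selected metrics is one way to implement the intelligent metric and dimension filtering described in Stage 3.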
+### A note on transparency and using explain + +For transparency and additional context, we recommend you have an easy way for the user to obtain the SQL that MetricFlow generates. You can do this by appending `explain=True` to any query. This is incredibly powerful because we want to be very transparent to the user about what we're doing and do not want to be a black box. This would be mostly beneficial to a technical user. + + +### A note on where filters + +In the cases where our APIs support either a string or a filter list for the `where` clause, we always recommend that your application utilizes the filter list in order to gain maximum pushdown benefits. The `where` string may be more intuitive for users writing queries during testing, but it will not have the performance benefits of the filter list in a production environment. ## Related docs From 4ed2e0efd5543ab6d54b71e74e533c24e55065d6 Mon Sep 17 00:00:00 2001 From: rpourzand Date: Mon, 28 Aug 2023 15:18:15 -0700 Subject: [PATCH 3/4] Update sl-partner-integration-guide.md --- .../docs/guides/dbt-ecosystem/sl-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md index 31e00e41339..c4443885168 100644 --- a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md +++ b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md @@ -138,7 +138,7 @@ These are recommendations on how to evolve a Semantic Layer integration and not ### A note on transparency and using explain -For transparency and additional context, we recommend you have an easy way for the user to obtain the SQL that MetricFlow generates. You can do this by appending `explain=True` to any query. This is incredibly powerful because we want to be very transparent to the user about what we're doing and do not want to be a black box. This would be mostly beneficial to a technical user. +For transparency and additional context, we recommend you have an easy way for the user to obtain the SQL that MetricFlow generates. Depending on what API you are using, you can do this by using our explain parameter (JDBC) or compileSQL mutation (GraphQL). This is incredibly powerful because we want to be very transparent to the user about what we're doing and do not want to be a black box. This would be mostly beneficial to a technical user. ### A note on where filters From 29e4b0da448fc7b6b8d74da90538c7223e9db521 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 29 Aug 2023 09:58:46 +0100 Subject: [PATCH 4/4] Update sl-partner-integration-guide.md turn exmple h3 into h2 --- .../guides/dbt-ecosystem/sl-partner-integration-guide.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md index c4443885168..d707200cd46 100644 --- a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md +++ b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md @@ -34,7 +34,7 @@ The dbt Semantic Layer APIs authenticate with `environmentId`, `SERVICE_TOKEN`, We recommend you provide users with separate input fields with these components for authentication (dbt Cloud will surface these parameters for the user). 
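
As a minimal sketch, assuming the `jdbc:arrow-flight-sql` URL shape documented for the JDBC API, an application could collect the three inputs in separate fields and assemble the connection string itself:

```python
# Minimal sketch: collect host, environment ID, and service token as
# separate inputs and build the JDBC URL. The jdbc:arrow-flight-sql URL
# shape follows the Semantic Layer JDBC API docs; the rest is illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticLayerConnection:
    host: str            # for example, "semantic-layer.cloud.getdbt.com"
    environment_id: int  # the dbt Cloud environment ID
    service_token: str   # a dbt Cloud service token

    def jdbc_url(self) -> str:
        return (
            f"jdbc:arrow-flight-sql://{self.host}:443"
            f"?environmentId={self.environment_id}&token={self.service_token}"
        )
```

Collecting the fields separately, rather than asking users to paste a pre-built URL, mirrors what dbt Cloud surfaces and avoids error-prone copying of long connection strings.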
-## Best practices on exposing metrics: +## Best practices on exposing metrics Best practices for exposing metrics are summarized into five themes: @@ -48,7 +48,7 @@ Best practices for exposing metrics are summarized into five themes: When working with more governed data, it's essential to establish clear guardrails. Here are some recommendations: -- **Aggregations control** — Users shouldn't generally be allowed to modify aggregations unless they are performing post-processing calculations on data from the Semantic Layer (such as year-over-year analysis). +- **Aggregations control** — Users shouldn't generally be allowed to modify aggregations unless they perform post-processing calculations on Semantic Layer data (such as year-over-year analysis). - **Time series alignment and using metric_time** — Make sure users view metrics across the correct time series. When displaying metric graphs, using a non-default time aggregation dimension might lead to misleading interpretations. While users can still group by other time dimensions, they should be careful not to create trend lines with incorrect time axes.

When looking at one or multiple metrics, users should use `metric_time` as the main time dimension to guarantee they are looking at the right time series for the metric(s).

As such, when building an application, we recommend exposing `metric_time` as a separate, "special" time dimension. This dimension always aligns with all metrics and is common across them. Users can still view and group by other time dimensions, but a clear delineation between the `metric_time` dimension and the other time dimensions prevents confusion about how metrics should be plotted.

Also, when a user requests a time granularity change for the main time series, the query that your application runs should use `metric_time`, as this will always give you the correct slice. Note that when looking at a single metric, the primary time dimension and `metric_time` are equivalent.

@@ -109,7 +109,7 @@ For better analysis, it's best to have the context of the metrics close to where

- Allow for creating other metadata that’s useful for the metric. We can provide some of this information in our configuration (Display name, Default Granularity for View, Default Time range), but there may be other metadata that your tool wants to provide to make the metric richer.

-### Example stages of an integration
+## Example stages of an integration

These are recommendations on how to evolve a Semantic Layer integration and not a strict runbook.
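
To tie the earlier recommendations together (granularity applied through `metric_time`, a filter list for `where`, and `explain` for transparency), here is a hedged end-to-end sketch of a basic Stage 1-style query builder. The `semantic_layer.query()` entry point follows the JDBC API docs, but the exact parameter shapes and every metric and dimension name here are assumptions for illustration, not a definitive implementation.

```python
# Hedged end-to-end sketch, not a definitive implementation. The
# semantic_layer.query() entry point is from the JDBC API docs; the exact
# parameter shapes and the metric/dimension names are assumptions.

def build_metric_query(metric: str, grain: str, explain: bool = False) -> str:
    """Build a JDBC API query for one metric over metric_time at the given grain."""
    parts = [
        "metrics=['" + metric + "']",
        # Granularity is applied through metric_time, so the time axis is
        # always the correct one for the metric.
        "group_by=[TimeDimension('metric_time', '" + grain + "')]",
        # A filter list rather than a single where string, for better pushdown.
        "where=[\"{{ Dimension('customer__region') }} = 'EMEA'\"]",
    ]
    if explain:
        # Per the transparency note, this surfaces the SQL MetricFlow generates.
        parts.append("explain=True")
    return "select * from {{ semantic_layer.query(" + ", ".join(parts) + ") }}"
```

For example, `build_metric_query("revenue", "month", explain=True)` would produce a query whose compiled SQL a power user can inspect.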