diff --git a/.github/ISSUE_TEMPLATE/internal-orch-team.yml b/.github/ISSUE_TEMPLATE/internal-orch-team.yml deleted file mode 100644 index 8c4d61df10c..00000000000 --- a/.github/ISSUE_TEMPLATE/internal-orch-team.yml +++ /dev/null @@ -1,49 +0,0 @@ -name: Orchestration team - Request changes to docs -description: File a docs update request that is not already tracked in Orch team's Release Plans (Notion database). -labels: ["content","internal-orch-team"] -body: - - type: markdown - attributes: - value: | - * You can ask questions or submit ideas for the dbt docs in [Issues](https://github.com/dbt-labs/docs-internal/issues/new/choose) - * Before you file an issue read the [Contributing guide](https://github.com/dbt-labs/docs-internal#contributing). - * Check to make sure someone hasn't already opened a similar [issue](https://github.com/dbt-labs/docs-internal/issues). - - - type: checkboxes - id: contributions - attributes: - label: Contributions - description: Please read the contribution docs before opening an issue or pull request. - options: - - label: I have read the contribution docs, and understand what's expected of me. - - - type: textarea - attributes: - label: Link to the page on docs.getdbt.com requiring updates - description: Please link to the page or pages you'd like to see improved. - validations: - required: true - - - type: textarea - attributes: - label: What part(s) of the page would you like to see updated? - description: | - - Give as much detail as you can to help us understand the change you want to see. - - Why should the docs be changed? What use cases does it support? - - What is the expected outcome? - validations: - required: true - - - type: textarea - attributes: - label: Reviewers/Stakeholders/SMEs - description: List the reviewers, stakeholders, and subject matter experts (SMEs) to collaborate with for the docs update. - validations: - required: true - - - type: textarea - attributes: - label: Related Jira tickets - description: Add any other context or screenshots about the feature request here. - validations: - required: false diff --git a/website/blog/2024-10-04-hybrid-mesh.md b/website/blog/2024-10-04-hybrid-mesh.md index 34b2a67d1cb..05a45599318 100644 --- a/website/blog/2024-10-04-hybrid-mesh.md +++ b/website/blog/2024-10-04-hybrid-mesh.md @@ -59,7 +59,7 @@ This allows dbt Cloud to know about the contents and metadata of your project, w - Note: If you have [environment variables](/docs/build/environment-variables) in your project, dbt Cloud environment variables must be prefixed with `DBT_ `(including `DBT_ENV_CUSTOM_ENV_` or `DBT_ENV_SECRET`). Follow the instructions in [this guide](https://docs.getdbt.com/guides/core-to-cloud-1?step=8#environment-variables) to convert them for dbt Cloud. - Each upstream Core project has to have a production [environment](/docs/dbt-cloud-environments) in dbt Cloud. You need to configure credentials and environment variables in dbt Cloud just so that it will resolve relation names to the same places where your dbt Core workflows are deploying those models. - Set up a [merge job](/docs/deploy/merge-jobs) in a production environment to run `dbt parse`. This will enable connecting downstream projects in dbt Mesh by producing the necessary [artifacts](/reference/artifacts/dbt-artifacts) for cross-project referencing. - - Note: Set up a regular job to run `dbt build` instead of using a merge job for `dbt parse`, and centralize your dbt orchestration by moving production runs to dbt Cloud. 
Check out [this guide](/guides/core-to-cloud-1?step=9) for more details on converting your production runs to dbt Cloud. + - Optional: Set up a regular job to run `dbt build` instead of using a merge job for `dbt parse`, and centralize your dbt orchestration by moving production runs to dbt Cloud. Check out [this guide](/guides/core-to-cloud-1?step=9) for more details on converting your production runs to dbt Cloud. - Optional: Set up a regular job (for example, daily) to run `source freshness` and `docs generate`. This will hydrate dbt Cloud with additional metadata and enable features in [dbt Explorer](/docs/collaborate/explore-projects) that will benefit both teams, including [Column-level lineage](/docs/collaborate/column-level-lineage). ### Step 3: Create and connect your downstream projects to your Core project using dbt Mesh diff --git a/website/docs/best-practices/how-we-structure/2-staging.md b/website/docs/best-practices/how-we-structure/2-staging.md index 8eb91ff5b7b..1f52a4a9a00 100644 --- a/website/docs/best-practices/how-we-structure/2-staging.md +++ b/website/docs/best-practices/how-we-structure/2-staging.md @@ -223,4 +223,4 @@ This is a welcome change for many of us who have become used to applying the sam :::info Development flow versus DAG order. This guide follows the order of the DAG, so we can get a holistic picture of how these three primary layers build on each other towards fueling impactful data products. It’s important to note though that developing models does not typically move linearly through the DAG. Most commonly, we should start by mocking out a design in a spreadsheet so we know we’re aligned with our stakeholders on output goals. Then, we’ll want to write the SQL to generate that output, and identify what tables are involved. Once we have our logic and dependencies, we’ll make sure we’ve staged all the necessary atomic pieces into the project, then bring them together based on the logic we wrote to generate our mart. Finally, with a functioning model flowing in dbt, we can start refactoring and optimizing that mart. By splitting the logic up and moving parts back upstream into intermediate models, we ensure all of our models are clean and readable, the story of our DAG is clear, and we have more surface area to apply thorough testing. -:::info +::: diff --git a/website/docs/docs/build/data-tests.md b/website/docs/docs/build/data-tests.md index afe4719768c..af48e0af267 100644 --- a/website/docs/docs/build/data-tests.md +++ b/website/docs/docs/build/data-tests.md @@ -66,7 +66,9 @@ having total_amount < 0 -The name of this test is the name of the file: `assert_total_payment_amount_is_positive`. +The name of this test is the name of the file: `assert_total_payment_amount_is_positive`. + +Note: don't include a semicolon (;) at the end of the SQL statement in your singular test files, as it can cause your test to fail. To add a description to a singular test in your project, add a `.yml` file to your `tests` directory, for example, `tests/schema.yml` with the following content: diff --git a/website/docs/docs/build/environment-variables.md b/website/docs/docs/build/environment-variables.md index b87786ac596..99129cea8c9 100644 --- a/website/docs/docs/build/environment-variables.md +++ b/website/docs/docs/build/environment-variables.md @@ -32,7 +32,7 @@ There are four levels of environment variables: To set environment variables at the project and environment level, click **Deploy** in the top left, then select **Environments**.
Click **Environment variables** to add and update your environment variables. - + @@ -62,7 +62,10 @@ Every job runs in a specific deployment environment, and by default, a job will **Overriding environment variables at the personal level** -You can also set a personal value override for an environment variable when you develop in the dbt-integrated developer environment (IDE). By default, dbt Cloud uses environment variable values set in the project's development environment. To see and override these values, click the gear icon in the top right. Under "Your Profile," click **Credentials** and select your project. Click **Edit** and make any changes in "Environment Variables." +You can also set a personal value override for an environment variable when you develop in the dbt-integrated developer environment (IDE). By default, dbt Cloud uses environment variable values set in the project's development environment. To see and override these values, from dbt Cloud: +- Click on your account name in the left side menu and select **Account settings**. +- Under the **Your profile** section, click **Credentials** and then select your project. +- Scroll to the **Environment variables** section and click **Edit** to make the necessary changes. @@ -115,7 +118,7 @@ The following environment variables are set automatically: - `DBT_ENV` — This key is reserved for the dbt Cloud application and will always resolve to 'prod'. For deployment runs only. - `DBT_CLOUD_ENVIRONMENT_NAME` — The name of the dbt Cloud environment in which `dbt` is running. -- `DBT_CLOUD_ENVIRONMENT_TYPE` — The type of dbt Cloud environment in which `dbt` is running. The valid values are `development` or `deployment`. +- `DBT_CLOUD_ENVIRONMENT_TYPE` — The type of dbt Cloud environment in which `dbt` is running. The valid values are `dev`, `staging`, or `prod`. It can be unset, so use a default like `{{env_var('DBT_CLOUD_ENVIRONMENT_TYPE', '')}}`. #### Run details diff --git a/website/docs/docs/build/incremental-microbatch.md b/website/docs/docs/build/incremental-microbatch.md index fc11d3b3a6b..30070834ff9 100644 --- a/website/docs/docs/build/incremental-microbatch.md +++ b/website/docs/docs/build/incremental-microbatch.md @@ -22,7 +22,7 @@ Refer to [Supported incremental strategies by adapter](/docs/build/incremental-s Incremental models in dbt are a [materialization](/docs/build/materializations) designed to efficiently update your data warehouse tables by only transforming and loading _new or changed data_ since the last run. Instead of reprocessing an entire dataset every time, incremental models process a smaller number of rows, and then append, update, or replace those rows in the existing table. This can significantly reduce the time and resources required for your data transformations. -Microbatch incremental models make it possible to process transformations on very large time-series datasets with efficiency and resiliency.
When dbt runs a microbatch model — whether for the first time, during incremental runs, or in specified backfills — it will split the processing into multiple queries (or "batches"), based on the [`event_time`](/reference/resource-configs/event-time) and `batch_size` you configure. Each "batch" corresponds to a single bounded time period (by default, a single day of data). Where other incremental strategies operate only on "old" and "new" data, microbatch models treat every batch as an atomic unit that can be built or replaced on its own. Each batch is independent and idempotent. This is a powerful abstraction that makes it possible for dbt to run batches separately — in the future, concurrently — and to retry them independently. @@ -50,7 +50,7 @@ We run the `sessions` model on October 1, 2024, and then again on October 2. It -The `event_time` for the `sessions` model is set to `session_start`, which marks the beginning of a user’s session on the website. This setting allows dbt to combine multiple page views (each tracked by their own `page_view_start` timestamps) into a single session. This way, `session_start` differentiates the timing of individual page views from the broader timeframe of the entire user session. +The [`event_time`](/reference/resource-configs/event-time) for the `sessions` model is set to `session_start`, which marks the beginning of a user’s session on the website. This setting allows dbt to combine multiple page views (each tracked by their own `page_view_start` timestamps) into a single session. This way, `session_start` differentiates the timing of individual page views from the broader timeframe of the entire user session. @@ -164,7 +164,7 @@ Several configurations are relevant to microbatch models, and some are required: | Config | Type | Description | Default | |----------|------|---------------|---------| -| `event_time` | Column (required) | The column indicating "at what time did the row occur." Required for your microbatch model and any direct parents that should be filtered. | N/A | +| [`event_time`](/reference/resource-configs/event-time) | Column (required) | The column indicating "at what time did the row occur." Required for your microbatch model and any direct parents that should be filtered. | N/A | | `begin` | Date (required) | The "beginning of time" for the microbatch model. This is the starting point for any initial or full-refresh builds. For example, a daily-grain microbatch model run on `2024-10-01` with `begin = '2023-10-01'` will process 366 batches (it's a leap year!) plus the batch for "today." | N/A | | `batch_size` | String (required) | The granularity of your batches. Supported values are `hour`, `day`, `month`, and `year`. | N/A | | `lookback` | Integer (optional) | Process X batches prior to the latest bookmark to capture late-arriving records. | `1` | diff --git a/website/docs/docs/build/incremental-strategy.md b/website/docs/docs/build/incremental-strategy.md index 3125b6438e0..e0c3bf87783 100644 --- a/website/docs/docs/build/incremental-strategy.md +++ b/website/docs/docs/build/incremental-strategy.md @@ -241,6 +241,12 @@ select * from {{ ref("some_model") }} ### Custom strategies +:::note limited support + +Custom strategies are not currently supported on the BigQuery and Spark adapters. + +::: + Starting from dbt version 1.2 and onwards, users have an easier alternative to [creating an entirely new materialization](/guides/create-new-materializations). They define and use their own "custom" incremental strategies by: 1.
Defining a macro named `get_incremental_STRATEGY_sql`. Note that `STRATEGY` is a placeholder and you should replace it with the name of your custom incremental strategy. diff --git a/website/docs/docs/build/materializations.md b/website/docs/docs/build/materializations.md index 5deb1e7ce92..723acf87414 100644 --- a/website/docs/docs/build/materializations.md +++ b/website/docs/docs/build/materializations.md @@ -111,7 +111,7 @@ When using the `table` materialization, your model is rebuilt as a - To configure snapshots in versions 1.8 and earlier, refer to [Configure snapshots in versions 1.8 and earlier](#configure-snapshots-in-versions-18-and-earlier). These versions use an older syntax where snapshots are defined within a snapshot block in a `.sql` file, typically located in your `snapshots` directory. @@ -60,6 +54,7 @@ Configure your snapshots in YAML files to tell dbt how to detect record changes. snapshots: - name: string relation: relation # source('my_source', 'my_table') or ref('my_model') + [description](/reference/resource-properties/description): markdown_string config: [database](/reference/resource-configs/database): string [schema](/reference/resource-configs/schema): string @@ -70,7 +65,7 @@ snapshots: [updated_at](/reference/resource-configs/updated_at): column_name [invalidate_hard_deletes](/reference/resource-configs/invalidate_hard_deletes): true | false [snapshot_meta_column_names](/reference/resource-configs/snapshot_meta_column_names): dictionary - + [dbt_valid_to_current](/reference/resource-configs/dbt_valid_to_current): string ``` @@ -87,13 +82,14 @@ The following table outlines the configurations available for snapshots: | [check_cols](/reference/resource-configs/check_cols) | If using the `check` strategy, then the columns to check | Only if using the `check` strategy | ["status"] | | [updated_at](/reference/resource-configs/updated_at) | If using the `timestamp` strategy, the timestamp column to compare | Only if using the `timestamp` strategy | updated_at | | [invalidate_hard_deletes](/reference/resource-configs/invalidate_hard_deletes) | Find hard deleted records in source and set `dbt_valid_to` to current time if the record no longer exists | No | True | +| [dbt_valid_to_current](/reference/resource-configs/dbt_valid_to_current) | Set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date). By default, this value is `NULL`. When configured, dbt will use the specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table.| No | string | | [snapshot_meta_column_names](/reference/resource-configs/snapshot_meta_column_names) | Customize the names of the snapshot meta fields | No | dictionary | + - In versions prior to v1.9, the `target_schema` (required) and `target_database` (optional) configurations defined a single schema or database to build a snapshot across users and environment. This created problems when testing or developing a snapshot, as there was no clear separation between development and production environments. In v1.9, `target_schema` became optional, allowing snapshots to be environment-aware. By default, without `target_schema` or `target_database` defined, snapshots now use the `generate_schema_name` or `generate_database_name` macros to determine where to build. Developers can still set a custom location with [`schema`](/reference/resource-configs/schema) and [`database`](/reference/resource-configs/database) configs, consistent with other resource types. 
- A number of other configurations are also supported (for example, `tags` and `post-hook`). For the complete list, refer to [Snapshot configurations](/reference/snapshot-configs). - You can configure snapshots from both the `dbt_project.yml` file and a `config` block. For more information, refer to the [configuration docs](/reference/snapshot-configs). - ### Add a snapshot to your project To add a snapshot to your project follow these steps. For users on versions 1.8 and earlier, refer to [Configure snapshots in versions 1.8 and earlier](#configure-snapshots-in-versions-18-and-earlier). @@ -112,6 +108,7 @@ To add a snapshot to your project follow these steps. For users on versions 1.8 unique_key: id strategy: timestamp updated_at: updated_at + dbt_valid_to_current: "to_date('9999-12-31')" # Specifies that current records should have `dbt_valid_to` set to `'9999-12-31'` instead of `NULL`. ``` @@ -172,6 +169,15 @@ This strategy handles column additions and deletions better than the `check` str + + + +By default, `dbt_valid_to` is `NULL` for current records. However, if you set the [`dbt_valid_to_current` configuration](/reference/resource-configs/dbt_valid_to_current) (available in Versionless and 1.9 and higher), `dbt_valid_to` will be set to your specified value (such as `9999-12-31`) for current records. + +This allows for straightforward date range filtering. + + + The unique key is used by dbt to match rows up, so it's extremely important to make sure this key is actually unique! If you're snapshotting a source, I'd recommend adding a uniqueness test to your source ([example](https://github.com/dbt-labs/jaffle_shop/blob/8e7c853c858018180bef1756ec93e193d9958c5b/models/staging/schema.yml#L26)). @@ -204,13 +210,15 @@ Snapshots can't be rebuilt. Because of this, it's a good idea to put snapshots i ### How snapshots work When you run the [`dbt snapshot` command](/reference/commands/snapshot): -* **On the first run:** dbt will create the initial snapshot table — this will be the result set of your `select` statement, with additional columns including `dbt_valid_from` and `dbt_valid_to`. All records will have a `dbt_valid_to = null`. +* **On the first run:** dbt will create the initial snapshot table — this will be the result set of your `select` statement, with additional columns including `dbt_valid_from` and `dbt_valid_to`. All records will have a `dbt_valid_to = null` or the value specified in [`dbt_valid_to_current`](/reference/resource-configs/dbt_valid_to_current) (available in Versionless and 1.9 and higher) if configured. * **On subsequent runs:** dbt will check which records have changed or if any new records have been created: - - The `dbt_valid_to` column will be updated for any existing records that have changed - - The updated record and any new records will be inserted into the snapshot table. These records will now have `dbt_valid_to = null` - -Note, these column names can be customized to your team or organizational conventions using the [snapshot_meta_column_names](#snapshot-meta-fields) config. + - The `dbt_valid_to` column will be updated for any existing records that have changed. + - The updated record and any new records will be inserted into the snapshot table. These records will now have `dbt_valid_to = null` or the value configured in `dbt_valid_to_current` (available in Versionless and 1.9 and higher). +#### Note +- These column names can be customized to your team or organizational conventions using the [snapshot_meta_column_names](#snapshot-meta-fields) config. 
+- Use the `dbt_valid_to_current` config to set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date such as `9999-12-31`). By default, this value is `NULL`. When set, dbt will use this specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table. + Snapshots can be referenced in downstream models the same way as referencing models — by using the [ref](/reference/dbt-jinja-functions/ref) function. ## Detecting row changes @@ -394,12 +402,14 @@ snapshots: Snapshot tables will be created as a clone of your source dataset, plus some additional meta-fields*. -Starting in 1.9 or with [dbt Cloud Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless), these column names can be customized to your team or organizational conventions via the [`snapshot_meta_column_names`](/reference/resource-configs/snapshot_meta_column_names) config. +Starting in 1.9 or with [dbt Cloud Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless): +- These column names can be customized to your team or organizational conventions using the [`snapshot_meta_column_names`](/reference/resource-configs/snapshot_meta_column_names) config. +- Use the `dbt_valid_to_current` config to set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date such as `9999-12-31`). By default, this value is `NULL`. When set, dbt will use this specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table. | Field | Meaning | Usage | | -------------- | ------- | ----- | | dbt_valid_from | The timestamp when this snapshot row was first inserted | This column can be used to order the different "versions" of a record. | -| dbt_valid_to | The timestamp when this row became invalidated. | The most recent snapshot record will have `dbt_valid_to` set to `null`. | +| dbt_valid_to | The timestamp when this row became invalidated.
For current records, this is `NULL` by default or the value specified in `dbt_valid_to_current`. | The most recent snapshot record will have `dbt_valid_to` set to `NULL` or the specified value. | | dbt_scd_id | A unique key generated for each snapshotted record. | This is used internally by dbt | | dbt_updated_at | The updated_at timestamp of the source record when this snapshot row was inserted. | This is used internally by dbt | diff --git a/website/docs/docs/cloud-integrations/semantic-layer/tableau.md b/website/docs/docs/cloud-integrations/semantic-layer/tableau.md index 15a0a92cf39..1f6755c38fa 100644 --- a/website/docs/docs/cloud-integrations/semantic-layer/tableau.md +++ b/website/docs/docs/cloud-integrations/semantic-layer/tableau.md @@ -46,8 +46,8 @@ Alternatively, you can follow these steps to install the Connector: ## Using the integration 1. **Authentication** — Once you authenticate, the system will direct you to the data source page. -2. **Access all Semantic Layer Objects** — Use the "ALL" data source to access all the metrics, dimensions, and entities configured in your dbt Semantic Layer. Note that the "METRICS_AND_DIMENSIONS" data source has been deprecated and replaced by "ALL". -3. **Access saved queries** — You can optionally access individual [saved queries](/docs/build/saved-queries) that you've defined. These will also show up as unique data sources when you log in. +2. **Access all Semantic Layer Objects** — Use the "ALL" data source to access all the metrics, dimensions, and entities configured in your dbt Semantic Layer. Note that the "METRICS_AND_DIMENSIONS" data source has been deprecated and replaced by "ALL". Be sure to use a live connection since extracts are not supported at this time. +3. **Access saved queries** — You can optionally access individual [saved queries](/docs/build/saved-queries) that you've defined. These will also show up as unique data sources when you log in. 4. **Access worksheet** — From your data source selection, go directly to a worksheet in the bottom left-hand corner. 5. **Query metrics and dimensions** — Then, you'll find all the metrics, dimensions, and entities that are available to query on the left side of your window based on your selection. diff --git a/website/docs/docs/cloud/about-cloud-develop-defer.md b/website/docs/docs/cloud/about-cloud-develop-defer.md index fc55edf8a38..ea059ed3e27 100644 --- a/website/docs/docs/cloud/about-cloud-develop-defer.md +++ b/website/docs/docs/cloud/about-cloud-develop-defer.md @@ -13,11 +13,13 @@ Both the dbt Cloud IDE and the dbt Cloud CLI enable users to natively defer to p -By default, dbt follows these rules: +When using `--defer`, dbt Cloud will follow this order of execution for resolving the `{{ ref() }}` functions. -- dbt uses the production locations of parent models to resolve `{{ ref() }}` functions, based on metadata from the production environment. -- If a development version of a deferred model exists, dbt preferentially uses the development database location when resolving the reference. -- Passing the [`--favor-state`](/reference/node-selection/defer#favor-state) flag overrides the default behavior and _always_ resolve refs using production metadata, regardless of the presence of a development relation. +1. If a development version of a deferred relation exists, dbt preferentially uses the development database location when resolving the reference. +2. 
If a development version doesn't exist, dbt uses the staging locations of parent relations based on metadata from the staging environment. +3. If neither a development nor a staging version exists, dbt uses the production locations of parent relations based on metadata from the production environment. + +**Note:** Passing the `--favor-state` flag will always resolve refs using production metadata, regardless of the presence of a development relation, skipping step #1. For a clean slate, it's a good practice to drop the development schema at the start and end of your development cycle. diff --git a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md index b396ce62feb..d0ba33e95be 100644 --- a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md +++ b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md @@ -13,6 +13,7 @@ dbt Cloud is [hosted](/docs/cloud/about-cloud/architecture) in multiple regions | Region | Location | Access URL | IP addresses | Developer plan | Team plan | Enterprise plan | |--------|----------|------------|--------------|----------------|-----------|-----------------| | North America [^1] | AWS us-east-1 (N. Virginia) | **Multi-tenant:** cloud.getdbt.com
**Cell based:** ACCOUNT_PREFIX.us1.dbt.com | 52.45.144.63
54.81.134.249
52.22.161.231
52.3.77.232
3.214.191.130
34.233.79.135 | ✅ | ✅ | ✅ | +| North America [^1] | Azure
East US 2 (Virginia) | **Cell based:** ACCOUNT_PREFIX.us2.dbt.com | 20.10.67.192/26 | ❌ | ❌ | ✅ | | EMEA [^1] | AWS eu-central-1 (Frankfurt) | emea.dbt.com | 3.123.45.39
3.126.140.248
3.72.153.148 | ❌ | ❌ | ✅ | | EMEA [^1] | Azure
North Europe (Ireland) | **Cell based:** ACCOUNT_PREFIX.eu2.dbt.com | 20.13.190.192/26 | ❌ | ❌ | ✅ | | APAC [^1] | AWS ap-southeast-2 (Sydney)| au.dbt.com | 52.65.89.235
3.106.40.33
13.239.155.206
| ❌ | ❌ | ✅ | diff --git a/website/docs/docs/cloud/billing.md b/website/docs/docs/cloud/billing.md index ad0834c6c98..2c80648d1f9 100644 --- a/website/docs/docs/cloud/billing.md +++ b/website/docs/docs/cloud/billing.md @@ -149,7 +149,7 @@ dbt Labs may institute use limits if reasonable use is exceeded. Additional feat ## Managing usage -From anywhere in the dbt Cloud account, click the **gear icon** and click **Account settings**. The **Billing** option will be on the left side menu under the **Account Settings** heading. Here, you can view individual available plans and the features provided for each. +From dbt Cloud, click on your account name in the left side menu and select **Account settings**. The **Billing** option will be on the left side menu under the **Settings** heading. Here, you can view individual available plans and the features provided for each. ### Usage notifications diff --git a/website/docs/docs/cloud/configure-cloud-cli.md b/website/docs/docs/cloud/configure-cloud-cli.md index 2e0fc174517..5e0a285c5c5 100644 --- a/website/docs/docs/cloud/configure-cloud-cli.md +++ b/website/docs/docs/cloud/configure-cloud-cli.md @@ -104,9 +104,9 @@ With your repo recloned, you can add, edit, and sync files with your repo. To set environment variables in the dbt Cloud CLI for your dbt project: -1. Select the gear icon on the upper right of the page. -2. Then select **Profile Settings**, then **Credentials**. -3. Click on your project and scroll to the **Environment Variables** section. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. +2. Under the **Your profile** section, select **Credentials**. +3. Click on your project and scroll to the **Environment variables** section. 4. Click **Edit** on the lower right and then set the user-level environment variables. ## Use the dbt Cloud CLI diff --git a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md index d14435a97e0..abd3c86d4a8 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md @@ -81,7 +81,7 @@ To configure your own linting rules: :::tip Configure dbtonic linting rules -Refer to the [SQLFluff config file](https://github.com/dbt-labs/jaffle-shop-template/blob/main/.sqlfluff) to add the dbt code (or dbtonic) rules we use for our own projects: +Refer to the [Jaffle shop SQLFluff config file](https://github.com/dbt-labs/jaffle-shop-template/blob/main/.sqlfluff) for dbt-specific (or dbtonic) linting rules we use for our own projects:
dbtonic config code example provided by dbt Labs @@ -231,3 +231,4 @@ To avoid this, break up your model into smaller models (files) so that they are - [User interface](/docs/cloud/dbt-cloud-ide/ide-user-interface) - [Keyboard shortcuts](/docs/cloud/dbt-cloud-ide/keyboard-shortcuts) +- [SQL linting in CI jobs](/docs/deploy/continuous-integration#sql-linting) diff --git a/website/docs/docs/cloud/enable-dbt-copilot.md b/website/docs/docs/cloud/enable-dbt-copilot.md index 07a9f6294da..67a11fed3fc 100644 --- a/website/docs/docs/cloud/enable-dbt-copilot.md +++ b/website/docs/docs/cloud/enable-dbt-copilot.md @@ -36,7 +36,7 @@ Note: To disable (only after enabled), repeat steps 1 to 3, toggle off in step 4 ### Bringing your own OpenAI API key (BYOK) -Once AI features have been enabled, you can provide your organization's OpenAI API key. dbt Cloud will then leverage your OpenAI account and terms to power dbt CoPilot. This will incur billing charges to your organization from OpenAI for requests made by dbt CoPilot. +Once AI features have been enabled, you can provide your organization's OpenAI API key. dbt Cloud will then leverage your OpenAI account and terms to power dbt Copilot. This will incur billing charges to your organization from OpenAI for requests made by dbt Copilot. Note that Azure OpenAI is not currently supported, but will be in the future. @@ -48,4 +48,4 @@ A dbt Cloud admin can provide their API key by following these steps: 3. Scroll to **AI** and select the toggle for **OpenAI** -4. Enter your API key and click **Save**. \ No newline at end of file +4. Enter your API key and click **Save**. diff --git a/website/docs/docs/cloud/git/connect-github.md b/website/docs/docs/cloud/git/connect-github.md index e2bf459275e..df5c6cb0728 100644 --- a/website/docs/docs/cloud/git/connect-github.md +++ b/website/docs/docs/cloud/git/connect-github.md @@ -25,19 +25,21 @@ Connecting your GitHub account to dbt Cloud provides convenience and another lay You can connect your dbt Cloud account to GitHub by installing the dbt Cloud application in your GitHub organization and providing access to the appropriate repositories. To connect your dbt Cloud account to your GitHub account: -1. Navigate to **Your Profile** settings by clicking the gear icon in the top right. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. -2. Select **Linked Accounts** from the left menu. +2. Select **Personal profile** under the **Your profile** section. - +3. Scroll down to **Linked accounts**. -3. In the **Linked Accounts** section, set up your GitHub account connection to dbt Cloud by clicking **Link** to the right of GitHub. This redirects you to your account on GitHub where you will be asked to install and configure the dbt Cloud application. + -4. Select the GitHub organization and repositories dbt Cloud should access. +4. In the **Linked accounts** section, set up your GitHub account connection to dbt Cloud by clicking **Link** to the right of GitHub. This redirects you to your account on GitHub where you will be asked to install and configure the dbt Cloud application. + +5. Select the GitHub organization and repositories dbt Cloud should access. -5. Assign the dbt Cloud GitHub App the following permissions: +6. 
Assign the dbt Cloud GitHub App the following permissions: - Read access to metadata - Read and write access to Checks - Read and write access to Commit statuses @@ -46,8 +48,8 @@ To connect your dbt Cloud account to your GitHub account: - Read and write access to Webhooks - Read and write access to Workflows -6. Once you grant access to the app, you will be redirected back to dbt Cloud and shown a linked account success state. You are now personally authenticated. -7. Ask your team members to individually authenticate by connecting their [personal GitHub profiles](#authenticate-your-personal-github-account). +7. Once you grant access to the app, you will be redirected back to dbt Cloud and shown a linked account success state. You are now personally authenticated. +8. Ask your team members to individually authenticate by connecting their [personal GitHub profiles](#authenticate-your-personal-github-account). ## Limiting repository access in GitHub If you are your GitHub organization owner, you can also configure the dbt Cloud GitHub application to have access to only select repositories. This configuration must be done in GitHub, but we provide an easy link in dbt Cloud to start this process. @@ -67,14 +69,16 @@ After the dbt Cloud administrator [sets up a connection](/docs/cloud/git/connect To connect a personal GitHub account: -1. Navigate to **Your Profile** settings by clicking the gear icon in the top right. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. + +2. Select **Personal profile** under the **Your profile** section. -2. Select **Linked Accounts** in the left menu. If your GitHub account is not connected, you’ll see "No connected account". +3. Scroll down to **Linked accounts**. If your GitHub account is not connected, you’ll see "No connected account". -3. Select **Link** to begin the setup process. You’ll be redirected to GitHub, and asked to authorize dbt Cloud in a grant screen. +4. Select **Link** to begin the setup process. You’ll be redirected to GitHub, and asked to authorize dbt Cloud in a grant screen. -4. Once you approve authorization, you will be redirected to dbt Cloud, and you should now see your connected account. +5. Once you approve authorization, you will be redirected to dbt Cloud, and you should now see your connected account. You can now use the dbt Cloud IDE or dbt Cloud CLI. diff --git a/website/docs/docs/cloud/git/connect-gitlab.md b/website/docs/docs/cloud/git/connect-gitlab.md index f68f09ae73d..648a4543932 100644 --- a/website/docs/docs/cloud/git/connect-gitlab.md +++ b/website/docs/docs/cloud/git/connect-gitlab.md @@ -18,11 +18,12 @@ The steps to integrate GitLab in dbt Cloud depend on your plan. If you are on: ## For dbt Cloud Developer and Team tiers To connect your GitLab account: -1. Navigate to Your Profile settings by clicking the gear icon in the top right. -2. Select **Linked Accounts** in the left menu. -3. Click **Link** to the right of your GitLab account. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. +2. Select **Personal profile** under the **Your profile** section. +3. Scroll down to **Linked accounts**. +4. Click **Link** to the right of your GitLab account. - + When you click **Link**, you will be redirected to GitLab and prompted to sign into your account. 
GitLab will then ask for your explicit authorization: @@ -99,7 +100,13 @@ Once you've accepted, you should be redirected back to dbt Cloud, and your integ ### Personally authenticating with GitLab dbt Cloud developers on the Enterprise plan must each connect their GitLab profiles to dbt Cloud, as every developer's read / write access for the dbt repo is checked in the dbt Cloud IDE or dbt Cloud CLI. -To connect a personal GitLab account, dbt Cloud developers should navigate to Your Profile settings by clicking the gear icon in the top right, then select **Linked Accounts** in the left menu. +To connect a personal GitLab account: + +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. + +2. Select **Personal profile** under the **Your profile** section. + +3. Scroll down to **Linked accounts**. If your GitLab account is not connected, you’ll see "No connected account". Select **Link** to begin the setup process. You’ll be redirected to GitLab, and asked to authorize dbt Cloud in a grant screen. diff --git a/website/docs/docs/cloud/git/import-a-project-by-git-url.md b/website/docs/docs/cloud/git/import-a-project-by-git-url.md index 90c54dbb1b1..5cd3553b07f 100644 --- a/website/docs/docs/cloud/git/import-a-project-by-git-url.md +++ b/website/docs/docs/cloud/git/import-a-project-by-git-url.md @@ -14,8 +14,8 @@ You must use the `git@...` or `ssh:..`. version of your git URL, not the `https: After importing a project by Git URL, dbt Cloud will generate a Deploy Key for your repository. To find the deploy key in dbt Cloud: -1. Click the gear icon in the upper right-hand corner. -2. Click **Account Settings** --> **Projects** and select a project. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. +2. Go to **Projects** and select a project. 3. Click the **Repository** link to the repository details page. 4. Copy the key under the **Deploy Key** section. diff --git a/website/docs/docs/cloud/manage-access/audit-log.md b/website/docs/docs/cloud/manage-access/audit-log.md index a7be86a7f99..de52434be06 100644 --- a/website/docs/docs/cloud/manage-access/audit-log.md +++ b/website/docs/docs/cloud/manage-access/audit-log.md @@ -18,7 +18,7 @@ The dbt Cloud audit log stores all the events that occurred in your organization ## Accessing the audit log -To access the audit log, click the gear icon in the top right, then click **Audit Log**. +To access the audit log, click on your account name in the left side menu and select **Account settings**. diff --git a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md index f814d58777a..5628314c922 100644 --- a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md +++ b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md @@ -49,7 +49,7 @@ The following tabs detail steps on how to modify your user license count: If you're on an Enterprise plan and have the correct [permissions](/docs/cloud/manage-access/enterprise-permissions), you can add or remove licenses by adjusting your user seat count. Note, an IT license does not count toward seat usage. -- To remove a user, go to **Account Settings** and select **Users**. +- To remove a user, click on your account name in the left side menu, click **Account settings** and select **Users**. - Select the user you want to remove, click **Edit**, and then **Delete**. - This action cannot be undone. 
However, you can re-invite the user with the same info if you deleted the user in error.
@@ -64,7 +64,7 @@ If you're on an Enterprise plan and have the correct [permissions](/docs/cloud/m If you're on a Team plan and have the correct [permissions](/docs/cloud/manage-access/self-service-permissions), you can add or remove developers. You'll need to make two changes: -- Adjust your developer user seat count, which manages the users invited to your dbt Cloud project. AND +- Adjust your developer user seat count, which manages the users invited to your dbt Cloud project. - Adjust your developer billing seat count, which manages the number of billable seats. @@ -75,7 +75,7 @@ You can add or remove developers by increasing or decreasing the number of users To add a user in dbt Cloud, you must be an account owner or have admin privileges. -1. From dbt Cloud, click the gear icon at the top right and select **Account Settings**. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. @@ -95,11 +95,11 @@ Great work! After completing those these steps, your dbt Cloud user count and bi To delete a user in dbt Cloud, you must be an account owner or have admin privileges. If the user has a `developer` license type, this will open up their seat for another user or allow the admins to lower the total number of seats. -1. From dbt Cloud, click the gear icon at the top right and select **Account Settings**. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. -2. In **Account Settings** and select **Users**. +2. In **Account Settings**, select **Users**. 3. Select the user you want to delete, then click **Edit**. 4. Click **Delete** in the bottom left. Click **Confirm Delete** to immediately delete the user without additional password prompts. This action cannot be undone. However, you can re-invite the user with the same information if the deletion was made in error. diff --git a/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md b/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md index e5c42c3fa59..067d51513b7 100644 --- a/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md +++ b/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md @@ -45,11 +45,11 @@ You can use the following table to set up the redirect URLs for your application ### Configure the Connection in dbt Cloud (dbt Cloud project admin) Now that you have an OAuth app set up in Databricks, you'll need to add the client ID and secret to dbt Cloud. To do so: - - go to Settings by clicking the gear in the top right. - - on the left, select **Projects** under **Account Settings** - - choose your project from the list - - select **Connection** to edit the connection details - - add the `OAuth Client ID` and `OAuth Client Secret` from the Databricks OAuth app under the **Optional Settings** section + - From dbt Cloud, click on your account name in the left side menu and select **Account settings** + - Select **Projects** from the menu + - Choose your project from the list + - Select **Connection** to edit the connection details + - Add the `OAuth Client ID` and `OAuth Client Secret` from the Databricks OAuth app under the **Optional Settings** section @@ -57,7 +57,8 @@ Now that you have an OAuth app set up in Databricks, you'll need to add the clie Once the Databricks connection via OAuth is set up for a dbt Cloud project, each dbt Cloud user will need to authenticate with Databricks in order to use the IDE. To do so: -- Click the gear icon at the top right and select **Profile settings**. 
+- From dbt Cloud, click on your account name in the left side menu and select **Account settings** +- Select **Profile settings**. - Select **Credentials**. - Choose your project from the list - Select `OAuth` as the authentication method, and click **Save** diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-microsoft-entra-id.md b/website/docs/docs/cloud/manage-access/set-up-sso-microsoft-entra-id.md index 4658141034c..de935627765 100644 --- a/website/docs/docs/cloud/manage-access/set-up-sso-microsoft-entra-id.md +++ b/website/docs/docs/cloud/manage-access/set-up-sso-microsoft-entra-id.md @@ -120,8 +120,9 @@ To complete setup, follow the steps below in the dbt Cloud application. ### Supplying credentials -25. Click the gear icon at the top right and select **Profile settings**. To the left, select **Single Sign On** under **Account Settings**. -26. Click the **Edit** button and supply the following SSO details: +25. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. +26. Click **Single sign-on** from the menu. +27. Click the **Edit** button and supply the following SSO details: | Field | Value | | ----- | ----- | diff --git a/website/docs/docs/cloud/secure/snowflake-privatelink.md b/website/docs/docs/cloud/secure/snowflake-privatelink.md index b943791292f..dc0cb64ba31 100644 --- a/website/docs/docs/cloud/secure/snowflake-privatelink.md +++ b/website/docs/docs/cloud/secure/snowflake-privatelink.md @@ -97,12 +97,18 @@ Once dbt Cloud support completes the configuration, you can start creating new c 4. Configure the remaining data platform details. 5. Test your connection and save it. -## Enable the connection in Snowflake +### Enable the connection in Snowflake hosted on Azure + +:::note + +AWS private internal stages are not currently supported. + +::: To complete the setup, follow the remaining steps from the Snowflake setup guides. The instructions vary based on the platform: -- [Snowflake AWS PrivateLink](https://docs.snowflake.com/en/user-guide/admin-security-privatelink) - [Snowflake Azure Private Link](https://docs.snowflake.com/en/user-guide/privatelink-azure) +- [Azure private endpoints for internal stages](https://docs.snowflake.com/en/user-guide/private-internal-stages-azure) There are some nuances for each connection and you will need a Snowflake administrator. As the Snowflake administrator, call the `SYSTEM$AUTHORIZE_STAGE_PRIVATELINK_ACCESS` function using the privateEndpointResourceID value as the function argument. This authorizes access to the Snowflake internal stage through the private endpoint. @@ -110,14 +116,12 @@ There are some nuances for each connection and you will need a Snowflake adminis USE ROLE ACCOUNTADMIN; --- AWS PrivateLink -SELECT SYSTEMS$AUTHORIZE_STATE_PRIVATELINK_ACCESS ( `AWS VPC ID` ); -- Azure Private Link -SELECT SYSTEMS$AUTHORIZE_STATE_PRIVATELINK_ACCESS ( `AZURE PRIVATE ENDPOINT RESOURCE ID` ); +SELECT SYSTEM$AUTHORIZE_STAGE_PRIVATELINK_ACCESS ( `AZURE PRIVATE ENDPOINT RESOURCE ID` ); ``` + ## Configuring Network Policies If your organization uses [Snowflake Network Policies](https://docs.snowflake.com/en/user-guide/network-policies) to restrict access to your Snowflake account, you will need to add a network rule for dbt Cloud.
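For illustration only, a Snowflake network rule allowing dbt Cloud traffic might look like the sketch below. The rule name, policy name, and IP values are placeholders (the IPs shown are the North America multi-tenant addresses listed earlier on this page), and PrivateLink connections may be identified by private endpoint link identifiers instead, so confirm the right values for your region and connection type with your Snowflake administrator.

```sql
-- Hypothetical sketch: rule and policy names are placeholders
CREATE NETWORK RULE dbt_cloud_access
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('52.45.144.63', '54.81.134.249', '52.22.161.231');

ALTER NETWORK POLICY my_network_policy
  SET ALLOWED_NETWORK_RULE_LIST = ('dbt_cloud_access');
```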
diff --git a/website/docs/docs/collaborate/build-and-view-your-docs.md b/website/docs/docs/collaborate/build-and-view-your-docs.md index 06716a67674..1a16f034eff 100644 --- a/website/docs/docs/collaborate/build-and-view-your-docs.md +++ b/website/docs/docs/collaborate/build-and-view-your-docs.md @@ -24,7 +24,7 @@ To set up a job to generate docs: 1. In the top left, click **Deploy** and select **Jobs**. 2. Create a new job or select an existing job and click **Settings**. 3. Under **Execution Settings**, select **Generate docs on run** and click **Save**. - + *Note, for dbt Docs users you need to configure the job to generate docs when it runs, then manually link that job to your project. Proceed to [configure project documentation](#configure-project-documentation) so your project generates the documentation when this job runs.* @@ -51,12 +51,11 @@ dbt Docs, available on developer plans or dbt Core users, generates a website fr You configure project documentation to generate documentation when the job you set up in the previous section runs. In the project settings, specify the job that generates documentation artifacts for that project. Once you configure this setting, subsequent runs of the job will automatically include a step to generate documentation. -1. Click the gear icon in the top right. -2. Select **Account Settings**. -3. Navigate to **Projects** and select the project that needs documentation. -4. Click **Edit**. -5. Under **Artifacts**, select the job that should generate docs when it runs and click **Save**. - +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. +2. Navigate to **Projects** and select the project that needs documentation. +3. Click **Edit**. +4. Under **Artifacts**, select the job that should generate docs when it runs and click **Save**. + :::tip Use dbt Explorer for a richer documentation experience For a richer and more interactive experience, try out [dbt Explorer](/docs/collaborate/explore-projects), available on [Team or Enterprise plans](https://www.getdbt.com/pricing/). It includes map layers of your DAG, keyword search, interacts with the IDE, model performance, project recommendations, and more. diff --git a/website/docs/docs/collaborate/model-query-history.md b/website/docs/docs/collaborate/model-query-history.md index 0180757f980..3e85883a86e 100644 --- a/website/docs/docs/collaborate/model-query-history.md +++ b/website/docs/docs/collaborate/model-query-history.md @@ -28,7 +28,7 @@ To access the features, you should meet the following: 1. You have a dbt Cloud account on the [Enterprise plan](https://www.getdbt.com/pricing/). Single-tenant accounts should contact their account representative for setup. 2. You have set up a [production](https://docs.getdbt.com/docs/deploy/deploy-environments#set-as-production-environment) deployment environment for each project you want to explore, with at least one successful job run. 3. You have [admin permissions](/docs/cloud/manage-access/enterprise-permissions) in dbt Cloud to edit project settings or production environment settings. -4. Use Snowflake or BigQuery as your data warehouse and can enable query history permissions or work with an admin to do so. Support for additional data platforms coming soon. +4. You use Snowflake (Enterprise tier or higher) or BigQuery as your data warehouse and can enable query history permissions or work with an admin to do so (see the example after this list). Support for additional data platforms coming soon.
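As a rough illustration of what "query history permissions" can involve, a Snowflake admin might grant the connecting role read access to Snowflake's account usage views, as sketched below. The exact grants depend on your warehouse and the dbt Cloud setup instructions, and the role name is a placeholder.

```sql
-- Hypothetical sketch: dbt_cloud_role is a placeholder for the role dbt Cloud connects with
USE ROLE ACCOUNTADMIN;
GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE dbt_cloud_role;
```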
## Enable query history in dbt Cloud diff --git a/website/docs/docs/dbt-cloud-apis/service-tokens.md b/website/docs/docs/dbt-cloud-apis/service-tokens.md index a077b230c28..d9ae52dbc2d 100644 --- a/website/docs/docs/dbt-cloud-apis/service-tokens.md +++ b/website/docs/docs/dbt-cloud-apis/service-tokens.md @@ -25,7 +25,7 @@ You can assign as many permission sets as needed to one token. For more on permi You can generate service tokens if you have a Developer [license](/docs/cloud/manage-access/seats-and-users) and account admin [permissions](/docs/cloud/manage-access/about-user-access#permission-sets). To create a service token in dbt Cloud, follow these steps: -1. Open the **Account Settings** page by clicking the gear icon on the right-hand side. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. 2. On the left sidebar, click on **Service Tokens**. 3. Click the **+ New Token** button to generate a new token. 4. Once the token is generated, you won't be able to view this token again so make sure to save it somewhere safe. diff --git a/website/docs/docs/dbt-versions/experimental-features.md b/website/docs/docs/dbt-versions/experimental-features.md index a621bd4ac44..cc5bf3ff748 100644 --- a/website/docs/docs/dbt-versions/experimental-features.md +++ b/website/docs/docs/dbt-versions/experimental-features.md @@ -18,7 +18,8 @@ You can access experimental features to preview beta features that haven’t yet To enable or disable experimental features: -1. Navigate to **Profile settings** by clicking the gear icon in the top right. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. +2. Go to **Personal profile** under the **Your profile** header. 3. Find Experimental features at the bottom of Your Profile page. 4. Click **Beta** to toggle the features on or off as shown in the following image. ![Experimental features](/img/docs/dbt-versions/experimental-feats.png) diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md index c7fc3ee16d8..536c45ea045 100644 --- a/website/docs/docs/dbt-versions/release-notes.md +++ b/website/docs/docs/dbt-versions/release-notes.md @@ -20,6 +20,10 @@ Release notes are grouped by month for both multi-tenant and virtual private clo ## November 2024 - **Behavior change**: If you use a custom microbatch macro, set a [`require_batched_execution_for_custom_microbatch_strategy` behavior flag](/reference/global-configs/behavior-changes#custom-microbatch-strategy) in your `dbt_project.yml` to enable batched execution. If you don't have a custom microbatch macro, you don't need to set this flag as dbt will handle microbatching automatically for any model using the [microbatch strategy](/docs/build/incremental-microbatch#how-microbatch-compares-to-other-incremental-strategies). +- **Enhancement**: If you have Advanced CI's [compare changes](/docs/deploy/advanced-ci#compare-changes) feature enabled, you can optimize performance when running comparisons by using custom dbt syntax to customize deferral usage, exclude specific large models (or groups of models with tags), and more. Refer to [Compare changes custom commands](/docs/deploy/job-commands#compare-changes-custom-commands) for examples of how to customize the comparison command. +- **New**: SQL linting in CI jobs is now generally available in dbt Cloud.
You can enable SQL linting in your CI jobs, using [SQLFluff](https://sqlfluff.com/), to automatically lint all SQL files in your project as a run step before your CI job builds. SQLFluff linting is available on [dbt Cloud Versionless](/docs/dbt-versions/versionless-cloud) and to dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) accounts. Refer to [SQL linting](/docs/deploy/continuous-integration#sql-linting) for more information. +- **New**: Use the [`dbt_valid_to_current`](/reference/resource-configs/dbt_valid_to_current) config to set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date). By default, this value is `NULL`. When configured, dbt will use the specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table. This feature is available in dbt Cloud Versionless and dbt Core v1.9 and later. +- **New**: Use the [`event_time`](/reference/resource-configs/event-time) configuration to specify "at what time did the row occur." This configuration is required for [Incremental microbatch](/docs/build/incremental-microbatch) and can be added to ensure you're comparing overlapping times in [Advanced CI's compare changes](/docs/deploy/advanced-ci). Available in dbt Cloud Versionless and dbt Core v1.9 and higher. - **Fix**: This update improves [dbt Semantic Layer Tableau integration](/docs/cloud-integrations/semantic-layer/tableau) making query parsing more reliable. Some key fixes include: - Error messages for unsupported joins between saved queries and ALL tables. - Improved handling of queries when multiple tables are selected in a data source. @@ -28,6 +32,7 @@ Release notes are grouped by month for both multi-tenant and virtual private clo - **Enhancement**: The dbt Semantic Layer supports creating new credentials for users who don't have permissions to create service tokens. In the **Credentials & service tokens** side panel, the **+Add Service Token** option is unavailable for those users who don't have permission. Instead, the side panel displays a message indicating that the user doesn't have permission to create a service token and should contact their administration. Refer to [Set up dbt Semantic Layer](/docs/use-dbt-semantic-layer/setup-sl) for more details. ## October 2024 + Documentation for new features and functionality announced at Coalesce 2024: @@ -51,7 +56,7 @@ Release notes are grouped by month for both multi-tenant and virtual private clo - [Python SDK](https://docs.getdbt.com/docs/dbt-cloud-apis/sl-python) is now generally available - + - **Behavior change:** [Multi-factor authentication](/docs/cloud/manage-access/mfa) is now enforced on all users who log in with username and password credentials. - **Enhancement**: The dbt Semantic Layer JDBC now allows users to paginate `semantic_layer.metrics()` and `semantic_layer.dimensions()` for metrics and dimensions using `page_size` and `page_number` parameters. Refer to [Paginate metadata calls](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata) for more information. - **Enhancement**: The dbt Semantic Layer JDBC now allows you to filter your metrics to include only those that contain a specific substring, using the `search` parameter. If no substring is provided, the query returns all metrics. Refer to [Fetch metrics by substring search](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata) for more information. 
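To illustrate the pagination and search enhancements described in the notes above, here is a sketch of what those JDBC metadata calls can look like. The parameter names follow the `semantic_layer.metrics()` interface referenced in the linked docs, and the `'revenue'` substring is just an example:

```sql
-- Paginate metric metadata, 10 metrics per page
select * from {{ semantic_layer.metrics(page_size=10, page_number=1) }}

-- Return only metrics whose name contains the substring 'revenue'
select * from {{ semantic_layer.metrics(search='revenue') }}
```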
diff --git a/website/docs/docs/deploy/advanced-ci.md b/website/docs/docs/deploy/advanced-ci.md index 8d4d6da8897..9081ce0c16c 100644 --- a/website/docs/docs/deploy/advanced-ci.md +++ b/website/docs/docs/deploy/advanced-ci.md @@ -3,6 +3,7 @@ title: "Advanced CI" id: "advanced-ci" sidebar_label: "Advanced CI" description: "Advanced CI enables developers to compare changes by demonstrating the changes the code produces." +image: /img/docs/dbt-cloud/example-ci-compare-changes-tab.png --- # Advanced CI @@ -20,9 +21,14 @@ dbt Labs plans to provide additional Advanced CI features in the near future. Mo ::: +## Prerequisites +- You have a dbt Cloud Enterprise account. +- You have [Advanced CI features](/docs/cloud/account-settings#account-access-to-advanced-features) enabled. +- You use a supported data platform: BigQuery, Databricks, Postgres, or Snowflake. Support for additional data platforms is coming soon. + ## Compare changes feature {#compare-changes} -For [CI jobs](/docs/deploy/ci-jobs) that have the **dbt compare** option enabled, dbt Cloud compares the changes between the last applied state of the production environment (defaulting to deferral for lower compute costs) and the latest changes from the pull request, whenever a pull request is opened or new commits are pushed. +For [CI jobs](/docs/deploy/ci-jobs) that have the [**dbt compare** option enabled](/docs/deploy/ci-jobs#set-up-ci-jobs), dbt Cloud compares the changes between the last applied state of the production environment (defaulting to deferral for lower compute costs) and the latest changes from the pull request, whenever a pull request is opened or new commits are pushed. dbt reports the comparison differences in: @@ -31,6 +37,16 @@ dbt reports the comparison differences in: +### Optimizing comparisons + +When an [`event_time`](/reference/resource-configs/event-time) column is specified on your model, compare changes can optimize comparisons by using only the overlapping timeframe (meaning the timeframe exists in both the CI and production environment), helping you avoid incorrect row-count changes and return results faster. + +This is useful in scenarios like: +- **Subset of data in CI** — When CI builds only a [subset of data](/best-practices/best-practice-workflows#limit-the-data-processed-when-in-development) (like the most recent 7 days), compare changes would interpret the excluded data as "deleted rows." Configuring `event_time` allows you to avoid this issue by limiting comparisons to the overlapping timeframe, preventing false alerts about data deletions that are just filtered out in CI. +- **Fresher data in CI than in production** — When your CI job includes fresher data than production (because it has run more recently), compare changes would flag the additional rows as "new" data, even though they’re just fresher data in CI. With `event_time` configured, the comparison only includes the shared timeframe and correctly reflects actual changes in the data. + + + ## About the cached data After [comparing changes](#compare-changes), dbt Cloud stores a cache of no more than 100 records for each modified model for preview purposes. By caching this data, you can view the examples of changed data without rerunning the comparison against the data warehouse every time (optimizing for lower compute costs). To display the changes, dbt Cloud uses a cached version of a sample of the data records.
These data records are queried from the database using the connection configuration (such as user, role, service account, and so on) that's set in the CI job's environment. diff --git a/website/docs/docs/deploy/artifacts.md b/website/docs/docs/deploy/artifacts.md index cff36bfafba..cb8d3e85da0 100644 --- a/website/docs/docs/deploy/artifacts.md +++ b/website/docs/docs/deploy/artifacts.md @@ -18,7 +18,11 @@ To view a resource, its metadata, and what commands are needed, refer to [genera The following steps are for legacy dbt Docs only. For the current documentation experience, see [dbt Explorer](/docs/collaborate/explore-projects). -While running any job can produce artifacts, you should only associate one production job with a given project to produce the project's artifacts. You can designate this connection on the **Project details** page. To access this page, click the gear icon in the upper right, select **Account Settings**, select your project, and click **Edit** in the lower right. Under **Artifacts**, select the jobs you want to produce documentation and source freshness artifacts for. +While running any job can produce artifacts, you should only associate one production job with a given project to produce the project's artifacts. You can designate this connection on the **Project details** page. To access this page: + +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. +2. Select your project, and click **Edit** in the lower right. +3. Under **Artifacts**, select the jobs you want to produce documentation and source freshness artifacts for. diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md index 7ab7f65796d..3da04ff6948 100644 --- a/website/docs/docs/deploy/ci-jobs.md +++ b/website/docs/docs/deploy/ci-jobs.md @@ -6,20 +6,19 @@ description: "Learn how to create and set up CI checks to test code changes befo You can set up [continuous integration](/docs/deploy/continuous-integration) (CI) jobs to run when someone opens a new pull request (PR) in your dbt Git repository. By running and testing only _modified_ models, dbt Cloud ensures these jobs are as efficient and resource-conscious as possible on your data platform. -## Set up CI jobs {#set-up-ci-jobs} - -dbt Labs recommends that you create your CI job in a dedicated dbt Cloud [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) that's connected to a staging database. Having a separate environment dedicated for CI will provide better isolation between your temporary CI schema builds and your production data builds. Additionally, sometimes teams need their CI jobs to be triggered when a PR is made to a branch other than main. If your team maintains a staging branch as part of your release process, having a separate environment will allow you to set a [custom branch](/faqs/Environments/custom-branch-settings) and, accordingly, the CI job in that dedicated environment will be triggered only when PRs are made to the specified custom branch. To learn more, refer to [Get started with CI tests](/guides/set-up-ci). - -### Prerequisites +## Prerequisites - You have a dbt Cloud account. - CI features: - For both the [concurrent CI checks](/docs/deploy/continuous-integration#concurrent-ci-checks) and [smart cancellation of stale builds](/docs/deploy/continuous-integration#smart-cancellation) features, your dbt Cloud account must be on the [Team or Enterprise plan](https://www.getdbt.com/pricing/).
- - The [SQL linting](/docs/deploy/continuous-integration#sql-linting) feature is currently available in [beta](/docs/dbt-versions/product-lifecycles#dbt-cloud) to a limited group of users and is gradually being rolled out. If you're in the beta, the **Linting** option is available for use. + - [SQL linting](/docs/deploy/continuous-integration#sql-linting) is available on [dbt Cloud Versionless](/docs/dbt-versions/versionless-cloud) and to dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) accounts. You should have [SQLFluff configured](/docs/deploy/continuous-integration#to-configure-sqlfluff-linting) in your project. - [Advanced CI](/docs/deploy/advanced-ci) features: - For the [compare changes](/docs/deploy/advanced-ci#compare-changes) feature, your dbt Cloud account must be on the [Enterprise plan](https://www.getdbt.com/pricing/) and have enabled Advanced CI features. Please ask your [dbt Cloud administrator to enable](/docs/cloud/account-settings#account-access-to-advanced-ci-features) this feature for you. After enablement, the **dbt compare** option becomes available in the CI job settings. - Set up a [connection with your Git provider](/docs/cloud/git/git-configuration-in-dbt-cloud). This integration lets dbt Cloud trigger and run jobs on your behalf. - If you're using a native [GitLab](/docs/cloud/git/connect-gitlab) integration, you need a paid or self-hosted account that includes support for GitLab webhooks and [project access tokens](https://docs.gitlab.com/ee/user/project/settings/project_access_tokens.html). If you're using GitLab Free, merge requests will trigger CI jobs but CI job status updates (success or failure of the job) will not be reported back to GitLab. +## Set up CI jobs {#set-up-ci-jobs} + +dbt Labs recommends that you create your CI job in a dedicated dbt Cloud [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) that's connected to a staging database. Having a separate environment dedicated to CI will provide better isolation between your temporary CI schema builds and your production data builds. Additionally, sometimes teams need their CI jobs to be triggered when a PR is made to a branch other than main. If your team maintains a staging branch as part of your release process, having a separate environment will allow you to set a [custom branch](/faqs/Environments/custom-branch-settings) and, accordingly, the CI job in that dedicated environment will be triggered only when PRs are made to the specified custom branch. To learn more, refer to [Get started with CI tests](/guides/set-up-ci). To make CI job creation easier, many options on the **CI job** page are set to default values that dbt Labs recommends you use. If you don't want to use the defaults, you can change them. @@ -36,10 +35,17 @@ To make CI job creation easier, many options on the **CI job** page are set to d 4. Options in the **Execution settings** section: - **Commands** — By default, this includes the `dbt build --select state:modified+` command. This informs dbt Cloud to build only new or changed models and their downstream dependents. Importantly, state comparison can only happen when there is a deferred environment selected to compare state to. Click **Add command** to add more [commands](/docs/deploy/job-commands) that you want to be invoked when this job runs. - - **Linting** — Enable this option for dbt to [lint the SQL files](/docs/deploy/continuous-integration#sql-linting) in your project as the first step in `dbt run`.
If this check runs into an error, dbt can either **Stop running on error** or **Continue running on error**. + - **Linting** — Enable this option for dbt to [lint the SQL files](/docs/deploy/continuous-integration#sql-linting) in your project as the first step in `dbt run`. If this check runs into an error, dbt can either **Stop running on error** or **Continue running on error**. - **dbt compare** — Enable this option to compare the last applied state of the production environment (if one exists) with the latest changes from the pull request, and identify what those differences are. To enable record-level comparison and primary key analysis, you must add a [primary key constraint](/reference/resource-properties/constraints) or [uniqueness test](/reference/resource-properties/data-tests#unique). Otherwise, you'll receive a "Primary key missing" error message in dbt Cloud. To review the comparison report, navigate to the [Compare tab](/docs/deploy/run-visibility#compare-tab) in the job run's details. A summary of the report is also available from the pull request in your Git provider (see the [CI report example](#example-ci-report)). + + :::info Optimization tip + When you enable the **dbt compare** checkbox, you can customize the comparison command to optimize your CI job. For example, if you have large models that take a long time to compare, you can exclude them to speed up the process using the [`--exclude` flag](/reference/node-selection/exclude). Refer to [compare changes custom commands](/docs/deploy/job-commands#compare-changes-custom-commands) for more details. + + Additionally, if you set [`event_time`](/reference/resource-configs/event-time) in your models/seeds/snapshots/sources, dbt can compare matching date ranges between tables by filtering to the overlapping timeframe. This is useful for faster CI workflows or custom sampling setups. + ::: + - **Compare changes against an environment (Deferral)** — By default, it’s set to the **Production** environment if you created one. This option allows dbt Cloud to check the state of the code in the PR against the code running in the deferred environment, so as to check only the modified code instead of building the full table or the entire DAG. :::info diff --git a/website/docs/docs/deploy/continuous-integration.md b/website/docs/docs/deploy/continuous-integration.md index 4e152b0a97e..38ce34678ce 100644 --- a/website/docs/docs/deploy/continuous-integration.md +++ b/website/docs/docs/deploy/continuous-integration.md @@ -5,7 +5,7 @@ description: "You can set up continuous integration (CI) checks to test every si pagination_next: "docs/deploy/advanced-ci" --- -To implement a continuous integration (CI) workflow in dbt Cloud, you can set up automation that tests code changes by running [CI jobs](/docs/deploy/ci-jobs) before merging to production. dbt Cloud tracks the state of what’s running in your production environment so, when you run a CI job, only the modified data assets in your pull request (PR) and their downstream dependencies are built and tested in a staging schema. You can also view the status of the CI checks (tests) directly from within the PR; this information is posted to your Git provider as soon as a CI job completes. Additionally, you can enable settings in your Git provider that allow PRs only with successful CI checks be approved for merging.
+To implement a continuous integration (CI) workflow in dbt Cloud, you can set up automation that tests code changes by running [CI jobs](/docs/deploy/ci-jobs) before merging to production. dbt Cloud tracks the state of what’s running in your production environment so, when you run a CI job, only the modified data assets in your pull request (PR) and their downstream dependencies are built and tested in a staging schema. You can also view the status of the CI checks (tests) directly from within the PR; this information is posted to your Git provider as soon as a CI job completes. Additionally, you can enable settings in your Git provider that allow PRs only with successful CI checks to be approved for merging. @@ -13,11 +13,11 @@ Using CI helps: - Provide increased confidence and assurances that project changes will work as expected in production. - Reduce the time it takes to push code changes to production, through build and test automation, leading to better business outcomes. -Allow organizations to make code changes in a standardized and governed way that ensure code quality without sacrificing speed. +Allow organizations to make code changes in a standardized and governed way that ensures code quality without sacrificing speed. ## How CI works -When you [set up CI jobs](/docs/deploy/ci-jobs#set-up-ci-jobs), dbt Cloud listens for notification from your Git provider indicating that a new PR has been opened or updated with new commits. When dbt Cloud receives one of these notifications, it enqueues a new run of the CI job. +When you [set up CI jobs](/docs/deploy/ci-jobs#set-up-ci-jobs), dbt Cloud listens for a notification from your Git provider indicating that a new PR has been opened or updated with new commits. When dbt Cloud receives one of these notifications, it enqueues a new run of the CI job. dbt Cloud builds and tests models, semantic models, metrics, and saved queries affected by the code change in a temporary schema, unique to the PR. This process ensures that the code builds without error and that it matches the expectations as defined by the project's dbt tests. The unique schema name follows the naming convention `dbt_cloud_pr__` (for example, `dbt_cloud_pr_1862_1704`) and can be found in the run details for the given run, as shown in the following image: @@ -31,16 +31,16 @@ dbt Cloud deletes the temporary schema from your warehouse. The [dbt Cloud scheduler](/docs/deploy/job-scheduler) executes CI jobs differently from other deployment jobs in these important ways: -- **Concurrent CI checks** — CI runs triggered by the same dbt Cloud CI job execute concurrently (in parallel), when appropriate. -- **Smart cancellation of stale builds** — Automatically cancels stale, in-flight CI runs when there are new commits to the PR. -- **Run slot treatment** — CI runs don't consume a run slot. -- **SQL linting** — When enabled, automatically lints all SQL files in your project as a run step before your CI job builds. +- [**Concurrent CI checks**](#concurrent-ci-checks) — CI runs triggered by the same dbt Cloud CI job execute concurrently (in parallel), when appropriate. +- [**Smart cancellation of stale builds**](#smart-cancellation-of-stale-builds) — Automatically cancels stale, in-flight CI runs when there are new commits to the PR. +- [**Run slot treatment**](#run-slot-treatment) — CI runs don't consume a run slot. +- [**SQL linting**](#sql-linting) — When enabled, automatically lints all SQL files in your project as a run step before your CI job builds.
### Concurrent CI checks When teammates collaborating on the same dbt project create pull requests on the same dbt repository, the same CI job will get triggered. Since each run builds into a dedicated, temporary schema that’s tied to the pull request, dbt Cloud can safely execute CI runs _concurrently_ instead of _sequentially_ (unlike deployment dbt Cloud jobs). Because no one needs to wait for one CI run to finish before another one can start, with concurrent CI checks, your whole team can test and integrate dbt code faster. -Below describes the conditions when CI checks are run concurrently and when they’re not: +The following describes the conditions when CI checks are run concurrently and when they’re not: - CI runs with different PR numbers execute concurrently. - CI runs with the _same_ PR number and _different_ commit SHAs execute serially because they’re building into the same schema. dbt Cloud will run the latest commit and cancel any older, stale commits. For details, refer to [Smart cancellation of stale builds](#smart-cancellation). @@ -56,10 +56,19 @@ When you push a new commit to a PR, dbt Cloud enqueues a new CI run for the late CI runs don't consume run slots. This guarantees a CI check will never block a production run. -### SQL linting +### SQL linting -When enabled for your CI job, dbt invokes [SQLFluff](https://sqlfluff.com/) which is a modular and configurable SQL linter that warns you of complex functions, syntax, formatting, and compilation errors. By default, it lints all the changed SQL files in your project (compared to the last deferred production state). +Available for [dbt Cloud Versionless](/docs/dbt-versions/versionless-cloud) and dbt Cloud Team or Enterprise accounts. -If the linter runs into errors, you can specify whether dbt should stop running the job on error or continue running it on error. When failing jobs, it helps reduce compute costs by avoiding builds for pull requests that don't meet your SQL code quality CI check. +When [enabled for your CI job](/docs/deploy/ci-jobs#set-up-ci-jobs), dbt invokes [SQLFluff](https://sqlfluff.com/), a modular and configurable SQL linter that warns you of complex functions, syntax, formatting, and compilation errors. By default, it lints all the changed SQL files in your project (compared to the last deferred production state). -You can use [SQLFluff Configuration Files](https://docs.sqlfluff.com/en/stable/configuration/setting_configuration.html#configuration-files) to override the default linting behavior in dbt. Create an `.sqlfluff` configuration file in your project, add your linting rules to it, and dbt Cloud will use them when linting. For complete details, refer to [Custom Usage](https://docs.sqlfluff.com/en/stable/gettingstarted.html#custom-usage) in the SQLFluff documentation. +If the linter runs into errors, you can specify whether dbt should stop running the job on error or continue running it on error. Failing these jobs early helps reduce compute costs by avoiding builds for pull requests that don't meet your SQL code quality check. + +#### To configure SQLFluff linting: +You can optionally configure SQLFluff linting rules to override the default linting behavior. + +- Use [SQLFluff Configuration Files](https://docs.sqlfluff.com/en/stable/configuration/setting_configuration.html#configuration-files) to override the default linting behavior in dbt.
+- Create a `.sqlfluff` configuration file in your project, add your linting rules to it, and dbt Cloud will use them when linting. + - When configuring, you can use `dbt` as the templater (for example, `templater = dbt`). + - If you're using the dbt Cloud IDE, dbt Cloud CLI, or any other editor, refer to [Customize linting](/docs/cloud/dbt-cloud-ide/lint-format#customize-linting) for guidance on how to add the dbt-specific (or dbtonic) linting rules we use for our own project. +- For complete details, refer to [Custom Usage](https://docs.sqlfluff.com/en/stable/gettingstarted.html#custom-usage) in the SQLFluff documentation. (A minimal example `.sqlfluff` file is sketched below.) diff --git a/website/docs/docs/deploy/job-commands.md b/website/docs/docs/deploy/job-commands.md index 09517262e93..29c98f1916c 100644 --- a/website/docs/docs/deploy/job-commands.md +++ b/website/docs/docs/deploy/job-commands.md @@ -42,14 +42,36 @@ For every job, you have the option to select the [Generate docs on run](/docs/co You can add or remove as many dbt commands as necessary for every job. However, you need to have at least one dbt command. There are a few commands listed as "dbt Cloud CLI" or "dbt Core" in the [dbt Command reference](/reference/dbt-commands) page. This means they are meant for use in dbt Core or the dbt Cloud CLI, and not in the dbt Cloud IDE. - :::tip Using selectors -Use [selectors](/reference/node-selection/syntax) as a powerful way to select and execute portions of your project in a job run. For example, to run tests for one_specific_model, use the selector: `dbt test --select one_specific_model`. The job will still run if a selector doesn't match any models. +Use [selectors](/reference/node-selection/syntax) as a powerful way to select and execute portions of your project in a job run. For example, to run tests for `one_specific_model`, use the selector: `dbt test --select one_specific_model`. The job will still run if a selector doesn't match any models. ::: -**Job outcome** — During a job run, the commands are "chained" together and executed as run steps. If one of the run steps in the chain fails, then the subsequent steps aren't executed, and the job will fail. +#### Compare changes custom commands +If you have Advanced CI's [compare changes](/docs/deploy/advanced-ci#compare-changes) feature enabled and have selected the **dbt compare** checkbox, you can add custom dbt commands to optimize running the comparison (for example, to exclude specific large models, or groups of models with tags). Running comparisons on large models can significantly increase the time it takes for CI jobs to complete. + + + +The following examples highlight how you can customize the dbt compare command box: + +- Exclude the large `fct_orders` model from the comparison to run a CI job on fewer or smaller models and reduce job time/resource consumption. Use the following command: + + ```sql + --select state:modified --exclude fct_orders + ``` +- Exclude models based on tags for scenarios like when models share a common feature or function. Use the following command: + + ```sql + --select state:modified --exclude tag:tagname_a tag:tagname_b + ``` +- Include models that were directly modified and also those one step downstream using the `modified+1` selector. Use the following command: + ```sql + --select state:modified+1 + ``` + +#### Job outcome +During a job run, the commands are "chained" together and executed as run steps. If one of the run steps in the chain fails, then the subsequent steps aren't executed, and the job will fail.
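Returning to the SQLFluff configuration steps above, here is a hedged sketch of a minimal `.sqlfluff` file at the root of a dbt project. The dialect and indentation values are illustrative assumptions, not requirements:

```ini
[sqlfluff]
# Render SQL through dbt so the linter understands Jinja, ref(), and source()
templater = dbt
# Set this to your warehouse's SQL dialect (Snowflake is assumed here)
dialect = snowflake

[sqlfluff:indentation]
# Example rule override: lint for 4-space indentation
tab_space_size = 4
```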
In the following example image, the first four run steps are successful. However, if the fifth run step (`dbt run --select state:modified+ --full-refresh --fail-fast`) fails, then the next run steps aren't executed, and the entire job fails. The failed job returns a non-zero [exit code](/reference/exit-codes) and "Error" job status: diff --git a/website/docs/docs/deploy/job-notifications.md b/website/docs/docs/deploy/job-notifications.md index d898bc813e0..fb4b5f557e6 100644 --- a/website/docs/docs/deploy/job-notifications.md +++ b/website/docs/docs/deploy/job-notifications.md @@ -52,6 +52,8 @@ You can receive email alerts about jobs by configuring the dbt Cloud email notif You can receive Slack alerts about jobs by setting up the Slack integration and then configuring the dbt Cloud Slack notification settings. dbt Cloud integrates with Slack via OAuth to ensure secure authentication. :::note +Virtual Private Cloud (VPC) admins must [contact support](mailto:support@getdbt.com) to complete the Slack integration. + If there has been a change in user roles or Slack permissions where you no longer have access to edit a configured Slack channel, please [contact support](mailto:support@getdbt.com) for assistance. ::: diff --git a/website/docs/faqs/Accounts/delete-users.md b/website/docs/faqs/Accounts/delete-users.md index a7e422fd82c..1efbb018242 100644 --- a/website/docs/faqs/Accounts/delete-users.md +++ b/website/docs/faqs/Accounts/delete-users.md @@ -8,15 +8,15 @@ id: delete-users To delete a user in dbt Cloud, you must be an account owner or have admin privileges. If the user has a `developer` license type, this will open up their seat for another user or allow the admins to lower the total number of seats. -1. From dbt Cloud, click the gear icon at the top right and select **Account Settings**. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. - + 2. In **Account Settings**, select **Users** under **Teams**. 3. Select the user you want to delete, then click **Edit**. 4. Click **Delete** in the bottom left. Click **Confirm Delete** to immediately delete the user without additional password prompts. This action cannot be undone. However, you can re-invite the user with the same information if the deletion was made in error. - + If you are on a **Teams** plan and you are deleting users to reduce the number of billable seats, you also need to take these steps to lower the license count: 1. In **Account Settings**, select **Billing**. diff --git a/website/docs/faqs/Environments/custom-branch-settings.md b/website/docs/faqs/Environments/custom-branch-settings.md index 70052488ac6..6e998b267d8 100644 --- a/website/docs/faqs/Environments/custom-branch-settings.md +++ b/website/docs/faqs/Environments/custom-branch-settings.md @@ -27,7 +27,7 @@ For example, if you want to use the `develop` branch of a connected repository: - Enter **develop** as the name of your custom branch - Click **Save** - + ## Deployment diff --git a/website/docs/faqs/Environments/delete-environment-job.md b/website/docs/faqs/Environments/delete-environment-job.md index eb9ac511a7c..5b167b6df13 100644 --- a/website/docs/faqs/Environments/delete-environment-job.md +++ b/website/docs/faqs/Environments/delete-environment-job.md @@ -18,7 +18,7 @@ To delete a job or multiple jobs in dbt Cloud: 4. Scroll to the bottom of the page and click **Delete** to delete the job.
[Image: Delete a job]
@@ -35,10 +35,7 @@ Deleting an environment automatically deletes its associated job(s). If you want 3. Click **Settings** on the top right of the page and then click **Edit**. 4. Scroll to the bottom of the page and click **Delete** to delete the environment.
[Image: Delete an environment]
+ 5. Confirm your action in the **Confirm Delete** pop-up by clicking **Confirm Delete** in the bottom right to delete the environment immediately. This action cannot be undone. However, you can create a new environment with the same information if the deletion was made in error.

diff --git a/website/docs/faqs/Git/git-migration.md b/website/docs/faqs/Git/git-migration.md index 156227d59ae..7d7a503c16a 100644 --- a/website/docs/faqs/Git/git-migration.md +++ b/website/docs/faqs/Git/git-migration.md @@ -16,7 +16,7 @@ To migrate from one git provider to another, refer to the following steps to avo 2. Go back to dbt Cloud and set up your [integration for the new git provider](/docs/cloud/git/connect-github), if needed. 3. Disconnect the old repository in dbt Cloud by going to **Account Settings** and then **Projects**. Click on the **Repository** link, then click **Edit** and **Disconnect**. - + 4. On the same page, connect to the new git provider repository by clicking **Configure Repository** - If you're using the native integration, you may need to authorize it via OAuth. diff --git a/website/docs/faqs/Git/github-permissions.md b/website/docs/faqs/Git/github-permissions.md index 075343e0c5e..c244b6742b9 100644 --- a/website/docs/faqs/Git/github-permissions.md +++ b/website/docs/faqs/Git/github-permissions.md @@ -40,7 +40,7 @@ Disconnect the GitHub and dbt Cloud integration in dbt Cloud. 6. Return to your **Project details** page and reconnect your repository by clicking the **Configure Repository** link. 7. Configure your repository and click **Save** - + ## Support If you've tried these workarounds and are still experiencing this behavior — reach out to the [dbt Support](mailto:support@getdbt.com) team and we'll be happy to help! diff --git a/website/docs/faqs/Git/gitignore.md b/website/docs/faqs/Git/gitignore.md index 4386a27d4f2..f5892b30b83 100644 --- a/website/docs/faqs/Git/gitignore.md +++ b/website/docs/faqs/Git/gitignore.md @@ -47,9 +47,9 @@ For more info on `gitignore` syntax, refer to the [Git docs](https://git-scm.com 11. Return to the dbt Cloud IDE and use the **Change Branch** button to switch to the main branch of the project. 12. Once the branch has changed, click the **Pull from remote** button to pull in all the changes. -13. Verify the changes by making sure the files/folders in the `.gitignore `file are in italics. +13. Verify the changes by making sure the files/folders in the `.gitignore` file are in italics. - + ### Fix in the git provider diff --git a/website/docs/faqs/Git/managed-repo.md b/website/docs/faqs/Git/managed-repo.md index 17b75256fb6..c357fce112c 100644 --- a/website/docs/faqs/Git/managed-repo.md +++ b/website/docs/faqs/Git/managed-repo.md @@ -7,4 +7,8 @@ id: managed-repo dbt Labs can send your managed repository through a ZIP file in its current state for you to push up to a git provider. After that, you'd just need to switch over to the [repo in your project](/docs/cloud/git/import-a-project-by-git-url) to point to the new repository. -When you're ready to do this, [contact the dbt Labs Support team](mailto:support@getdbt.com) with your request and your managed repo URL, which you can find by navigating to your project setting. To find project settings, click the gear icon in the upper right, select **Account settings**, click **Projects**, and then select your project. Under **Repository** in the project details page, you can find your managed repo URL. +When you're ready to do this, [contact the dbt Labs Support team](mailto:support@getdbt.com) with your request and your managed repo URL, which you can find by navigating to your project settings. To find project settings: + +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. +2. Click **Projects**, and then select your project. +3. Under **Repository** in the project details page, you can find your managed repo URL.
diff --git a/website/docs/faqs/Project/dbt-source-freshness.md b/website/docs/faqs/Project/dbt-source-freshness.md index e2554579ffc..61bd5d035ba 100644 --- a/website/docs/faqs/Project/dbt-source-freshness.md +++ b/website/docs/faqs/Project/dbt-source-freshness.md @@ -11,4 +11,4 @@ The `dbt source freshness` command will output a pass/warning/error status for e Additionally, dbt will write the freshness results to a file in the `target/` directory called `sources.json` by default. You can also override this destination by passing the `-o` flag to the `dbt source freshness` command. -After enabling source freshness within a job, configure [Artifacts](/docs/deploy/artifacts) in your **Project Details** page, which you can find by clicking the gear icon and then selecting **Account settings**. You can see the current status for source freshness by clicking **View Sources** in the job page. +After enabling source freshness within a job, configure [Artifacts](/docs/deploy/artifacts) in your **Project Details** page, which you can find by selecting your account name on the left side menu in dbt Cloud and clicking **Account settings**. You can see the current status for source freshness by clicking **View Sources** in the job page. diff --git a/website/docs/faqs/Project/delete-a-project.md b/website/docs/faqs/Project/delete-a-project.md index 5fde3fee9cd..36c6bf4f160 100644 --- a/website/docs/faqs/Project/delete-a-project.md +++ b/website/docs/faqs/Project/delete-a-project.md @@ -7,12 +7,12 @@ --- To delete a project in dbt Cloud, you must be the account owner or have admin privileges. -1. From dbt Cloud, click the gear icon at the top right corner and select **Account Settings**. +1. From dbt Cloud, click on your account name in the left side menu and select **Account settings**. - + 2. In **Account Settings**, select **Projects**. Click the project you want to delete from the **Projects** page. 3. Click the edit icon in the lower right-hand corner of the **Project Details**. A **Delete** option will appear on the left side of the same details view. 4. Select **Delete**. Confirm the action to immediately delete the project without additional password prompts. There will be no account password prompt, and the project is deleted immediately after confirmation. Once a project is deleted, this action cannot be undone. - + diff --git a/website/docs/guides/adapter-creation.md b/website/docs/guides/adapter-creation.md index 278e2a9fe14..1a69be98b29 100644 --- a/website/docs/guides/adapter-creation.md +++ b/website/docs/guides/adapter-creation.md @@ -1345,8 +1345,6 @@ Breaking this down: - Implementation instructions: -- Future plans - - Contributor recognition (if applicable) diff --git a/website/docs/guides/athena-qs.md b/website/docs/guides/athena-qs.md new file mode 100644 index 00000000000..b1933bdd076 --- /dev/null +++ b/website/docs/guides/athena-qs.md @@ -0,0 +1,334 @@ +--- +title: "Quickstart for dbt Cloud and Amazon Athena" +id: "athena" +# time_to_complete: '30 minutes' commenting out until we test +level: 'Beginner' +icon: 'athena' +hide_table_of_contents: true +tags: ['Amazon','Athena', 'dbt Cloud','Quickstart'] +recently_updated: true +--- +
+ ## Introduction + In this quickstart guide, you'll learn how to use dbt Cloud with Amazon Athena. It will show you how to: + +- Create an S3 bucket for Athena query results. +- Create an Athena database. +- Access sample data in a public dataset. +- Connect dbt Cloud to Amazon Athena. +- Take a sample query and turn it into a model in your dbt project. A model in dbt is a select statement. +- Add tests to your models. +- Document your models. +- Schedule a job to run. + +:::tip Videos for you +You can check out [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) for free if you're interested in course learning with videos. +::: + +### Prerequisites + +- You have a [dbt Cloud account](https://www.getdbt.com/signup/). +- You have an [AWS account](https://aws.amazon.com/). +- You have set up [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/getting-started.html). + +### Related content + +- Learn more with [dbt Learn courses](https://learn.getdbt.com) +- [CI jobs](/docs/deploy/continuous-integration) +- [Deploy jobs](/docs/deploy/deploy-jobs) +- [Job notifications](/docs/deploy/job-notifications) +- [Source freshness](/docs/deploy/source-freshness) + +## Getting started + +For the following guide you can use an existing S3 bucket or [create a new one](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html). + +Download the following CSV files (the Jaffle Shop sample data) and upload them to your S3 bucket: +- [jaffle_shop_customers.csv](https://dbt-tutorial-public.s3-us-west-2.amazonaws.com/jaffle_shop_customers.csv) +- [jaffle_shop_orders.csv](https://dbt-tutorial-public.s3-us-west-2.amazonaws.com/jaffle_shop_orders.csv) +- [stripe_payments.csv](https://dbt-tutorial-public.s3-us-west-2.amazonaws.com/stripe_payments.csv) + + +## Configure Amazon Athena + +1. Log into your AWS account and navigate to the **Athena console**. + - If this is your first time in the Athena console (in your current AWS Region), click **Explore the query editor** to open the query editor. Otherwise, Athena opens automatically in the query editor. +1. Open **Settings** and find the **Location of query result** box. + 1. Enter the path of the S3 bucket (prefix it with `s3://`). + 2. Navigate to **Browse S3**, select the S3 bucket you created, and click **Choose**. +1. **Save** these settings. +1. In the **query editor**, create a database by running `create database YOUR_DATABASE_NAME`. +1. To make the database you created the one you `write` into, select it from the **Database** list on the left side menu. +1. Access the Jaffle Shop data in the S3 bucket using one of these options: + 1. Manually create the tables (a sample `create external table` sketch appears near the end of this guide). + 2. Create an AWS Glue crawler to recreate the data as external tables (recommended). +1. Once the tables have been created, you will be able to `SELECT` from them. + +## Set up security access to Athena + +To set up security access for Athena, determine which access method you want to use: +* Obtain `aws_access_key_id` and `aws_secret_access_key` (recommended) +* Obtain an **AWS credentials** file. + +### AWS access key (recommended) + +To obtain your `aws_access_key_id` and `aws_secret_access_key`: + +1. Open the **AWS Console**. +1. Click on your **username** near the top right and click **Security Credentials**. +1. Click on **Users** in the sidebar. +1. Click on your **username** (or the name of the user for whom to create the key). +1. Click on the **Security Credentials** tab. +1. Click **Create Access Key**. +1. Click **Show User Security Credentials** and save the `aws_access_key_id` and `aws_secret_access_key` for a future step.
### AWS credentials file + +To obtain your AWS credentials file: +1. Follow the instructions for [configuring the credentials file](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-files.html) using the AWS CLI +1. Locate the `~/.aws/credentials` file on your computer + 1. Windows: `%USERPROFILE%\.aws\credentials` + 2. Mac/Linux: `~/.aws/credentials` + +Retrieve the `aws_access_key_id` and `aws_secret_access_key` from the `~/.aws/credentials` file for a future step. + +## Configure the connection in dbt Cloud + +To configure the Athena connection in dbt Cloud: +1. Click your **account name** on the left-side menu and click **Account settings**. +1. Click **Connections** and click **New connection**. +1. Click **Athena** and fill out the required fields (and any optional fields). + 1. **AWS region name** — The AWS region of your environment. + 1. **Database (catalog)** — Enter the database name created in earlier steps (lowercase only). + 1. **AWS S3 staging directory** — Enter the S3 bucket created in earlier steps. +1. Click **Save** + +### Configure your environment + +To configure the Athena credentials in your environment: +1. Click **Deploy** on the left-side menu and click **Environments**. +1. Click **Create environment** and fill out the **General settings**. + - Your **dbt version** must be set to `Versionless` to use the Athena connection. +1. Select the Athena connection from the **Connection** dropdown. +1. Fill out the `aws_access_key_id` and `aws_secret_access_key` recorded in previous steps, as well as the `Schema` to write to. +1. Click **Test connection** and once it succeeds, **Save** the environment. + +Repeat the process to create a [development environment](https://docs.getdbt.com/docs/dbt-cloud-environments#types-of-environments). + +## Set up a dbt Cloud managed repository + + +## Initialize your dbt project and start developing + +Now that you have a repository configured, you can initialize your project and start development in dbt Cloud: + +1. Click **Start developing in the IDE**. It might take a few minutes for your project to spin up for the first time as it establishes your git connection, clones your repo, and tests the connection to the warehouse. +2. Above the file tree to the left, click **Initialize dbt project**. This builds out your folder structure with example models. +3. Make your initial commit by clicking **Commit and sync**. Use the commit message `initial commit` and click **Commit**. This creates the first commit to your managed repo and allows you to open a branch where you can add new dbt code. +4. You can now directly query data from your warehouse and execute `dbt run`. You can try this out now: + - Click **+ Create new file**, add this query to the new file, and click **Save as** to save the new file: + ```sql + select * from jaffle_shop.customers + ``` + - In the command line bar at the bottom, enter `dbt run` and click **Enter**. You should see a `dbt run succeeded` message. + +## Build your first model + +You have two options for working with files in the dbt Cloud IDE: + +- Create a new branch (recommended) — Create a new branch to edit and commit your changes. Navigate to **Version Control** on the left sidebar and click **Create branch**. - Edit in the protected primary branch — If you prefer to edit, format, or lint files and execute dbt commands directly in your primary git branch.
The dbt Cloud IDE prevents commits to the protected branch, so you will be prompted to commit your changes to a new branch. + +Name the new branch `add-customers-model`. + +1. Click the **...** next to the `models` directory, then select **Create file**. +2. Name the file `customers.sql`, then click **Create**. +3. Copy the following query into the file and click **Save**. + +```sql +with customers as ( + + select + id as customer_id, + first_name, + last_name + + from jaffle_shop.customers + +), + +orders as ( + + select + id as order_id, + user_id as customer_id, + order_date, + status + + from jaffle_shop.orders + +), + +customer_orders as ( + + select + customer_id, + + min(order_date) as first_order_date, + max(order_date) as most_recent_order_date, + count(order_id) as number_of_orders + + from orders + + group by 1 + +), + +final as ( + + select + customers.customer_id, + customers.first_name, + customers.last_name, + customer_orders.first_order_date, + customer_orders.most_recent_order_date, + coalesce(customer_orders.number_of_orders, 0) as number_of_orders + + from customers + + left join customer_orders using (customer_id) + +) + +select * from final +``` + +4. Enter `dbt run` in the command prompt at the bottom of the screen. You should get a successful run and see the three models. + +Later, you can connect your business intelligence (BI) tools to these views and tables so they only read cleaned up data rather than raw data in your BI tool. + +#### FAQs + + + + + + + +## Change the way your model is materialized + + + +## Delete the example models + + + +## Build models on top of other models + + + +1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in our original query. +2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in our original query. + + + + ```sql + select + id as customer_id, + first_name, + last_name + + from jaffle_shop.customers + ``` + + + + + + ```sql + select + id as order_id, + user_id as customer_id, + order_date, + status + + from jaffle_shop.orders + ``` + + + +3. Edit the SQL in your `models/customers.sql` file as follows: + + + + ```sql + with customers as ( + + select * from {{ ref('stg_customers') }} + + ), + + orders as ( + + select * from {{ ref('stg_orders') }} + + ), + + customer_orders as ( + + select + customer_id, + + min(order_date) as first_order_date, + max(order_date) as most_recent_order_date, + count(order_id) as number_of_orders + + from orders + + group by 1 + + ), + + final as ( + + select + customers.customer_id, + customers.first_name, + customers.last_name, + customer_orders.first_order_date, + customer_orders.most_recent_order_date, + coalesce(customer_orders.number_of_orders, 0) as number_of_orders + + from customers + + left join customer_orders using (customer_id) + + ) + + select * from final + + ``` + + + +4. Execute `dbt run`. + + This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies. + + +#### FAQs {#faq-2} + + + + + +
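As mentioned in the "Configure Amazon Athena" step earlier in this guide, one option is to manually create the tables over the uploaded CSVs. Here is a hedged sketch of that DDL for the customers file, assuming your Athena database is named `jaffle_shop`, the standard Jaffle Shop column layout, and that each CSV sits in its own S3 folder (the bucket path is a placeholder):

```sql
create external table jaffle_shop.customers (
    id int,
    first_name string,
    last_name string
)
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
-- Athena requires a folder (not a single file) as the location
location 's3://YOUR_BUCKET/jaffle_shop_customers/'
tblproperties ('skip.header.line.count' = '1');
```

You'd repeat the same pattern for the orders and payments files, adjusting columns and locations accordingly.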
+ + + + diff --git a/website/docs/guides/dbt-python-snowpark.md b/website/docs/guides/dbt-python-snowpark.md index 2e74c9722d8..091f1006992 100644 --- a/website/docs/guides/dbt-python-snowpark.md +++ b/website/docs/guides/dbt-python-snowpark.md @@ -286,7 +286,7 @@ We need to obtain our data source by copying our Formula 1 data into Snowflake t ## Change development schema name and navigate the IDE -1. First we are going to change the name of our default schema to where our dbt models will build. By default, the name is `dbt_`. We will change this to `dbt_` to create your own personal development schema. To do this, select **Profile Settings** from the gear icon in the upper right. +1. First we are going to change the name of our default schema to where our dbt models will build. By default, the name is `dbt_`. We will change this to `dbt_` to create your own personal development schema. To do this, click on your account name in the left side menu and select **Account settings**. diff --git a/website/docs/guides/manual-install-qs.md b/website/docs/guides/manual-install-qs.md index 2e10cdac07c..816a9bd07ee 100644 --- a/website/docs/guides/manual-install-qs.md +++ b/website/docs/guides/manual-install-qs.md @@ -36,7 +36,7 @@ The following steps use [GitHub](https://github.com/) as the Git provider for th 2. Select **Public** so the repository can be shared with others. You can always make it private later. 3. Leave the default values for all other settings. 4. Click **Create repository**. -5. Save the commands from "…or create a new repository on the command line" to use later in [Commit your changes](#commit-your-changes). +5. Save the commands from "…or create a new repository on the command line" to use later in [Commit your changes](https://docs.getdbt.com/guides/manual-install?step=6). ## Create a project @@ -162,7 +162,7 @@ You should have an output that looks like this: Commit your changes so that the repository contains the latest code. -1. Link the GitHub repository you created to your dbt project by running the following commands in Terminal. Make sure you use the correct git URL for your repository, which you should have saved from step 5 in [Create a repository](#create-a-repository). +1. Link the GitHub repository you created to your dbt project by running the following commands in Terminal. Make sure you use the correct git URL for your repository, which you should have saved from step 5 in [Create a repository](https://docs.getdbt.com/guides/manual-install?step=2). ```shell git init diff --git a/website/docs/reference/data-test-configs.md b/website/docs/reference/data-test-configs.md index e7adc266b07..0044a707db1 100644 --- a/website/docs/reference/data-test-configs.md +++ b/website/docs/reference/data-test-configs.md @@ -275,3 +275,24 @@ tests: ``` + +#### Specify custom configurations for generic data tests + +Beginning in dbt v1.9, you can use any custom config key to specify custom configurations for data tests. For example, the following specifies the `snowflake_warehouse` custom config that dbt should use when executing the `accepted_values` data test: + +```yml + +models: + - name: my_model + columns: + - name: color + tests: + - accepted_values: + values: ['blue', 'red'] + config: + severity: warn + snowflake_warehouse: my_warehouse + +``` + +Given this config, the data test runs on a different Snowflake virtual warehouse than the one in your default connection. This can enable better price-performance with a different warehouse size, or more granular cost allocation and visibility.
diff --git a/website/docs/reference/dbt-jinja-functions/target.md b/website/docs/reference/dbt-jinja-functions/target.md index 968f64d0f8d..d91749277ac 100644 --- a/website/docs/reference/dbt-jinja-functions/target.md +++ b/website/docs/reference/dbt-jinja-functions/target.md @@ -10,7 +10,7 @@ The `target` variable contains information about your connection to the warehous - **dbt Core:** These values are based on the target defined in your [profiles.yml](/docs/core/connect-data-platform/profiles.yml) file. Please note that for certain adapters, additional configuration steps may be required. Refer to the [set up page](/docs/core/connect-data-platform/about-core-connections) for your data platform. - **dbt Cloud:** To learn more about setting up your adapter in dbt Cloud, refer to [About data platform connections](/docs/cloud/connect-data-platform/about-connections). - **[dbt Cloud Scheduler](/docs/deploy/job-scheduler)**: `target.name` is defined per job as described in [Custom target names](/docs/build/custom-target-names). For other attributes, values are defined by the deployment connection. To check these values, click **Deploy** and select **Environments**. Then, select the relevant deployment environment, and click **Settings**. - - **[dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud)**: These values are defined by your connection and credentials. To edit these values, click the gear icon in the top right, select **Profile settings**, and click **Credentials**. Select and edit a project to set up the credentials and target name. + - **[dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud)**: These values are defined by your connection and credentials. To edit these values, click on your account name in the left side menu and select **Account settings**. Then, click **Credentials**. Select and edit a project to set up the credentials and target name. Some configurations are shared between all adapters, while others are adapter-specific.
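To make the `target` variable concrete, a common pattern is to branch on `target.name` inside a model so that non-production runs process less data. This is a hedged sketch: the target name `prod`, the `jaffle_shop.orders` relation, and the `ordered_at` column are assumptions, and `dateadd` syntax varies by warehouse:

```sql
select * from jaffle_shop.orders

{% if target.name != 'prod' %}
-- In development or CI targets, limit to the most recent week of data
where ordered_at >= dateadd('day', -7, current_date)
{% endif %}
```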
diff --git a/website/docs/reference/global-configs/about-global-configs.md b/website/docs/reference/global-configs/about-global-configs.md index 64d56d002fe..435a86d84ba 100644 --- a/website/docs/reference/global-configs/about-global-configs.md +++ b/website/docs/reference/global-configs/about-global-configs.md @@ -95,5 +95,5 @@ Because the values of `flags` can differ across invocations, we strongly advise | [use_experimental_parser](/reference/global-configs/parsing#experimental-parser) | boolean | False | ✅ | `DBT_USE_EXPERIMENTAL_PARSER` | `--use-experimental-parser`, `--no-use-experimental-parser` | ❌ | | [version_check](/reference/global-configs/version-compatibility) | boolean | varies | ✅ | `DBT_VERSION_CHECK` | `--version-check`, `--no-version-check` | ❌ | | [warn_error_options](/reference/global-configs/warnings) | dict | {} | ✅ | `DBT_WARN_ERROR_OPTIONS` | `--warn-error-options` | ✅ | -| [warn_error](/reference/global-configs/warnings) | boolean | False | ✅ | `DBT_WARN_ERROR` | `--warn-error`, `--no-warn-error` | ✅ | +| [warn_error](/reference/global-configs/warnings) | boolean | False | ✅ | `DBT_WARN_ERROR` | `--warn-error` | ✅ | | [write_json](/reference/global-configs/json-artifacts) | boolean | True | ✅ | `DBT_WRITE_JSON` | `--write-json`, `--no-write-json` | ✅ | diff --git a/website/docs/reference/global-configs/behavior-changes.md b/website/docs/reference/global-configs/behavior-changes.md index b3162c6478f..7b69ac44084 100644 --- a/website/docs/reference/global-configs/behavior-changes.md +++ b/website/docs/reference/global-configs/behavior-changes.md @@ -69,8 +69,8 @@ When we use dbt Cloud in the following table, we're referring to accounts that h | Flag | dbt Cloud: Intro | dbt Cloud: Maturity | dbt Core: Intro | dbt Core: Maturity | |-----------------------------------------------------------------|------------------|---------------------|-----------------|--------------------| | [require_explicit_package_overrides_for_builtin_materializations](#package-override-for-built-in-materialization) | 2024.04 | 2024.06 | 1.6.14, 1.7.14 | 1.8.0 | -| [require_resource_names_without_spaces](#no-spaces-in-resource-names) | 2024.05 | TBD* | 1.8.0 | 1.9.0 | -| [source_freshness_run_project_hooks](#project-hooks-with-source-freshness) | 2024.03 | TBD* | 1.8.0 | 1.9.0 | +| [require_resource_names_without_spaces](#no-spaces-in-resource-names) | 2024.05 | TBD* | 1.8.0 | 1.10.0 | +| [source_freshness_run_project_hooks](#project-hooks-with-source-freshness) | 2024.03 | TBD* | 1.8.0 | 1.10.0 | | [Redshift] [restrict_direct_pg_catalog_access](/reference/global-configs/redshift-changes#the-restrict_direct_pg_catalog_access-flag) | 2024.09 | TBD* | dbt-redshift v1.9.0 | 1.9.0 | | [skip_nodes_if_on_run_start_fails](#failures-in-on-run-start-hooks) | 2024.10 | TBD* | 1.9.0 | TBD* | | [state_modified_compare_more_unrendered_values](#source-definitions-for-state) | 2024.10 | TBD* | 1.9.0 | TBD* | diff --git a/website/docs/reference/model-configs.md b/website/docs/reference/model-configs.md index 65133dcb25a..9508cf68ceb 100644 --- a/website/docs/reference/model-configs.md +++ b/website/docs/reference/model-configs.md @@ -104,6 +104,8 @@ models: + + ```yaml models: [](/reference/resource-configs/resource-path): @@ -121,7 +123,29 @@ models: [+](/reference/resource-configs/plus-prefix)[contract](/reference/resource-configs/contract): {} ``` + + + + +```yaml +models: + [](/reference/resource-configs/resource-path): + 
[+](/reference/resource-configs/plus-prefix)[enabled](/reference/resource-configs/enabled): true | false [+](/reference/resource-configs/plus-prefix)[tags](/reference/resource-configs/tags): | [] [+](/reference/resource-configs/plus-prefix)[pre-hook](/reference/resource-configs/pre-hook-post-hook): | [] [+](/reference/resource-configs/plus-prefix)[post-hook](/reference/resource-configs/pre-hook-post-hook): | [] [+](/reference/resource-configs/plus-prefix)[database](/reference/resource-configs/database): [+](/reference/resource-configs/plus-prefix)[schema](/reference/resource-properties/schema): [+](/reference/resource-configs/plus-prefix)[alias](/reference/resource-configs/alias): [+](/reference/resource-configs/plus-prefix)[persist_docs](/reference/resource-configs/persist_docs): [+](/reference/resource-configs/plus-prefix)[full_refresh](/reference/resource-configs/full_refresh): [+](/reference/resource-configs/plus-prefix)[meta](/reference/resource-configs/meta): {} [+](/reference/resource-configs/plus-prefix)[grants](/reference/resource-configs/grants): {} [+](/reference/resource-configs/plus-prefix)[contract](/reference/resource-configs/contract): {} [+](/reference/resource-configs/plus-prefix)[event_time](/reference/resource-configs/event-time): my_time_field ``` @@ -131,6 +155,8 @@ models: + + ```yaml version: 2 @@ -150,17 +176,63 @@ models: [grants](/reference/resource-configs/grants): {} [contract](/reference/resource-configs/contract): {} ``` + - + - +```yaml +version: 2 +models: + - name: [] + config: + [enabled](/reference/resource-configs/enabled): true | false + [tags](/reference/resource-configs/tags): | [] + [pre_hook](/reference/resource-configs/pre-hook-post-hook): | [] + [post_hook](/reference/resource-configs/pre-hook-post-hook): | [] + [database](/reference/resource-configs/database): + [schema](/reference/resource-properties/schema): + [alias](/reference/resource-configs/alias): + [persist_docs](/reference/resource-configs/persist_docs): + [full_refresh](/reference/resource-configs/full_refresh): + [meta](/reference/resource-configs/meta): {} + [grants](/reference/resource-configs/grants): {} + [contract](/reference/resource-configs/contract): {} + [event_time](/reference/resource-configs/event-time): my_time_field +``` + + + + + + +```jinja + +{{ config( + [enabled](/reference/resource-configs/enabled)=true | false, + [tags](/reference/resource-configs/tags)="" | [""], + [pre_hook](/reference/resource-configs/pre-hook-post-hook)="" | [""], + [post_hook](/reference/resource-configs/pre-hook-post-hook)="" | [""], + [database](/reference/resource-configs/database)="", + [schema](/reference/resource-properties/schema)="", + [alias](/reference/resource-configs/alias)="", + [persist_docs](/reference/resource-configs/persist_docs)={}, + [meta](/reference/resource-configs/meta)={}, + [grants](/reference/resource-configs/grants)={}, + [contract](/reference/resource-configs/contract)={} +) }} + +``` + + + + ```jinja {{ config( @@ -175,9 +247,11 @@ models: [meta](/reference/resource-configs/meta)={}, [grants](/reference/resource-configs/grants)={}, [contract](/reference/resource-configs/contract)={}, + [event_time](/reference/resource-configs/event-time)="my_time_field" ) }} ``` + diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md index f807b1c0d88..c77f3494aa7 100644 ---
@@ -51,7 +51,7 @@ We do not yet have a PySpark API to set tblproperties at table creation, so this
-dbt Core v.9 and Versionless dbt Clouyd support for `table_format: iceberg`, in addition to all previous table configurations supported in 1.8.
+dbt Core v1.9 and Versionless dbt Cloud support `table_format: iceberg`, in addition to all previous table configurations supported in 1.8.
 | Option | Description | Required? | Model Support | Example |
 |---------------------|-----------------------------|-------------------------------------------|-----------------|--------------------------|
@@ -1031,7 +1031,7 @@ The following table summarizes our configuration support:
     partition_by='id',
     schedule = {
       'cron': '0 0 * * * ? *',
-      'time_zone': 'Etc/UTC'
+      'time_zone_value': 'Etc/UTC'
     },
     tblproperties={
       'key': 'value'
diff --git a/website/docs/reference/resource-configs/dbt_valid_to_current.md b/website/docs/reference/resource-configs/dbt_valid_to_current.md
new file mode 100644
index 00000000000..7c0e33aa5d7
--- /dev/null
+++ b/website/docs/reference/resource-configs/dbt_valid_to_current.md
@@ -0,0 +1,116 @@
+---
+resource_types: [snapshots]
+description: "Use the `dbt_valid_to_current` config to set a custom indicator for the value of `dbt_valid_to` in current snapshot records"
+datatype: "{}"
+default_value: {NULL}
+id: "dbt_valid_to_current"
+---
+
+Available from dbt v1.9 or with [Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) dbt Cloud.
+
+
+
+```yaml
+snapshots:
+  my_project:
+    +dbt_valid_to_current: "to_date('9999-12-31')"
+
+```
+
+
+
+
+
+```sql
+{{
+    config(
+        unique_key='id',
+        strategy='timestamp',
+        updated_at='updated_at',
+        dbt_valid_to_current="to_date('9999-12-31')"
+    )
+}}
+```
+
+
+
+
+
+```yml
+snapshots:
+  [](/reference/resource-configs/resource-path):
+    +dbt_valid_to_current: "to_date('9999-12-31')"
+```
+
+
+
+## Description
+
+Use the `dbt_valid_to_current` config to set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date). By default, this value is `NULL`. When set, dbt will use this specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table.
+
+This approach makes it easier to assign a custom future date, join on the column, or perform range-based filtering that requires a concrete end date.
+
+:::warning
+
+To avoid any unintentional data modification, dbt will _not_ automatically adjust the current value in the existing `dbt_valid_to` column. Existing current records will still have `dbt_valid_to` set to `NULL`.
+
+Any new records inserted _after_ applying the `dbt_valid_to_current` configuration will have `dbt_valid_to` set to the specified value (like '9999-12-31'), instead of the default `NULL` value.
+
+:::
+
+### Considerations
+
+- **Date expressions** — Provide a hardcoded date expression compatible with your data platform, such as `to_date('9999-12-31')`. Note that syntax may vary by warehouse (for example, `to_date('YYYY-MM-DD')` or `date(YYYY, MM, DD)`).
+
+- **Jinja limitation** — `dbt_valid_to_current` only accepts static SQL expressions. Jinja expressions (like `{{ var('my_future_date') }}`) are not supported.
+
+- **Deferral and `state:modified`** — Changes to `dbt_valid_to_current` are compatible with deferral and `--select state:modified`. When this configuration changes, it'll appear in `state:modified` selections and raise a warning prompting you to make the necessary snapshot updates manually.
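+
+Since current records can carry either `NULL` (rows snapshotted before the config was set) or the configured value (rows snapshotted after), downstream queries should tolerate both. Here's a minimal sketch, assuming a snapshot built to a hypothetical table `analytics.orders_snapshot` and the `to_date('9999-12-31')` value shown above:
+
+```sql
+-- Current records: match both the NULL default and the configured end date
+select *
+from analytics.orders_snapshot
+where dbt_valid_to is null
+   or dbt_valid_to = to_date('9999-12-31');
+
+-- Optional one-time backfill so pre-existing current records use the new value
+update analytics.orders_snapshot
+set dbt_valid_to = to_date('9999-12-31')
+where dbt_valid_to is null;
+```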
+
+## Default
+
+By default, `dbt_valid_to` is set to `NULL` for current (most recent) records in your snapshot table. This means that these records are still valid and have no defined end date.
+
+If you prefer to use a specific value instead of `NULL` for `dbt_valid_to` in current and future records, you can use the `dbt_valid_to_current` configuration option. For example, you can set a date in the far future, such as `9999-12-31`.
+
+The value assigned to `dbt_valid_to_current` should be a string representing a valid date or timestamp, depending on your database's requirements. Use an expression that your data platform supports.
+
+
+## Impact on snapshot records
+
+When you set `dbt_valid_to_current`, it affects how dbt manages the `dbt_valid_to` column in your snapshot table:
+
+- **For existing records** — To avoid any unintentional data modification, dbt will _not_ automatically adjust the current value in the existing `dbt_valid_to` column. Existing current records will still have `dbt_valid_to` set to `NULL`.
+
+- **For new records** — Any new records inserted after applying the `dbt_valid_to_current` configuration will have `dbt_valid_to` set to the specified value (for example, '9999-12-31'), instead of `NULL`.
+
+This means your snapshot table will have current records with `dbt_valid_to` values of both `NULL` (from existing data) and the new specified value (from new data). If you'd rather have consistent `dbt_valid_to` values for current records, you can manually update existing records in your snapshot table (where `dbt_valid_to` is `NULL`) to match your `dbt_valid_to_current` value.
+
+## Example
+
+
+
+```yaml
+snapshots:
+  - name: my_snapshot
+    config:
+      strategy: timestamp
+      updated_at: updated_at
+      dbt_valid_to_current: "to_date('9999-12-31')"
+    columns:
+      - name: dbt_valid_from
+        description: The timestamp when the record became valid.
+      - name: dbt_valid_to
+        description: >
+          The timestamp when the record ceased to be valid. For current records,
+          this is either `NULL` or the value specified in `dbt_valid_to_current`
+          (like `'9999-12-31'`).
+```
+
+
+
+The resulting snapshot table contains the configured `dbt_valid_to` column value:
+
+| id | dbt_scd_id           | dbt_updated_at       | dbt_valid_from       | dbt_valid_to         |
+| -- | -------------------- | -------------------- | -------------------- | -------------------- |
+| 1  | 60a1f1dbdf899a4dd... | 2024-10-02 ...       | 2024-10-02 ...       | 9999-12-31 ...       |
+| 2  | b1885d098f8bcff51... | 2024-10-02 ...       | 2024-10-02 ...       | 9999-12-31 ...       |
diff --git a/website/docs/reference/resource-configs/event-time.md b/website/docs/reference/resource-configs/event-time.md
new file mode 100644
index 00000000000..d8c0c0e0472
--- /dev/null
+++ b/website/docs/reference/resource-configs/event-time.md
@@ -0,0 +1,284 @@
+---
+title: "event_time"
+id: "event-time"
+sidebar_label: "event_time"
+resource_types: [models, seeds, source]
+description: "dbt uses event_time to understand when an event occurred. When defined, event_time enables microbatch incremental models and more refined comparison of datasets during Advanced CI."
+datatype: string
+---
+
+Available in dbt Cloud Versionless and dbt Core v1.9 and higher.
+
+
+
+
+
+
+```yml
+models:
+  [resource-path:](/reference/resource-configs/resource-path)
+    +event_time: my_time_field
+```
+
+
+
+
+
+```yml
+models:
+  - name: model_name
+    [config](/reference/resource-properties/config):
+      event_time: my_time_field
+```
+
+
+
+```sql
+{{ config(
+    event_time='my_time_field'
+) }}
+```
+
+
+
+
+
+
+
+
+
+```yml
+seeds:
+  [resource-path:](/reference/resource-configs/resource-path)
+    +event_time: my_time_field
+```
+
+
+
+
+```yml
+seeds:
+  - name: seed_name
+    [config](/reference/resource-properties/config):
+      event_time: my_time_field
+```
+
+
+
+
+
+
+
+
+```yml
+snapshots:
+  [resource-path:](/reference/resource-configs/resource-path)
+    +event_time: my_time_field
+```
+
+
+
+
+
+```yml
+snapshots:
+  - name: snapshot_name
+    [config](/reference/resource-properties/config):
+      event_time: my_time_field
+```
+
+
+
+
+
+
+
+```sql
+
+{{ config(
+    event_time='my_time_field'
+) }}
+```
+
+
+
+
+import SnapshotYaml from '/snippets/_snapshot-yaml-spec.md';
+
+
+
+
+
+
+
+
+
+
+
+
+```yml
+sources:
+  [resource-path:](/reference/resource-configs/resource-path)
+    +event_time: my_time_field
+```
+
+
+
+
+```yml
+sources:
+  - name: source_name
+    [config](/reference/resource-properties/config):
+      event_time: my_time_field
+```
+
+
+
+
+
+## Definition
+
+Set the `event_time` to the name of the field that represents the timestamp of the event -- "at what time did the row occur" -- as opposed to an event ingestion date. You can configure `event_time` for a [model](/docs/build/models), [seed](/docs/build/seeds), or [source](/docs/build/sources) in your `dbt_project.yml` file, property YAML file, or config block.
+
+Here are some examples of good and bad `event_time` columns:
+
+- ✅ Good:
+  - `account_created_at` — This represents the specific time when an account was created, making it a fixed event in time.
+  - `session_began_at` — This captures the exact timestamp when a user session started, which won’t change and directly ties to the event.
+
+- ❌ Bad:
+
+  - `_fivetran_synced` — This isn't the time that the event happened, it's the time that the event was ingested.
+  - `last_updated_at` — This isn't a good use case as this will keep changing over time.
+
+`event_time` is required for [Incremental microbatch](/docs/build/incremental-microbatch) and highly recommended for [Advanced CI's compare changes](/docs/deploy/advanced-ci#optimizing-comparisons) in CI/CD workflows, where it ensures the same time-slice of data is correctly compared between your CI and production environments.
+
+## Examples
+
+
+
+
+
+Here's an example in the `dbt_project.yml` file:
+
+
+
+```yml
+models:
+  my_project:
+    user_sessions:
+      +event_time: session_start_time
+```
+
+
+Example in a properties YAML file:
+
+
+
+```yml
+models:
+  - name: user_sessions
+    config:
+      event_time: session_start_time
+```
+
+
+
+Example in a SQL model config block:
+
+
+
+```sql
+{{ config(
+    event_time='session_start_time'
+) }}
+```
+
+
+
+This setup sets `session_start_time` as the `event_time` for the `user_sessions` model.
+
+
+
+
+Here's an example in the `dbt_project.yml` file:
+
+
+
+```yml
+seeds:
+  my_project:
+    my_seed:
+      +event_time: record_timestamp
+```
+
+
+
+Example in a seed properties YAML:
+
+
+
+```yml
+seeds:
+  - name: my_seed
+    config:
+      event_time: record_timestamp
+```
+
+
+This setup sets `record_timestamp` as the `event_time` for `my_seed`.
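+
+To make this concrete, here's a rough sketch of the kind of time-slice filter that `event_time` enables for microbatch runs and CI comparisons. It's illustrative only (not the literal SQL dbt generates), and it assumes the `user_sessions` model from the example above builds to a hypothetical table `analytics.user_sessions`:
+
+```sql
+-- One daily batch: keep only rows whose event_time falls inside the slice
+select *
+from analytics.user_sessions
+where session_start_time >= '2024-10-01'
+  and session_start_time < '2024-10-02';
+```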
+
+
+
+
+
+Here's an example in the `dbt_project.yml` file:
+
+
+
+```yml
+snapshots:
+  my_project:
+    my_snapshot:
+      +event_time: record_timestamp
+```
+
+
+
+Example in a snapshot properties YAML:
+
+
+
+```yml
+snapshots:
+  - name: my_snapshot
+    config:
+      event_time: record_timestamp
+```
+
+
+This setup sets `record_timestamp` as the `event_time` for `my_snapshot`.
+
+
+
+
+
+Here's an example in a source properties YAML file:
+
+
+
+```yml
+sources:
+  - name: source_name
+    tables:
+      - name: table_name
+        config:
+          event_time: event_timestamp
+```
+
+
+This setup sets `event_timestamp` as the `event_time` for the specified source table.
+
+
+
diff --git a/website/docs/reference/seed-configs.md b/website/docs/reference/seed-configs.md
index 5d5c39071d6..a18f1fc28f7 100644
--- a/website/docs/reference/seed-configs.md
+++ b/website/docs/reference/seed-configs.md
@@ -79,6 +79,8 @@ seeds:
+
+
 ```yaml
 seeds:
   [](/reference/resource-configs/resource-path):
@@ -95,7 +97,28 @@ seeds:
     [+](/reference/resource-configs/plus-prefix)[grants](/reference/resource-configs/grants): {}
 ```
+
+
+
+
+```yaml
+seeds:
+  [](/reference/resource-configs/resource-path):
+    [+](/reference/resource-configs/plus-prefix)[enabled](/reference/resource-configs/enabled): true | false
+    [+](/reference/resource-configs/plus-prefix)[tags](/reference/resource-configs/tags): | []
+    [+](/reference/resource-configs/plus-prefix)[pre-hook](/reference/resource-configs/pre-hook-post-hook): | []
+    [+](/reference/resource-configs/plus-prefix)[post-hook](/reference/resource-configs/pre-hook-post-hook): | []
+    [+](/reference/resource-configs/plus-prefix)[database](/reference/resource-configs/database):
+    [+](/reference/resource-configs/plus-prefix)[schema](/reference/resource-properties/schema):
+    [+](/reference/resource-configs/plus-prefix)[alias](/reference/resource-configs/alias):
+    [+](/reference/resource-configs/plus-prefix)[persist_docs](/reference/resource-configs/persist_docs):
+    [+](/reference/resource-configs/plus-prefix)[full_refresh](/reference/resource-configs/full_refresh):
+    [+](/reference/resource-configs/plus-prefix)[meta](/reference/resource-configs/meta): {}
+    [+](/reference/resource-configs/plus-prefix)[grants](/reference/resource-configs/grants): {}
+    [+](/reference/resource-configs/plus-prefix)[event_time](/reference/resource-configs/event-time): my_time_field
+```
+
@@ -105,6 +128,8 @@ seeds:
+
+
 ```yaml
 version: 2
@@ -122,13 +147,36 @@ seeds:
      [full_refresh](/reference/resource-configs/full_refresh):
      [meta](/reference/resource-configs/meta): {}
      [grants](/reference/resource-configs/grants): {}
+      [event_time](/reference/resource-configs/event-time): my_time_field
+
+```
+
+
+
+```yaml
+version: 2
+seeds:
+  - name: []
+    config:
+      [enabled](/reference/resource-configs/enabled): true | false
+      [tags](/reference/resource-configs/tags): | []
+      [pre_hook](/reference/resource-configs/pre-hook-post-hook): | []
+      [post_hook](/reference/resource-configs/pre-hook-post-hook): | []
+      [database](/reference/resource-configs/database):
+      [schema](/reference/resource-properties/schema):
+      [alias](/reference/resource-configs/alias):
+      [persist_docs](/reference/resource-configs/persist_docs):
+      [full_refresh](/reference/resource-configs/full_refresh):
+      [meta](/reference/resource-configs/meta): {}
+      [grants](/reference/resource-configs/grants): {}
 ```
+
-
 ## Configuring seeds
diff --git a/website/docs/reference/snapshot-configs.md b/website/docs/reference/snapshot-configs.md
index 144ecafde9d..7b3c0f8e5b1 100644
--- a/website/docs/reference/snapshot-configs.md
+++ b/website/docs/reference/snapshot-configs.md
@@ -168,6 +168,24 @@ Configurations can be applied to snapshots using the [YAML syntax](/docs/build/s
+
+
+```yaml
+snapshots:
+  [](/reference/resource-configs/resource-path):
+    [+](/reference/resource-configs/plus-prefix)[enabled](/reference/resource-configs/enabled): true | false
+    [+](/reference/resource-configs/plus-prefix)[tags](/reference/resource-configs/tags): | []
+    [+](/reference/resource-configs/plus-prefix)[alias](/reference/resource-configs/alias):
+    [+](/reference/resource-configs/plus-prefix)[pre-hook](/reference/resource-configs/pre-hook-post-hook): | []
+    [+](/reference/resource-configs/plus-prefix)[post-hook](/reference/resource-configs/pre-hook-post-hook): | []
+    [+](/reference/resource-configs/plus-prefix)[persist_docs](/reference/resource-configs/persist_docs): {}
+    [+](/reference/resource-configs/plus-prefix)[grants](/reference/resource-configs/grants): {}
+    [+](/reference/resource-configs/plus-prefix)[event_time](/reference/resource-configs/event-time): my_time_field
+```
+
+
+
 ```yaml
 snapshots:
   [](/reference/resource-configs/resource-path):
@@ -179,6 +197,7 @@ snapshots:
     [+](/reference/resource-configs/plus-prefix)[persist_docs](/reference/resource-configs/persist_docs): {}
     [+](/reference/resource-configs/plus-prefix)[grants](/reference/resource-configs/grants): {}
 ```
+
@@ -198,8 +217,8 @@ snapshots:
       [enabled](/reference/resource-configs/enabled): true | false
       [tags](/reference/resource-configs/tags): | []
       [alias](/reference/resource-configs/alias):
-      [pre-hook](/reference/resource-configs/pre-hook-post-hook): | []
-      [post-hook](/reference/resource-configs/pre-hook-post-hook): | []
+      [pre_hook](/reference/resource-configs/pre-hook-post-hook): | []
+      [post_hook](/reference/resource-configs/pre-hook-post-hook): | []
       [persist_docs](/reference/resource-configs/persist_docs): {}
       [grants](/reference/resource-configs/grants): {}
 ```
@@ -221,10 +240,11 @@ snapshots:
       [enabled](/reference/resource-configs/enabled): true | false
       [tags](/reference/resource-configs/tags): | []
       [alias](/reference/resource-configs/alias):
-      [pre-hook](/reference/resource-configs/pre-hook-post-hook): | []
-      [post-hook](/reference/resource-configs/pre-hook-post-hook): | []
+      [pre_hook](/reference/resource-configs/pre-hook-post-hook): | []
+      [post_hook](/reference/resource-configs/pre-hook-post-hook): | []
       [persist_docs](/reference/resource-configs/persist_docs): {}
       [grants](/reference/resource-configs/grants): {}
+      [event_time](/reference/resource-configs/event-time): my_time_field
 ```
@@ -292,7 +312,6 @@ The following examples demonstrate how to configure snapshots using the `dbt_pro
 ```yml
-
 snapshots:
   +unique_key: id
 ```
@@ -307,7 +326,6 @@ The following examples demonstrate how to configure snapshots using the `dbt_pro
 ```yml
-
 snapshots:
   jaffle_shop:
     +unique_key: id
diff --git a/website/docs/reference/source-configs.md b/website/docs/reference/source-configs.md
index 64dda8bffde..c5264e82fc7 100644
--- a/website/docs/reference/source-configs.md
+++ b/website/docs/reference/source-configs.md
@@ -8,7 +8,17 @@ import ConfigGeneral from '/snippets/_config-description-general.md';
 
 ## Available configurations
 
-Sources only support one configuration, [`enabled`](/reference/resource-configs/enabled).
+
+
+Sources support [`enabled`](/reference/resource-configs/enabled) and [`meta`](/reference/resource-configs/meta).
+
+
+
+
+Source configurations support [`enabled`](/reference/resource-configs/enabled), [`event_time`](/reference/resource-configs/event-time), and [`meta`](/reference/resource-configs/meta).
+
+
 ### General configurations
@@ -27,12 +37,29 @@ Sources only support one configuration, [`enabled`](/reference/resource-configs/
+
+
 ```yaml
 sources:
   [](/reference/resource-configs/resource-path):
     [+](/reference/resource-configs/plus-prefix)[enabled](/reference/resource-configs/enabled): true | false
+    [+](/reference/resource-configs/plus-prefix)[event_time](/reference/resource-configs/event-time): my_time_field
+    [+](/reference/resource-configs/plus-prefix)[meta](/reference/resource-configs/meta):
+      key: value
 ```
+
+
+
+
+```yaml
+sources:
+  [](/reference/resource-configs/resource-path):
+    [+](/reference/resource-configs/plus-prefix)[enabled](/reference/resource-configs/enabled): true | false
+    [+](/reference/resource-configs/plus-prefix)[meta](/reference/resource-configs/meta):
+      key: value
+```
+
@@ -43,6 +70,8 @@ sources:
+
+
 ```yaml
 version: 2
@@ -50,12 +79,37 @@ sources:
   - name: []
     [config](/reference/resource-properties/config):
       [enabled](/reference/resource-configs/enabled): true | false
+      [event_time](/reference/resource-configs/event-time): my_time_field
+      [meta](/reference/resource-configs/meta): {}
+
   tables:
     - name: []
       [config](/reference/resource-properties/config):
         [enabled](/reference/resource-configs/enabled): true | false
+        [event_time](/reference/resource-configs/event-time): my_time_field
+        [meta](/reference/resource-configs/meta): {}
 ```
+
+
+
+
+```yaml
+version: 2
+
+sources:
+  - name: []
+    [config](/reference/resource-properties/config):
+      [enabled](/reference/resource-configs/enabled): true | false
+      [meta](/reference/resource-configs/meta): {}
+    tables:
+      - name: []
+        [config](/reference/resource-properties/config):
+          [enabled](/reference/resource-configs/enabled): true | false
+          [meta](/reference/resource-configs/meta): {}
+
+```
+
@@ -74,6 +128,8 @@ You can disable sources imported from a package to prevent them from rendering i
+
+
 ```yaml
 sources:
   your_project_name:
@@ -81,11 +137,34 @@ You can disable sources imported from a package to prevent them from rendering i
       source_name:
         source_table_name:
           +enabled: false
+          +event_time: my_time_field
 ```
+
+
+
+
+  ```yaml
+  sources:
+    your_project_name:
+      subdirectory_name:
+        source_name:
+          source_table_name:
+            +enabled: false
+  ```
+
 ### Examples
+
+The following examples show how to configure sources in your dbt project.
+
+- [Disable all sources imported from a package](#disable-all-sources-imported-from-a-package)
+- [Conditionally enable a single source](#conditionally-enable-a-single-source)
+- [Disable a single source from a package](#disable-a-single-source-from-a-package)
+- [Configure a source with an `event_time`](#configure-a-source-with-an-event_time)
+- [Configure `meta` for a source](#configure-meta-for-a-source)
+
 #### Disable all sources imported from a package
 
 To apply a configuration to all sources included from a [package](/docs/build/packages), state your configuration under the [project name](/reference/project-configs/name.md) in the
@@ -172,6 +251,53 @@ sources:
+#### Configure a source with an `event_time`
+
+
+
+Configuring an [`event_time`](/reference/resource-configs/event-time) for a source is only available in dbt Cloud Versionless or dbt Core versions 1.9 and later.
+
+
+
+
+To configure a source with an `event_time`, specify the `event_time` field in the source configuration. This field is used to represent the actual timestamp of the event, rather than something like a loading date.
+
+For example, if you have a source table called `clickstream` in the `events` source, you can use the timestamp for each event in the `event_timestamp` column as follows:
+
+
+
+```yaml
+sources:
+  events:
+    clickstream:
+      +event_time: event_timestamp
+```
+
+
+In this example, the `event_time` is set to `event_timestamp`, which has the exact time each clickstream event happened.
+Not only is this required for the [incremental microbatching strategy](/docs/build/incremental-microbatch), but when you compare data across [CI and production](/docs/deploy/advanced-ci#speeding-up-comparisons) environments, dbt will use `event_timestamp` to filter and match data by this event-based timeframe, ensuring that only overlapping timeframes are compared.
+
+
+
+#### Configure `meta` for a source
+
+Use the `meta` field to attach metadata to sources. This is useful for tracking additional context, documentation, logging, and more.
+
+For example, you can add `meta` information to a `clickstream` source to include information about the source system:
+
+
+
+```yaml
+sources:
+  events:
+    clickstream:
+      +meta:
+        source_system: "Google analytics"
+        data_owner: "marketing_team"
+```
+
+
 ## Example source configuration
 
 The following is a valid source configuration for a project with:
 * `name: jaffle_shop`
diff --git a/website/package-lock.json b/website/package-lock.json
index df0c9652529..936f05624bb 100644
--- a/website/package-lock.json
+++ b/website/package-lock.json
@@ -5,6 +5,7 @@
   "requires": true,
   "packages": {
     "": {
+      "name": "website",
       "version": "0.0.0",
       "dependencies": {
        "@docusaurus/core": "3.4.0",
@@ -13,9 +14,9 @@
        "@docusaurus/theme-search-algolia": "3.4.0",
        "@mdx-js/react": "^3.0.1",
        "@monaco-editor/react": "^4.4.6",
-        "@stoplight/elements": "^7.7.17",
+        "@stoplight/elements": "^7.5.8",
        "@svgr/webpack": "^6.0.0",
-        "axios": "^0.27.2",
+        "axios": "^1.7.7",
        "canvas-confetti": "^1.9.2",
        "classnames": "^2.3.1",
        "clsx": "^1.1.1",
@@ -33,7 +34,6 @@
        "papaparse": "^5.3.2",
        "prism-react-renderer": "^2.3.1",
        "query-string": "^8.1.0",
-        "raw-loader": "^4.0.2",
        "react": "^18.2.0",
        "react-dom": "^18.2.0",
        "react-full-screen": "^1.1.1",
@@ -41,7 +41,7 @@
        "react-select": "^5.7.5",
        "react-tooltip": "^4.2.21",
        "redoc": "^2.0.0-rc.57",
-        "rehype-katex": "^5.0.0",
+        "rehype-katex": "^7.0.1",
        "remark-math": "^3.0.1",
        "sanitize-html": "^2.8.0",
        "slugify": "^1.6.1",
@@ -58,7 +58,7 @@
        "@testing-library/user-event": "^14.5.2",
        "@typescript-eslint/eslint-plugin": "^5.54.0",
        "@typescript-eslint/parser": "^5.54.0",
-        "css-loader": "^3.4.2",
+        "css-loader": "^7.1.2",
        "cypress": "^13.11.0",
        "dotenv": "^10.0.0",
        "eslint": "^8.35.0",
@@ -71,6 +71,7 @@
        "lint-staged": "^13.1.2",
        "path-browserify": "^1.0.1",
        "process": "^0.11.10",
+        "raw-loader": "^4.0.2",
        "stream-http": "^3.2.0",
        "style-loader": "^1.1.3",
        "svg-inline-loader": "^0.8.2",
@@ -2488,72 +2489,6 @@
       }
     }
   },
-    "node_modules/@docusaurus/core/node_modules/icss-utils": {
-      "version": "5.1.0",
-      "resolved": "https://registry.npmjs.org/icss-utils/-/icss-utils-5.1.0.tgz",
-      "integrity":
"sha512-soFhflCVWLfRNOPU3iv5Z9VUdT44xFRbzjLsEzSr5AQmgqPMTHdU3PMT1Cf1ssx8fLNJDA1juftYl+PUcv3MqA==", - "engines": { - "node": "^10 || ^12 || >= 14" - }, - "peerDependencies": { - "postcss": "^8.1.0" - } - }, - "node_modules/@docusaurus/core/node_modules/postcss-modules-extract-imports": { - "version": "3.1.0", - "resolved": "https://registry.npmjs.org/postcss-modules-extract-imports/-/postcss-modules-extract-imports-3.1.0.tgz", - "integrity": "sha512-k3kNe0aNFQDAZGbin48pL2VNidTF0w4/eASDsxlyspobzU3wZQLOGj7L9gfRe0Jo9/4uud09DsjFNH7winGv8Q==", - "engines": { - "node": "^10 || ^12 || >= 14" - }, - "peerDependencies": { - "postcss": "^8.1.0" - } - }, - "node_modules/@docusaurus/core/node_modules/postcss-modules-local-by-default": { - "version": "4.0.5", - "resolved": "https://registry.npmjs.org/postcss-modules-local-by-default/-/postcss-modules-local-by-default-4.0.5.tgz", - "integrity": "sha512-6MieY7sIfTK0hYfafw1OMEG+2bg8Q1ocHCpoWLqOKj3JXlKu4G7btkmM/B7lFubYkYWmRSPLZi5chid63ZaZYw==", - "dependencies": { - "icss-utils": "^5.0.0", - "postcss-selector-parser": "^6.0.2", - "postcss-value-parser": "^4.1.0" - }, - "engines": { - "node": "^10 || ^12 || >= 14" - }, - "peerDependencies": { - "postcss": "^8.1.0" - } - }, - "node_modules/@docusaurus/core/node_modules/postcss-modules-scope": { - "version": "3.2.0", - "resolved": "https://registry.npmjs.org/postcss-modules-scope/-/postcss-modules-scope-3.2.0.tgz", - "integrity": "sha512-oq+g1ssrsZOsx9M96c5w8laRmvEu9C3adDSjI8oTcbfkrTE8hx/zfyobUoWIxaKPO8bt6S62kxpw5GqypEw1QQ==", - "dependencies": { - "postcss-selector-parser": "^6.0.4" - }, - "engines": { - "node": "^10 || ^12 || >= 14" - }, - "peerDependencies": { - "postcss": "^8.1.0" - } - }, - "node_modules/@docusaurus/core/node_modules/postcss-modules-values": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/postcss-modules-values/-/postcss-modules-values-4.0.0.tgz", - "integrity": "sha512-RDxHkAiEGI78gS2ofyvCsu7iycRv7oqw5xMWn9iMoR0N/7mf9D50ecQqUo5BZ9Zh2vH4bCUR/ktCqbB9m8vJjQ==", - "dependencies": { - "icss-utils": "^5.0.0" - }, - "engines": { - "node": "^10 || ^12 || >= 14" - }, - "peerDependencies": { - "postcss": "^8.1.0" - } - }, "node_modules/@docusaurus/cssnano-preset": { "version": "3.4.0", "resolved": "https://registry.npmjs.org/@docusaurus/cssnano-preset/-/cssnano-preset-3.4.0.tgz", @@ -5015,9 +4950,9 @@ } }, "node_modules/@stoplight/json-schema-viewer": { - "version": "4.16.1", - "resolved": "https://registry.npmjs.org/@stoplight/json-schema-viewer/-/json-schema-viewer-4.16.1.tgz", - "integrity": "sha512-gQ1v9/Dj1VP43zERuZoFMOr7RQDBZlgfF7QFh+R0sadP6W30oYFJtD7y2PG2gIQDohKElVuPjhFUbVH/81MnSg==", + "version": "4.16.2", + "resolved": "https://registry.npmjs.org/@stoplight/json-schema-viewer/-/json-schema-viewer-4.16.2.tgz", + "integrity": "sha512-sOODscuidOTk9OMbE41XO5zt7DjKn6eoS32VtC5SJ0TbRT2vXfYVc9wrHLeae2YsNjsh98Nh+LaquGF504Ye2Q==", "dependencies": { "@stoplight/json": "^3.20.1", "@stoplight/json-schema-tree": "^4.0.0", @@ -6997,9 +6932,9 @@ "integrity": "sha512-wOuvG1SN4Us4rez+tylwwwCV1psiNVOkJeM3AUWUNWg/jDQY2+HE/444y5gc+jBmRqASOm2Oeh5c1axHobwRKQ==" }, "node_modules/@types/katex": { - "version": "0.11.1", - "resolved": "https://registry.npmjs.org/@types/katex/-/katex-0.11.1.tgz", - "integrity": "sha512-DUlIj2nk0YnJdlWgsFuVKcX27MLW0KbKmGVoUHmFr+74FYYNUDAaj9ZqTADvsbE8rfxuVmSFc7KczYn5Y09ozg==" + "version": "0.16.7", + "resolved": "https://registry.npmjs.org/@types/katex/-/katex-0.16.7.tgz", + "integrity": 
"sha512-HMwFiRujE5PjrgwHQ25+bsLJgowjGjm5Z8FVSf0N6PwgJrwxH0QxzHYDcKsTfV3wva0vzrpqMTJS2jXPr5BMEQ==" }, "node_modules/@types/mdast": { "version": "3.0.15", @@ -8190,14 +8125,20 @@ "dev": true }, "node_modules/axios": { - "version": "0.27.2", - "resolved": "https://registry.npmjs.org/axios/-/axios-0.27.2.tgz", - "integrity": "sha512-t+yRIyySRTp/wua5xEr+z1q60QmLq8ABsS5O9Me1AsE5dfKqgnCFzwiCZZ/cGNd1lq4/7akDWMxdhVlucjmnOQ==", + "version": "1.7.7", + "resolved": "https://registry.npmjs.org/axios/-/axios-1.7.7.tgz", + "integrity": "sha512-S4kL7XrjgBmvdGut0sN3yJxqYzrDOnivkBiN0OFs6hLiUam3UPvswUo0kqGyhqUZGEOytHyumEdXsAkgCOUf3Q==", "dependencies": { - "follow-redirects": "^1.14.9", - "form-data": "^4.0.0" + "follow-redirects": "^1.15.6", + "form-data": "^4.0.0", + "proxy-from-env": "^1.1.0" } }, + "node_modules/axios/node_modules/proxy-from-env": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-1.1.0.tgz", + "integrity": "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==" + }, "node_modules/b4a": { "version": "1.6.7", "resolved": "https://registry.npmjs.org/b4a/-/b4a-1.6.7.tgz", @@ -9747,9 +9688,9 @@ "integrity": "sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg==" }, "node_modules/cookie": { - "version": "0.6.0", - "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.6.0.tgz", - "integrity": "sha512-U71cyTamuh1CRNCfpGY6to28lxvNwPG4Guz/EVjgf3Jmzv0vlDp1atT9eS5dDjMYHucpHbWns6Lwf3BKz6svdw==", + "version": "0.7.1", + "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz", + "integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==", "engines": { "node": ">= 0.6" } @@ -10048,128 +9989,38 @@ } }, "node_modules/css-loader": { - "version": "3.6.0", - "resolved": "https://registry.npmjs.org/css-loader/-/css-loader-3.6.0.tgz", - "integrity": "sha512-M5lSukoWi1If8dhQAUCvj4H8vUt3vOnwbQBH9DdTm/s4Ym2B/3dPMtYZeJmq7Q3S3Pa+I94DcZ7pc9bP14cWIQ==", + "version": "7.1.2", + "resolved": "https://registry.npmjs.org/css-loader/-/css-loader-7.1.2.tgz", + "integrity": "sha512-6WvYYn7l/XEGN8Xu2vWFt9nVzrCn39vKyTEFf/ExEyoksJjjSZV/0/35XPlMbpnr6VGhZIUg5yJrL8tGfes/FA==", "dev": true, "dependencies": { - "camelcase": "^5.3.1", - "cssesc": "^3.0.0", - "icss-utils": "^4.1.1", - "loader-utils": "^1.2.3", - "normalize-path": "^3.0.0", - "postcss": "^7.0.32", - "postcss-modules-extract-imports": "^2.0.0", - "postcss-modules-local-by-default": "^3.0.2", - "postcss-modules-scope": "^2.2.0", - "postcss-modules-values": "^3.0.0", - "postcss-value-parser": "^4.1.0", - "schema-utils": "^2.7.0", - "semver": "^6.3.0" + "icss-utils": "^5.1.0", + "postcss": "^8.4.33", + "postcss-modules-extract-imports": "^3.1.0", + "postcss-modules-local-by-default": "^4.0.5", + "postcss-modules-scope": "^3.2.0", + "postcss-modules-values": "^4.0.0", + "postcss-value-parser": "^4.2.0", + "semver": "^7.5.4" }, "engines": { - "node": ">= 8.9.0" + "node": ">= 18.12.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/webpack" }, "peerDependencies": { - "webpack": "^4.0.0 || ^5.0.0" - } - }, - "node_modules/css-loader/node_modules/camelcase": { - "version": "5.3.1", - "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz", - "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==", - "dev": true, - "engines": { - "node": ">=6" - } - }, - 
"node_modules/css-loader/node_modules/json5": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz", - "integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==", - "dev": true, - "dependencies": { - "minimist": "^1.2.0" - }, - "bin": { - "json5": "lib/cli.js" - } - }, - "node_modules/css-loader/node_modules/loader-utils": { - "version": "1.4.2", - "resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.2.tgz", - "integrity": "sha512-I5d00Pd/jwMD2QCduo657+YM/6L3KZu++pmX9VFncxaxvHcru9jx1lBaFft+r4Mt2jK0Yhp41XlRAihzPxHNCg==", - "dev": true, - "dependencies": { - "big.js": "^5.2.2", - "emojis-list": "^3.0.0", - "json5": "^1.0.1" - }, - "engines": { - "node": ">=4.0.0" - } - }, - "node_modules/css-loader/node_modules/picocolors": { - "version": "0.2.1", - "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-0.2.1.tgz", - "integrity": "sha512-cMlDqaLEqfSaW8Z7N5Jw+lyIW869EzT73/F5lhtY9cLGoVxSXznfgfXMO0Z5K0o0Q2TkTXq+0KFsdnSe3jDViA==", - "dev": true - }, - "node_modules/css-loader/node_modules/postcss": { - "version": "7.0.39", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.39.tgz", - "integrity": "sha512-yioayjNbHn6z1/Bywyb2Y4s3yvDAeXGOyxqD+LnVOinq6Mdmd++SW2wUNVzavyyHxd6+DxzWGIuosg6P1Rj8uA==", - "dev": true, - "dependencies": { - "picocolors": "^0.2.1", - "source-map": "^0.6.1" - }, - "engines": { - "node": ">=6.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/postcss/" - } - }, - "node_modules/css-loader/node_modules/schema-utils": { - "version": "2.7.1", - "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-2.7.1.tgz", - "integrity": "sha512-SHiNtMOUGWBQJwzISiVYKu82GiV4QYGePp3odlY1tuKO7gPtphAT5R/py0fA6xtbgLL/RvtJZnU9b8s0F1q0Xg==", - "dev": true, - "dependencies": { - "@types/json-schema": "^7.0.5", - "ajv": "^6.12.4", - "ajv-keywords": "^3.5.2" - }, - "engines": { - "node": ">= 8.9.0" + "@rspack/core": "0.x || 1.x", + "webpack": "^5.27.0" }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/webpack" - } - }, - "node_modules/css-loader/node_modules/semver": { - "version": "6.3.1", - "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.1.tgz", - "integrity": "sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==", - "dev": true, - "bin": { - "semver": "bin/semver.js" - } - }, - "node_modules/css-loader/node_modules/source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true, - "engines": { - "node": ">=0.10.0" + "peerDependenciesMeta": { + "@rspack/core": { + "optional": true + }, + "webpack": { + "optional": true + } } }, "node_modules/css-minimizer-webpack-plugin": { @@ -11144,9 +10995,9 @@ "integrity": "sha512-QcDoBbQeYt0+3CWcK/rEbuHvwpbT/8SV9T3OSgs6cX1FlcUAkgrkqbg9zLnDrMM/rLamzQwal4LYFCiWk861Tg==" }, "node_modules/elliptic": { - "version": "6.5.7", - "resolved": "https://registry.npmjs.org/elliptic/-/elliptic-6.5.7.tgz", - "integrity": "sha512-ESVCtTwiA+XhY3wyh24QqRGBoP3rEdDUl3EDUUo9tft074fi19IrdpH7hLCMMP3CIj7jb3W96rn8lt/BqIlt5Q==", + "version": "6.6.1", + "resolved": "https://registry.npmjs.org/elliptic/-/elliptic-6.6.1.tgz", + "integrity": "sha512-RaddvvMatK2LJHqFJ+YA4WysVN5Ita9E35botqIYspQ4TkRAlCicdzKOjlyv/1Za5RyTNn7di//eEV0uTAfe3g==", 
"dependencies": { "bn.js": "^4.11.9", "brorand": "^1.1.0", @@ -11990,16 +11841,16 @@ } }, "node_modules/express": { - "version": "4.21.0", - "resolved": "https://registry.npmjs.org/express/-/express-4.21.0.tgz", - "integrity": "sha512-VqcNGcj/Id5ZT1LZ/cfihi3ttTn+NJmkli2eZADigjq29qTlWi/hAQ43t/VLPq8+UX06FCEx3ByOYet6ZFblng==", + "version": "4.21.1", + "resolved": "https://registry.npmjs.org/express/-/express-4.21.1.tgz", + "integrity": "sha512-YSFlK1Ee0/GC8QaO91tHcDxJiE/X4FbpAyQWkxAvG6AXCuR65YzK8ua6D9hvi/TzUfZMpc+BwuM1IPw8fmQBiQ==", "dependencies": { "accepts": "~1.3.8", "array-flatten": "1.1.1", "body-parser": "1.20.3", "content-disposition": "0.5.4", "content-type": "~1.0.4", - "cookie": "0.6.0", + "cookie": "0.7.1", "cookie-signature": "1.0.6", "debug": "2.6.9", "depd": "2.0.0", @@ -12169,19 +12020,6 @@ "resolved": "https://registry.npmjs.org/fast-uri/-/fast-uri-3.0.2.tgz", "integrity": "sha512-GR6f0hD7XXyNJa25Tb9BuIdN0tdr+0BMi6/CJPH3wJO1JjNG3n/VsSw38AwRdKZABm8lGbPfakLRkYzx2V9row==" }, - "node_modules/fast-url-parser": { - "version": "1.1.3", - "resolved": "https://registry.npmjs.org/fast-url-parser/-/fast-url-parser-1.1.3.tgz", - "integrity": "sha512-5jOCVXADYNuRkKFzNJ0dCCewsZiYo0dz8QNYljkOpFC6r2U4OBmKtvm/Tsuh4w1YYdDqDb31a8TVhBJ2OJKdqQ==", - "dependencies": { - "punycode": "^1.3.2" - } - }, - "node_modules/fast-url-parser/node_modules/punycode": { - "version": "1.4.1", - "resolved": "https://registry.npmjs.org/punycode/-/punycode-1.4.1.tgz", - "integrity": "sha512-jmYNElW7yvO7TV33CjSmvSiE2yco3bV2czu/OzDKdMNVZQWfxCblURLhf+47syQRBntjfLdd/H0egrzIG+oaFQ==" - }, "node_modules/fastest-stable-stringify": { "version": "2.0.2", "resolved": "https://registry.npmjs.org/fastest-stable-stringify/-/fastest-stable-stringify-2.0.2.tgz", @@ -13244,17 +13082,13 @@ "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.11.tgz", "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==" }, - "node_modules/hast-util-from-parse5": { - "version": "7.1.2", - "resolved": "https://registry.npmjs.org/hast-util-from-parse5/-/hast-util-from-parse5-7.1.2.tgz", - "integrity": "sha512-Nz7FfPBuljzsN3tCQ4kCBKqdNhQE2l0Tn+X1ubgKBPRoiDIu1mL08Cfw4k7q71+Duyaw7DXDN+VTAp4Vh3oCOw==", + "node_modules/hast-util-from-dom": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/hast-util-from-dom/-/hast-util-from-dom-5.0.0.tgz", + "integrity": "sha512-d6235voAp/XR3Hh5uy7aGLbM3S4KamdW0WEgOaU1YoewnuYw4HXb5eRtv9g65m/RFGEfUY1Mw4UqCc5Y8L4Stg==", "dependencies": { - "@types/hast": "^2.0.0", - "@types/unist": "^2.0.0", - "hastscript": "^7.0.0", - "property-information": "^6.0.0", - "vfile": "^5.0.0", - "vfile-location": "^4.0.0", + "@types/hast": "^3.0.0", + "hastscript": "^8.0.0", "web-namespaces": "^2.0.0" }, "funding": { @@ -13262,69 +13096,207 @@ "url": "https://opencollective.com/unified" } }, - "node_modules/hast-util-from-parse5/node_modules/@types/hast": { - "version": "2.3.10", - "resolved": "https://registry.npmjs.org/@types/hast/-/hast-2.3.10.tgz", - "integrity": "sha512-McWspRw8xx8J9HurkVBfYj0xKoE25tOFlHGdx4MJ5xORQrMGZNqJhVQWaIbm6Oyla5kYOXtDiopzKRJzEOkwJw==", + "node_modules/hast-util-from-dom/node_modules/hast-util-parse-selector": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/hast-util-parse-selector/-/hast-util-parse-selector-4.0.0.tgz", + "integrity": "sha512-wkQCkSYoOGCRKERFWcxMVMOcYE2K1AaNLU8DXS9arxnLOUEWbOXKXiJUNzEpqZ3JOKpnha3jkFrumEjVliDe7A==", "dependencies": { - "@types/unist": "^2" + "@types/hast": "^3.0.0" + }, + "funding": { 
+ "type": "opencollective", + "url": "https://opencollective.com/unified" } }, - "node_modules/hast-util-from-parse5/node_modules/@types/unist": { - "version": "2.0.11", - "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.11.tgz", - "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==" - }, - "node_modules/hast-util-from-parse5/node_modules/unist-util-stringify-position": { - "version": "3.0.3", - "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-3.0.3.tgz", - "integrity": "sha512-k5GzIBZ/QatR8N5X2y+drfpWG8IDBzdnVj6OInRNWm1oXrzydiaAT2OQiA8DPRRZyAKb9b6I2a6PxYklZD0gKg==", + "node_modules/hast-util-from-dom/node_modules/hastscript": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/hastscript/-/hastscript-8.0.0.tgz", + "integrity": "sha512-dMOtzCEd3ABUeSIISmrETiKuyydk1w0pa+gE/uormcTpSYuaNJPbX1NU3JLyscSLjwAQM8bWMhhIlnCqnRvDTw==", "dependencies": { - "@types/unist": "^2.0.0" + "@types/hast": "^3.0.0", + "comma-separated-tokens": "^2.0.0", + "hast-util-parse-selector": "^4.0.0", + "property-information": "^6.0.0", + "space-separated-tokens": "^2.0.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/unified" } }, - "node_modules/hast-util-from-parse5/node_modules/vfile": { - "version": "5.3.7", - "resolved": "https://registry.npmjs.org/vfile/-/vfile-5.3.7.tgz", - "integrity": "sha512-r7qlzkgErKjobAmyNIkkSpizsFPYiUPuJb5pNW1RB4JcYVZhs4lIbVqk8XPk033CV/1z8ss5pkax8SuhGpcG8g==", + "node_modules/hast-util-from-html": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/hast-util-from-html/-/hast-util-from-html-2.0.3.tgz", + "integrity": "sha512-CUSRHXyKjzHov8yKsQjGOElXy/3EKpyX56ELnkHH34vDVw1N1XSQ1ZcAvTyAPtGqLTuKP/uxM+aLkSPqF/EtMw==", "dependencies": { - "@types/unist": "^2.0.0", - "is-buffer": "^2.0.0", - "unist-util-stringify-position": "^3.0.0", - "vfile-message": "^3.0.0" + "@types/hast": "^3.0.0", + "devlop": "^1.1.0", + "hast-util-from-parse5": "^8.0.0", + "parse5": "^7.0.0", + "vfile": "^6.0.0", + "vfile-message": "^4.0.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/unified" } }, - "node_modules/hast-util-from-parse5/node_modules/vfile-message": { - "version": "3.1.4", - "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-3.1.4.tgz", - "integrity": "sha512-fa0Z6P8HUrQN4BZaX05SIVXic+7kE3b05PWAtPuYP9QLHsLKYR7/AlLW3NtOrpXRLeawpDLMsVkmk5DG0NXgWw==", + "node_modules/hast-util-from-html-isomorphic": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/hast-util-from-html-isomorphic/-/hast-util-from-html-isomorphic-2.0.0.tgz", + "integrity": "sha512-zJfpXq44yff2hmE0XmwEOzdWin5xwH+QIhMLOScpX91e/NSGPsAzNCvLQDIEPyO2TXi+lBmU6hjLIhV8MwP2kw==", "dependencies": { - "@types/unist": "^2.0.0", - "unist-util-stringify-position": "^3.0.0" + "@types/hast": "^3.0.0", + "hast-util-from-dom": "^5.0.0", + "hast-util-from-html": "^2.0.0", + "unist-util-remove-position": "^5.0.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/unified" } }, - "node_modules/hast-util-is-element": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/hast-util-is-element/-/hast-util-is-element-1.1.0.tgz", - "integrity": "sha512-oUmNua0bFbdrD/ELDSSEadRVtWZOf3iF6Lbv81naqsIV99RnSCieTbWuWCY8BAeEfKJTKl0gRdokv+dELutHGQ==", + "node_modules/hast-util-from-html/node_modules/hast-util-from-parse5": { + "version": "8.0.1", + "resolved": 
"https://registry.npmjs.org/hast-util-from-parse5/-/hast-util-from-parse5-8.0.1.tgz", + "integrity": "sha512-Er/Iixbc7IEa7r/XLtuG52zoqn/b3Xng/w6aZQ0xGVxzhw5xUFxcRqdPzP6yFi/4HBYRaifaI5fQ1RH8n0ZeOQ==", + "dependencies": { + "@types/hast": "^3.0.0", + "@types/unist": "^3.0.0", + "devlop": "^1.0.0", + "hastscript": "^8.0.0", + "property-information": "^6.0.0", + "vfile": "^6.0.0", + "vfile-location": "^5.0.0", + "web-namespaces": "^2.0.0" + }, "funding": { "type": "opencollective", "url": "https://opencollective.com/unified" } }, - "node_modules/hast-util-parse-selector": { + "node_modules/hast-util-from-html/node_modules/hast-util-parse-selector": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/hast-util-parse-selector/-/hast-util-parse-selector-4.0.0.tgz", + "integrity": "sha512-wkQCkSYoOGCRKERFWcxMVMOcYE2K1AaNLU8DXS9arxnLOUEWbOXKXiJUNzEpqZ3JOKpnha3jkFrumEjVliDe7A==", + "dependencies": { + "@types/hast": "^3.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-from-html/node_modules/hastscript": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/hastscript/-/hastscript-8.0.0.tgz", + "integrity": "sha512-dMOtzCEd3ABUeSIISmrETiKuyydk1w0pa+gE/uormcTpSYuaNJPbX1NU3JLyscSLjwAQM8bWMhhIlnCqnRvDTw==", + "dependencies": { + "@types/hast": "^3.0.0", + "comma-separated-tokens": "^2.0.0", + "hast-util-parse-selector": "^4.0.0", + "property-information": "^6.0.0", + "space-separated-tokens": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-from-html/node_modules/vfile-location": { + "version": "5.0.3", + "resolved": "https://registry.npmjs.org/vfile-location/-/vfile-location-5.0.3.tgz", + "integrity": "sha512-5yXvWDEgqeiYiBe1lbxYF7UMAIm/IcopxMHrMQDq3nvKcjPKIhZklUKL+AE7J7uApI4kwe2snsK+eI6UTj9EHg==", + "dependencies": { + "@types/unist": "^3.0.0", + "vfile": "^6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-from-parse5": { + "version": "7.1.2", + "resolved": "https://registry.npmjs.org/hast-util-from-parse5/-/hast-util-from-parse5-7.1.2.tgz", + "integrity": "sha512-Nz7FfPBuljzsN3tCQ4kCBKqdNhQE2l0Tn+X1ubgKBPRoiDIu1mL08Cfw4k7q71+Duyaw7DXDN+VTAp4Vh3oCOw==", + "dependencies": { + "@types/hast": "^2.0.0", + "@types/unist": "^2.0.0", + "hastscript": "^7.0.0", + "property-information": "^6.0.0", + "vfile": "^5.0.0", + "vfile-location": "^4.0.0", + "web-namespaces": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-from-parse5/node_modules/@types/hast": { + "version": "2.3.10", + "resolved": "https://registry.npmjs.org/@types/hast/-/hast-2.3.10.tgz", + "integrity": "sha512-McWspRw8xx8J9HurkVBfYj0xKoE25tOFlHGdx4MJ5xORQrMGZNqJhVQWaIbm6Oyla5kYOXtDiopzKRJzEOkwJw==", + "dependencies": { + "@types/unist": "^2" + } + }, + "node_modules/hast-util-from-parse5/node_modules/@types/unist": { + "version": "2.0.11", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.11.tgz", + "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==" + }, + "node_modules/hast-util-from-parse5/node_modules/unist-util-stringify-position": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-3.0.3.tgz", + "integrity": 
"sha512-k5GzIBZ/QatR8N5X2y+drfpWG8IDBzdnVj6OInRNWm1oXrzydiaAT2OQiA8DPRRZyAKb9b6I2a6PxYklZD0gKg==", + "dependencies": { + "@types/unist": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-from-parse5/node_modules/vfile": { + "version": "5.3.7", + "resolved": "https://registry.npmjs.org/vfile/-/vfile-5.3.7.tgz", + "integrity": "sha512-r7qlzkgErKjobAmyNIkkSpizsFPYiUPuJb5pNW1RB4JcYVZhs4lIbVqk8XPk033CV/1z8ss5pkax8SuhGpcG8g==", + "dependencies": { + "@types/unist": "^2.0.0", + "is-buffer": "^2.0.0", + "unist-util-stringify-position": "^3.0.0", + "vfile-message": "^3.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-from-parse5/node_modules/vfile-message": { + "version": "3.1.4", + "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-3.1.4.tgz", + "integrity": "sha512-fa0Z6P8HUrQN4BZaX05SIVXic+7kE3b05PWAtPuYP9QLHsLKYR7/AlLW3NtOrpXRLeawpDLMsVkmk5DG0NXgWw==", + "dependencies": { + "@types/unist": "^2.0.0", + "unist-util-stringify-position": "^3.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-is-element": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/hast-util-is-element/-/hast-util-is-element-1.1.0.tgz", + "integrity": "sha512-oUmNua0bFbdrD/ELDSSEadRVtWZOf3iF6Lbv81naqsIV99RnSCieTbWuWCY8BAeEfKJTKl0gRdokv+dELutHGQ==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-parse-selector": { "version": "3.1.1", "resolved": "https://registry.npmjs.org/hast-util-parse-selector/-/hast-util-parse-selector-3.1.1.tgz", "integrity": "sha512-jdlwBjEexy1oGz0aJ2f4GKMaVKkA9jwjr4MjAAI22E5fM/TXVZHuS5OpONtdeIkRKqAaryQ2E9xNQxijoThSZA==", @@ -13639,13 +13611,26 @@ "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==" }, "node_modules/hast-util-to-text": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/hast-util-to-text/-/hast-util-to-text-2.0.1.tgz", - "integrity": "sha512-8nsgCARfs6VkwH2jJU9b8LNTuR4700na+0h3PqCaEk4MAnMDeu5P0tP8mjk9LLNGxIeQRLbiDbZVw6rku+pYsQ==", + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/hast-util-to-text/-/hast-util-to-text-4.0.2.tgz", + "integrity": "sha512-KK6y/BN8lbaq654j7JgBydev7wuNMcID54lkRav1P0CaE1e47P72AWWPiGKXTJU271ooYzcvTAn/Zt0REnvc7A==", "dependencies": { - "hast-util-is-element": "^1.0.0", - "repeat-string": "^1.0.0", - "unist-util-find-after": "^3.0.0" + "@types/hast": "^3.0.0", + "@types/unist": "^3.0.0", + "hast-util-is-element": "^3.0.0", + "unist-util-find-after": "^5.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-to-text/node_modules/hast-util-is-element": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/hast-util-is-element/-/hast-util-is-element-3.0.0.tgz", + "integrity": "sha512-Val9mnv2IWpLbNPqc/pUem+a7Ipj2aHacCwgNfTiK0vJKl0LF+4Ba4+v1oPHFpf3bLYmreq0/l3Gud9S5OH42g==", + "dependencies": { + "@types/hast": "^3.0.0" }, "funding": { "type": "opencollective", @@ -13979,9 +13964,9 @@ } }, "node_modules/http-proxy-middleware": { - "version": "2.0.6", - "resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.6.tgz", - "integrity": 
"sha512-ya/UeJ6HVBYxrgYotAZo1KvPWlgB48kUJLDePFeneHsVujFaW5WNj2NgWCAE//B1Dl02BIfYlpNgBy8Kf8Rjmw==", + "version": "2.0.7", + "resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.7.tgz", + "integrity": "sha512-fgVY8AV7qU7z/MmXJ/rxwbrtQH4jBQ9m7kp3llF0liB7glmFeVZFBepQb32T3y8n8k2+AEYuMPCpinYW+/CuRA==", "dependencies": { "@types/http-proxy": "^1.17.8", "http-proxy": "^1.18.1", @@ -14124,47 +14109,14 @@ } }, "node_modules/icss-utils": { - "version": "4.1.1", - "resolved": "https://registry.npmjs.org/icss-utils/-/icss-utils-4.1.1.tgz", - "integrity": "sha512-4aFq7wvWyMHKgxsH8QQtGpvbASCf+eM3wPRLI6R+MgAnTCZ6STYsRvttLvRWK0Nfif5piF394St3HeJDaljGPA==", - "dev": true, - "dependencies": { - "postcss": "^7.0.14" - }, - "engines": { - "node": ">= 6" - } - }, - "node_modules/icss-utils/node_modules/picocolors": { - "version": "0.2.1", - "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-0.2.1.tgz", - "integrity": "sha512-cMlDqaLEqfSaW8Z7N5Jw+lyIW869EzT73/F5lhtY9cLGoVxSXznfgfXMO0Z5K0o0Q2TkTXq+0KFsdnSe3jDViA==", - "dev": true - }, - "node_modules/icss-utils/node_modules/postcss": { - "version": "7.0.39", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.39.tgz", - "integrity": "sha512-yioayjNbHn6z1/Bywyb2Y4s3yvDAeXGOyxqD+LnVOinq6Mdmd++SW2wUNVzavyyHxd6+DxzWGIuosg6P1Rj8uA==", - "dev": true, - "dependencies": { - "picocolors": "^0.2.1", - "source-map": "^0.6.1" - }, + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/icss-utils/-/icss-utils-5.1.0.tgz", + "integrity": "sha512-soFhflCVWLfRNOPU3iv5Z9VUdT44xFRbzjLsEzSr5AQmgqPMTHdU3PMT1Cf1ssx8fLNJDA1juftYl+PUcv3MqA==", "engines": { - "node": ">=6.0.0" + "node": "^10 || ^12 || >= 14" }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/postcss/" - } - }, - "node_modules/icss-utils/node_modules/source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true, - "engines": { - "node": ">=0.10.0" + "peerDependencies": { + "postcss": "^8.1.0" } }, "node_modules/ieee754": { @@ -16111,15 +16063,15 @@ } }, "node_modules/katex": { - "version": "0.13.24", - "resolved": "https://registry.npmjs.org/katex/-/katex-0.13.24.tgz", - "integrity": "sha512-jZxYuKCma3VS5UuxOx/rFV1QyGSl3Uy/i0kTJF3HgQ5xMinCQVF8Zd4bMY/9aI9b9A2pjIBOsjSSm68ykTAr8w==", + "version": "0.16.11", + "resolved": "https://registry.npmjs.org/katex/-/katex-0.16.11.tgz", + "integrity": "sha512-RQrI8rlHY92OLf3rho/Ts8i/XvjgguEjOkO1BEXcU3N8BqPpSzBNwV/G0Ukr+P/l3ivvJUE/Fa/CwbS6HesGNQ==", "funding": [ "https://opencollective.com/katex", "https://github.com/sponsors/katex" ], "dependencies": { - "commander": "^8.0.0" + "commander": "^8.3.0" }, "bin": { "katex": "cli.js" @@ -23121,181 +23073,82 @@ } }, "node_modules/postcss-modules-extract-imports": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/postcss-modules-extract-imports/-/postcss-modules-extract-imports-2.0.0.tgz", - "integrity": "sha512-LaYLDNS4SG8Q5WAWqIJgdHPJrDDr/Lv775rMBFUbgjTz6j34lUznACHcdRWroPvXANP2Vj7yNK57vp9eFqzLWQ==", - "dev": true, - "dependencies": { - "postcss": "^7.0.5" - }, + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/postcss-modules-extract-imports/-/postcss-modules-extract-imports-3.1.0.tgz", + "integrity": "sha512-k3kNe0aNFQDAZGbin48pL2VNidTF0w4/eASDsxlyspobzU3wZQLOGj7L9gfRe0Jo9/4uud09DsjFNH7winGv8Q==", "engines": { - "node": ">= 6" + 
"node": "^10 || ^12 || >= 14" + }, + "peerDependencies": { + "postcss": "^8.1.0" } }, - "node_modules/postcss-modules-extract-imports/node_modules/picocolors": { - "version": "0.2.1", - "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-0.2.1.tgz", - "integrity": "sha512-cMlDqaLEqfSaW8Z7N5Jw+lyIW869EzT73/F5lhtY9cLGoVxSXznfgfXMO0Z5K0o0Q2TkTXq+0KFsdnSe3jDViA==", - "dev": true - }, - "node_modules/postcss-modules-extract-imports/node_modules/postcss": { - "version": "7.0.39", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.39.tgz", - "integrity": "sha512-yioayjNbHn6z1/Bywyb2Y4s3yvDAeXGOyxqD+LnVOinq6Mdmd++SW2wUNVzavyyHxd6+DxzWGIuosg6P1Rj8uA==", - "dev": true, + "node_modules/postcss-modules-local-by-default": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/postcss-modules-local-by-default/-/postcss-modules-local-by-default-4.1.0.tgz", + "integrity": "sha512-rm0bdSv4jC3BDma3s9H19ZddW0aHX6EoqwDYU2IfZhRN+53QrufTRo2IdkAbRqLx4R2IYbZnbjKKxg4VN5oU9Q==", "dependencies": { - "picocolors": "^0.2.1", - "source-map": "^0.6.1" + "icss-utils": "^5.0.0", + "postcss-selector-parser": "^7.0.0", + "postcss-value-parser": "^4.1.0" }, "engines": { - "node": ">=6.0.0" + "node": "^10 || ^12 || >= 14" }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/postcss/" - } - }, - "node_modules/postcss-modules-extract-imports/node_modules/source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true, - "engines": { - "node": ">=0.10.0" + "peerDependencies": { + "postcss": "^8.1.0" } }, - "node_modules/postcss-modules-local-by-default": { - "version": "3.0.3", - "resolved": "https://registry.npmjs.org/postcss-modules-local-by-default/-/postcss-modules-local-by-default-3.0.3.tgz", - "integrity": "sha512-e3xDq+LotiGesympRlKNgaJ0PCzoUIdpH0dj47iWAui/kyTgh3CiAr1qP54uodmJhl6p9rN6BoNcdEDVJx9RDw==", - "dev": true, + "node_modules/postcss-modules-local-by-default/node_modules/postcss-selector-parser": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-7.0.0.tgz", + "integrity": "sha512-9RbEr1Y7FFfptd/1eEdntyjMwLeghW1bHX9GWjXo19vx4ytPQhANltvVxDggzJl7mnWM+dX28kb6cyS/4iQjlQ==", "dependencies": { - "icss-utils": "^4.1.1", - "postcss": "^7.0.32", - "postcss-selector-parser": "^6.0.2", - "postcss-value-parser": "^4.1.0" + "cssesc": "^3.0.0", + "util-deprecate": "^1.0.2" }, "engines": { - "node": ">= 6" + "node": ">=4" } }, - "node_modules/postcss-modules-local-by-default/node_modules/picocolors": { - "version": "0.2.1", - "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-0.2.1.tgz", - "integrity": "sha512-cMlDqaLEqfSaW8Z7N5Jw+lyIW869EzT73/F5lhtY9cLGoVxSXznfgfXMO0Z5K0o0Q2TkTXq+0KFsdnSe3jDViA==", - "dev": true - }, - "node_modules/postcss-modules-local-by-default/node_modules/postcss": { - "version": "7.0.39", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.39.tgz", - "integrity": "sha512-yioayjNbHn6z1/Bywyb2Y4s3yvDAeXGOyxqD+LnVOinq6Mdmd++SW2wUNVzavyyHxd6+DxzWGIuosg6P1Rj8uA==", - "dev": true, + "node_modules/postcss-modules-scope": { + "version": "3.2.1", + "resolved": "https://registry.npmjs.org/postcss-modules-scope/-/postcss-modules-scope-3.2.1.tgz", + "integrity": "sha512-m9jZstCVaqGjTAuny8MdgE88scJnCiQSlSrOWcTQgM2t32UBe+MUmFSO5t7VMSfAf/FJKImAxBav8ooCHJXCJA==", "dependencies": { - 
"picocolors": "^0.2.1", - "source-map": "^0.6.1" + "postcss-selector-parser": "^7.0.0" }, "engines": { - "node": ">=6.0.0" + "node": "^10 || ^12 || >= 14" }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/postcss/" - } - }, - "node_modules/postcss-modules-local-by-default/node_modules/source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true, - "engines": { - "node": ">=0.10.0" + "peerDependencies": { + "postcss": "^8.1.0" } }, - "node_modules/postcss-modules-scope": { - "version": "2.2.0", - "resolved": "https://registry.npmjs.org/postcss-modules-scope/-/postcss-modules-scope-2.2.0.tgz", - "integrity": "sha512-YyEgsTMRpNd+HmyC7H/mh3y+MeFWevy7V1evVhJWewmMbjDHIbZbOXICC2y+m1xI1UVfIT1HMW/O04Hxyu9oXQ==", - "dev": true, + "node_modules/postcss-modules-scope/node_modules/postcss-selector-parser": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-7.0.0.tgz", + "integrity": "sha512-9RbEr1Y7FFfptd/1eEdntyjMwLeghW1bHX9GWjXo19vx4ytPQhANltvVxDggzJl7mnWM+dX28kb6cyS/4iQjlQ==", "dependencies": { - "postcss": "^7.0.6", - "postcss-selector-parser": "^6.0.0" + "cssesc": "^3.0.0", + "util-deprecate": "^1.0.2" }, "engines": { - "node": ">= 6" + "node": ">=4" } }, - "node_modules/postcss-modules-scope/node_modules/picocolors": { - "version": "0.2.1", - "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-0.2.1.tgz", - "integrity": "sha512-cMlDqaLEqfSaW8Z7N5Jw+lyIW869EzT73/F5lhtY9cLGoVxSXznfgfXMO0Z5K0o0Q2TkTXq+0KFsdnSe3jDViA==", - "dev": true - }, - "node_modules/postcss-modules-scope/node_modules/postcss": { - "version": "7.0.39", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.39.tgz", - "integrity": "sha512-yioayjNbHn6z1/Bywyb2Y4s3yvDAeXGOyxqD+LnVOinq6Mdmd++SW2wUNVzavyyHxd6+DxzWGIuosg6P1Rj8uA==", - "dev": true, + "node_modules/postcss-modules-values": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/postcss-modules-values/-/postcss-modules-values-4.0.0.tgz", + "integrity": "sha512-RDxHkAiEGI78gS2ofyvCsu7iycRv7oqw5xMWn9iMoR0N/7mf9D50ecQqUo5BZ9Zh2vH4bCUR/ktCqbB9m8vJjQ==", "dependencies": { - "picocolors": "^0.2.1", - "source-map": "^0.6.1" + "icss-utils": "^5.0.0" }, "engines": { - "node": ">=6.0.0" + "node": "^10 || ^12 || >= 14" }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/postcss/" - } - }, - "node_modules/postcss-modules-scope/node_modules/source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true, - "engines": { - "node": ">=0.10.0" - } - }, - "node_modules/postcss-modules-values": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/postcss-modules-values/-/postcss-modules-values-3.0.0.tgz", - "integrity": "sha512-1//E5jCBrZ9DmRX+zCtmQtRSV6PV42Ix7Bzj9GbwJceduuf7IqP8MgeTXuRDHOWj2m0VzZD5+roFWDuU8RQjcg==", - "dev": true, - "dependencies": { - "icss-utils": "^4.0.0", - "postcss": "^7.0.6" - } - }, - "node_modules/postcss-modules-values/node_modules/picocolors": { - "version": "0.2.1", - "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-0.2.1.tgz", - "integrity": 
"sha512-cMlDqaLEqfSaW8Z7N5Jw+lyIW869EzT73/F5lhtY9cLGoVxSXznfgfXMO0Z5K0o0Q2TkTXq+0KFsdnSe3jDViA==", - "dev": true - }, - "node_modules/postcss-modules-values/node_modules/postcss": { - "version": "7.0.39", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.39.tgz", - "integrity": "sha512-yioayjNbHn6z1/Bywyb2Y4s3yvDAeXGOyxqD+LnVOinq6Mdmd++SW2wUNVzavyyHxd6+DxzWGIuosg6P1Rj8uA==", - "dev": true, - "dependencies": { - "picocolors": "^0.2.1", - "source-map": "^0.6.1" - }, - "engines": { - "node": ">=6.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/postcss/" - } - }, - "node_modules/postcss-modules-values/node_modules/source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true, - "engines": { - "node": ">=0.10.0" + "peerDependencies": { + "postcss": "^8.1.0" } }, "node_modules/postcss-normalize-charset": { @@ -24189,6 +24042,7 @@ "version": "4.0.2", "resolved": "https://registry.npmjs.org/raw-loader/-/raw-loader-4.0.2.tgz", "integrity": "sha512-ZnScIV3ag9A4wPX/ZayxL/jZH+euYb6FcUinPcgiQW0+UBtEv0O6Q3lGd3cqJ+GHH+rksEv3Pj99oxJ3u3VIKA==", + "dev": true, "dependencies": { "loader-utils": "^2.0.0", "schema-utils": "^3.0.0" @@ -24208,6 +24062,7 @@ "version": "3.3.0", "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-3.3.0.tgz", "integrity": "sha512-pN/yOAvcC+5rQ5nERGuwrjLlYvLTbCibnZ1I7B1LaiAz9BRBlE9GMgE/eqV30P7aJQUf7Ddimy/RsbYO/GrVGg==", + "dev": true, "dependencies": { "@types/json-schema": "^7.0.8", "ajv": "^6.12.5", @@ -24872,312 +24727,23 @@ } }, "node_modules/rehype-katex": { - "version": "5.0.0", - "resolved": "https://registry.npmjs.org/rehype-katex/-/rehype-katex-5.0.0.tgz", - "integrity": "sha512-ksSuEKCql/IiIadOHiKRMjypva9BLhuwQNascMqaoGLDVd0k2NlE2wMvgZ3rpItzRKCd6vs8s7MFbb8pcR0AEg==", - "dependencies": { - "@types/katex": "^0.11.0", - "hast-util-to-text": "^2.0.0", - "katex": "^0.13.0", - "rehype-parse": "^7.0.0", - "unified": "^9.0.0", - "unist-util-visit": "^2.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-katex/node_modules/@types/unist": { - "version": "2.0.11", - "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.11.tgz", - "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==" - }, - "node_modules/rehype-katex/node_modules/bail": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/bail/-/bail-1.0.5.tgz", - "integrity": "sha512-xFbRxM1tahm08yHBP16MMjVUAvDaBMD38zsM9EMAUN61omwLmKlOpB/Zku5QkjZ8TZ4vn53pj+t518cH0S03RQ==", - "funding": { - "type": "github", - "url": "https://github.com/sponsors/wooorm" - } - }, - "node_modules/rehype-katex/node_modules/is-plain-obj": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-2.1.0.tgz", - "integrity": "sha512-YWnfyRwxL/+SsrWYfOpUtz5b3YD+nyfkHvjbcanzk8zgyO4ASD67uVMRt8k5bM4lLMDnXfriRhOpemw+NfT1eA==", - "engines": { - "node": ">=8" - } - }, - "node_modules/rehype-katex/node_modules/trough": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/trough/-/trough-1.0.5.tgz", - "integrity": "sha512-rvuRbTarPXmMb79SmzEp8aqXNKcK+y0XaB298IXueQ8I2PsrATcPBCSPyK/dDNa2iWOhKlfNnOjdAOTBU/nkFA==", - "funding": { - "type": "github", - "url": "https://github.com/sponsors/wooorm" - } - }, - 
"node_modules/rehype-katex/node_modules/unified": { - "version": "9.2.2", - "resolved": "https://registry.npmjs.org/unified/-/unified-9.2.2.tgz", - "integrity": "sha512-Sg7j110mtefBD+qunSLO1lqOEKdrwBFBrR6Qd8f4uwkhWNlbkaqwHse6e7QvD3AP/MNoJdEDLaf8OxYyoWgorQ==", - "dependencies": { - "bail": "^1.0.0", - "extend": "^3.0.0", - "is-buffer": "^2.0.0", - "is-plain-obj": "^2.0.0", - "trough": "^1.0.0", - "vfile": "^4.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-katex/node_modules/unist-util-is": { - "version": "4.1.0", - "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.1.0.tgz", - "integrity": "sha512-ZOQSsnce92GrxSqlnEEseX0gi7GH9zTJZ0p9dtu87WRb/37mMPO2Ilx1s/t9vBHrFhbgweUwb+t7cIn5dxPhZg==", - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-katex/node_modules/unist-util-stringify-position": { - "version": "2.0.3", - "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-2.0.3.tgz", - "integrity": "sha512-3faScn5I+hy9VleOq/qNbAd6pAx7iH5jYBMS9I1HgQVijz/4mv5Bvw5iw1sC/90CODiKo81G/ps8AJrISn687g==", - "dependencies": { - "@types/unist": "^2.0.2" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-katex/node_modules/unist-util-visit": { - "version": "2.0.3", - "resolved": "https://registry.npmjs.org/unist-util-visit/-/unist-util-visit-2.0.3.tgz", - "integrity": "sha512-iJ4/RczbJMkD0712mGktuGpm/U4By4FfDonL7N/9tATGIF4imikjOuagyMY53tnZq3NP6BcmlrHhEKAfGWjh7Q==", - "dependencies": { - "@types/unist": "^2.0.0", - "unist-util-is": "^4.0.0", - "unist-util-visit-parents": "^3.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-katex/node_modules/unist-util-visit-parents": { - "version": "3.1.1", - "resolved": "https://registry.npmjs.org/unist-util-visit-parents/-/unist-util-visit-parents-3.1.1.tgz", - "integrity": "sha512-1KROIZWo6bcMrZEwiH2UrXDyalAa0uqzWCxCJj6lPOvTve2WkfgCytoDTPaMnodXh1WrXOq0haVYHj99ynJlsg==", - "dependencies": { - "@types/unist": "^2.0.0", - "unist-util-is": "^4.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-katex/node_modules/vfile": { - "version": "4.2.1", - "resolved": "https://registry.npmjs.org/vfile/-/vfile-4.2.1.tgz", - "integrity": "sha512-O6AE4OskCG5S1emQ/4gl8zK586RqA3srz3nfK/Viy0UPToBc5Trp9BVFb1u0CjsKrAWwnpr4ifM/KBXPWwJbCA==", - "dependencies": { - "@types/unist": "^2.0.0", - "is-buffer": "^2.0.0", - "unist-util-stringify-position": "^2.0.0", - "vfile-message": "^2.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-katex/node_modules/vfile-message": { - "version": "2.0.4", - "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-2.0.4.tgz", - "integrity": "sha512-DjssxRGkMvifUOJre00juHoP9DPWuzjxKuMDrhNbk2TdaYYBNMStsNhEOt3idrtI12VQYM/1+iM0KOzXi4pxwQ==", - "dependencies": { - "@types/unist": "^2.0.0", - "unist-util-stringify-position": "^2.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-parse": { "version": "7.0.1", - "resolved": "https://registry.npmjs.org/rehype-parse/-/rehype-parse-7.0.1.tgz", - "integrity": 
"sha512-fOiR9a9xH+Le19i4fGzIEowAbwG7idy2Jzs4mOrFWBSJ0sNUgy0ev871dwWnbOo371SjgjG4pwzrbgSVrKxecw==", - "dependencies": { - "hast-util-from-parse5": "^6.0.0", - "parse5": "^6.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-parse/node_modules/@types/hast": { - "version": "2.3.10", - "resolved": "https://registry.npmjs.org/@types/hast/-/hast-2.3.10.tgz", - "integrity": "sha512-McWspRw8xx8J9HurkVBfYj0xKoE25tOFlHGdx4MJ5xORQrMGZNqJhVQWaIbm6Oyla5kYOXtDiopzKRJzEOkwJw==", - "dependencies": { - "@types/unist": "^2" - } - }, - "node_modules/rehype-parse/node_modules/@types/parse5": { - "version": "5.0.3", - "resolved": "https://registry.npmjs.org/@types/parse5/-/parse5-5.0.3.tgz", - "integrity": "sha512-kUNnecmtkunAoQ3CnjmMkzNU/gtxG8guhi+Fk2U/kOpIKjIMKnXGp4IJCgQJrXSgMsWYimYG4TGjz/UzbGEBTw==" - }, - "node_modules/rehype-parse/node_modules/@types/unist": { - "version": "2.0.11", - "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.11.tgz", - "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==" - }, - "node_modules/rehype-parse/node_modules/comma-separated-tokens": { - "version": "1.0.8", - "resolved": "https://registry.npmjs.org/comma-separated-tokens/-/comma-separated-tokens-1.0.8.tgz", - "integrity": "sha512-GHuDRO12Sypu2cV70d1dkA2EUmXHgntrzbpvOB+Qy+49ypNfGgFQIC2fhhXbnyrJRynDCAARsT7Ou0M6hirpfw==", - "funding": { - "type": "github", - "url": "https://github.com/sponsors/wooorm" - } - }, - "node_modules/rehype-parse/node_modules/hast-util-from-parse5": { - "version": "6.0.1", - "resolved": "https://registry.npmjs.org/hast-util-from-parse5/-/hast-util-from-parse5-6.0.1.tgz", - "integrity": "sha512-jeJUWiN5pSxW12Rh01smtVkZgZr33wBokLzKLwinYOUfSzm1Nl/c3GUGebDyOKjdsRgMvoVbV0VpAcpjF4NrJA==", - "dependencies": { - "@types/parse5": "^5.0.0", - "hastscript": "^6.0.0", - "property-information": "^5.0.0", - "vfile": "^4.0.0", - "vfile-location": "^3.2.0", - "web-namespaces": "^1.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-parse/node_modules/hast-util-parse-selector": { - "version": "2.2.5", - "resolved": "https://registry.npmjs.org/hast-util-parse-selector/-/hast-util-parse-selector-2.2.5.tgz", - "integrity": "sha512-7j6mrk/qqkSehsM92wQjdIgWM2/BW61u/53G6xmC8i1OmEdKLHbk419QKQUjz6LglWsfqoiHmyMRkP1BGjecNQ==", - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-parse/node_modules/hastscript": { - "version": "6.0.0", - "resolved": "https://registry.npmjs.org/hastscript/-/hastscript-6.0.0.tgz", - "integrity": "sha512-nDM6bvd7lIqDUiYEiu5Sl/+6ReP0BMk/2f4U/Rooccxkj0P5nm+acM5PrGJ/t5I8qPGiqZSE6hVAwZEdZIvP4w==", - "dependencies": { - "@types/hast": "^2.0.0", - "comma-separated-tokens": "^1.0.0", - "hast-util-parse-selector": "^2.0.0", - "property-information": "^5.0.0", - "space-separated-tokens": "^1.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-parse/node_modules/parse5": { - "version": "6.0.1", - "resolved": "https://registry.npmjs.org/parse5/-/parse5-6.0.1.tgz", - "integrity": "sha512-Ofn/CTFzRGTTxwpNEs9PP93gXShHcTq255nzRYSKe8AkVpZY7e1fpmTfOyoIvjP5HG7Z2ZM7VS9PPhQGW2pOpw==" - }, - "node_modules/rehype-parse/node_modules/property-information": { - "version": "5.6.0", - "resolved": 
"https://registry.npmjs.org/property-information/-/property-information-5.6.0.tgz", - "integrity": "sha512-YUHSPk+A30YPv+0Qf8i9Mbfe/C0hdPXk1s1jPVToV8pk8BQtpw10ct89Eo7OWkutrwqvT0eicAxlOg3dOAu8JA==", - "dependencies": { - "xtend": "^4.0.0" - }, - "funding": { - "type": "github", - "url": "https://github.com/sponsors/wooorm" - } - }, - "node_modules/rehype-parse/node_modules/space-separated-tokens": { - "version": "1.1.5", - "resolved": "https://registry.npmjs.org/space-separated-tokens/-/space-separated-tokens-1.1.5.tgz", - "integrity": "sha512-q/JSVd1Lptzhf5bkYm4ob4iWPjx0KiRe3sRFBNrVqbJkFaBm5vbbowy1mymoPNLRa52+oadOhJ+K49wsSeSjTA==", - "funding": { - "type": "github", - "url": "https://github.com/sponsors/wooorm" - } - }, - "node_modules/rehype-parse/node_modules/unist-util-stringify-position": { - "version": "2.0.3", - "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-2.0.3.tgz", - "integrity": "sha512-3faScn5I+hy9VleOq/qNbAd6pAx7iH5jYBMS9I1HgQVijz/4mv5Bvw5iw1sC/90CODiKo81G/ps8AJrISn687g==", - "dependencies": { - "@types/unist": "^2.0.2" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-parse/node_modules/vfile": { - "version": "4.2.1", - "resolved": "https://registry.npmjs.org/vfile/-/vfile-4.2.1.tgz", - "integrity": "sha512-O6AE4OskCG5S1emQ/4gl8zK586RqA3srz3nfK/Viy0UPToBc5Trp9BVFb1u0CjsKrAWwnpr4ifM/KBXPWwJbCA==", + "resolved": "https://registry.npmjs.org/rehype-katex/-/rehype-katex-7.0.1.tgz", + "integrity": "sha512-OiM2wrZ/wuhKkigASodFoo8wimG3H12LWQaH8qSPVJn9apWKFSH3YOCtbKpBorTVw/eI7cuT21XBbvwEswbIOA==", "dependencies": { - "@types/unist": "^2.0.0", - "is-buffer": "^2.0.0", - "unist-util-stringify-position": "^2.0.0", - "vfile-message": "^2.0.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-parse/node_modules/vfile-location": { - "version": "3.2.0", - "resolved": "https://registry.npmjs.org/vfile-location/-/vfile-location-3.2.0.tgz", - "integrity": "sha512-aLEIZKv/oxuCDZ8lkJGhuhztf/BW4M+iHdCwglA/eWc+vtuRFJj8EtgceYFX4LRjOhCAAiNHsKGssC6onJ+jbA==", - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, - "node_modules/rehype-parse/node_modules/vfile-message": { - "version": "2.0.4", - "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-2.0.4.tgz", - "integrity": "sha512-DjssxRGkMvifUOJre00juHoP9DPWuzjxKuMDrhNbk2TdaYYBNMStsNhEOt3idrtI12VQYM/1+iM0KOzXi4pxwQ==", - "dependencies": { - "@types/unist": "^2.0.0", - "unist-util-stringify-position": "^2.0.0" + "@types/hast": "^3.0.0", + "@types/katex": "^0.16.0", + "hast-util-from-html-isomorphic": "^2.0.0", + "hast-util-to-text": "^4.0.0", + "katex": "^0.16.0", + "unist-util-visit-parents": "^6.0.0", + "vfile": "^6.0.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/unified" } }, - "node_modules/rehype-parse/node_modules/web-namespaces": { - "version": "1.1.4", - "resolved": "https://registry.npmjs.org/web-namespaces/-/web-namespaces-1.1.4.tgz", - "integrity": "sha512-wYxSGajtmoP4WxfejAPIr4l0fVh+jeMXZb08wNc0tMg6xsfZXj3cECqIK0G7ZAqUq0PP8WlMDtaOGVBTAWztNw==", - "funding": { - "type": "github", - "url": "https://github.com/sponsors/wooorm" - } - }, "node_modules/rehype-raw": { "version": "7.0.0", "resolved": "https://registry.npmjs.org/rehype-raw/-/rehype-raw-7.0.0.tgz", @@ -26405,17 +25971,16 @@ } }, "node_modules/serve-handler": { - "version": "6.1.5", - 
"resolved": "https://registry.npmjs.org/serve-handler/-/serve-handler-6.1.5.tgz", - "integrity": "sha512-ijPFle6Hwe8zfmBxJdE+5fta53fdIY0lHISJvuikXB3VYFafRjMRpOffSPvCYsbKyBA7pvy9oYr/BT1O3EArlg==", + "version": "6.1.6", + "resolved": "https://registry.npmjs.org/serve-handler/-/serve-handler-6.1.6.tgz", + "integrity": "sha512-x5RL9Y2p5+Sh3D38Fh9i/iQ5ZK+e4xuXRd/pGbM4D13tgo/MGwbttUk8emytcr1YYzBYs+apnUngBDFYfpjPuQ==", "dependencies": { "bytes": "3.0.0", "content-disposition": "0.5.2", - "fast-url-parser": "1.1.3", "mime-types": "2.1.18", "minimatch": "3.1.2", "path-is-inside": "1.0.2", - "path-to-regexp": "2.2.1", + "path-to-regexp": "3.3.0", "range-parser": "1.2.0" } }, @@ -26439,9 +26004,9 @@ } }, "node_modules/serve-handler/node_modules/path-to-regexp": { - "version": "2.2.1", - "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-2.2.1.tgz", - "integrity": "sha512-gu9bD6Ta5bwGrrU8muHzVOBFFREpp2iRkVfhBJahwJ6p6Xw20SjT0MxLnwkjOibQmGSYhiUnf2FLe7k+jcFmGQ==" + "version": "3.3.0", + "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-3.3.0.tgz", + "integrity": "sha512-qyCH421YQPS2WFDxDjftfc1ZR5WKQzVzqsp4n9M2kQhVOo/ByahFoUNJfl58kOcEGfQ//7weFTDhm+ss8Ecxgw==" }, "node_modules/serve-index": { "version": "1.9.1", @@ -28345,26 +27910,18 @@ "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==" }, "node_modules/unist-util-find-after": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/unist-util-find-after/-/unist-util-find-after-3.0.0.tgz", - "integrity": "sha512-ojlBqfsBftYXExNu3+hHLfJQ/X1jYY/9vdm4yZWjIbf0VuWF6CRufci1ZyoD/wV2TYMKxXUoNuoqwy+CkgzAiQ==", + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/unist-util-find-after/-/unist-util-find-after-5.0.0.tgz", + "integrity": "sha512-amQa0Ep2m6hE2g72AugUItjbuM8X8cGQnFoHk0pGfrFeT9GZhzN5SW8nRsiGKK7Aif4CrACPENkA6P/Lw6fHGQ==", "dependencies": { - "unist-util-is": "^4.0.0" + "@types/unist": "^3.0.0", + "unist-util-is": "^6.0.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/unified" } }, - "node_modules/unist-util-find-after/node_modules/unist-util-is": { - "version": "4.1.0", - "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.1.0.tgz", - "integrity": "sha512-ZOQSsnce92GrxSqlnEEseX0gi7GH9zTJZ0p9dtu87WRb/37mMPO2Ilx1s/t9vBHrFhbgweUwb+t7cIn5dxPhZg==", - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/unified" - } - }, "node_modules/unist-util-generated": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/unist-util-generated/-/unist-util-generated-2.0.1.tgz", @@ -28415,6 +27972,19 @@ "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.11.tgz", "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==" }, + "node_modules/unist-util-remove-position": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/unist-util-remove-position/-/unist-util-remove-position-5.0.0.tgz", + "integrity": "sha512-Hp5Kh3wLxv0PHj9m2yZhhLt58KzPtEYKQQ4yxfYFEO7EvHwzyDYnduhHnY1mDxoqr7VUwVuHXk9RXKIiYS1N8Q==", + "dependencies": { + "@types/unist": "^3.0.0", + "unist-util-visit": "^5.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, "node_modules/unist-util-select": { "version": "4.0.3", "resolved": "https://registry.npmjs.org/unist-util-select/-/unist-util-select-4.0.3.tgz", diff --git a/website/package.json b/website/package.json index 
a16c8f9db9b..73cca4c63e3 100644
--- a/website/package.json
+++ b/website/package.json
@@ -16,9 +16,9 @@
     "@docusaurus/theme-search-algolia": "3.4.0",
     "@mdx-js/react": "^3.0.1",
     "@monaco-editor/react": "^4.4.6",
-    "@stoplight/elements": "^7.7.17",
+    "@stoplight/elements": "^7.5.8",
     "@svgr/webpack": "^6.0.0",
-    "axios": "^0.27.2",
+    "axios": "^1.7.7",
     "canvas-confetti": "^1.9.2",
     "classnames": "^2.3.1",
     "clsx": "^1.1.1",
@@ -36,7 +36,6 @@
     "papaparse": "^5.3.2",
     "prism-react-renderer": "^2.3.1",
     "query-string": "^8.1.0",
-    "raw-loader": "^4.0.2",
     "react": "^18.2.0",
     "react-dom": "^18.2.0",
     "react-full-screen": "^1.1.1",
@@ -44,7 +43,7 @@
     "react-select": "^5.7.5",
     "react-tooltip": "^4.2.21",
     "redoc": "^2.0.0-rc.57",
-    "rehype-katex": "^5.0.0",
+    "rehype-katex": "^7.0.1",
     "remark-math": "^3.0.1",
     "sanitize-html": "^2.8.0",
     "slugify": "^1.6.1",
@@ -64,7 +63,7 @@
     "@testing-library/user-event": "^14.5.2",
     "@typescript-eslint/eslint-plugin": "^5.54.0",
     "@typescript-eslint/parser": "^5.54.0",
-    "css-loader": "^3.4.2",
+    "css-loader": "^7.1.2",
    "cypress": "^13.11.0",
     "dotenv": "^10.0.0",
     "eslint": "^8.35.0",
@@ -77,6 +76,7 @@
     "lint-staged": "^13.1.2",
     "path-browserify": "^1.0.1",
     "process": "^0.11.10",
+    "raw-loader": "^4.0.2",
     "stream-http": "^3.2.0",
     "style-loader": "^1.1.3",
     "svg-inline-loader": "^0.8.2",
diff --git a/website/sidebars.js b/website/sidebars.js
index b5134479837..04afb7c0c99 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -926,6 +926,7 @@ const sidebarSettings = {
         "reference/resource-configs/alias",
         "reference/resource-configs/database",
         "reference/resource-configs/enabled",
+        "reference/resource-configs/event-time",
         "reference/resource-configs/full_refresh",
         "reference/resource-configs/contract",
         "reference/resource-configs/grants",
@@ -978,6 +979,7 @@
         "reference/resource-configs/updated_at",
         "reference/resource-configs/invalidate_hard_deletes",
         "reference/resource-configs/snapshot_meta_column_names",
+        "reference/resource-configs/dbt_valid_to_current",
       ],
     },
     {
diff --git a/website/src/css/custom.css b/website/src/css/custom.css
index e240a5dfabf..b8979ffc943 100644
--- a/website/src/css/custom.css
+++ b/website/src/css/custom.css
@@ -2112,3 +2112,32 @@ h2.anchor.clicked a.hash-link:before {
     flex-direction: column;
   }
 }
+
+.markdown table th,
+.markdown table td {
+  padding: 8px;
+  border: 1px solid var(--table-border-color);
+  word-wrap: break-word;
+  white-space: normal;
+  text-align: left;
+}
+
+table th {
+  background-color: #ED7254; /* Table header background color */
+  color: #ffffff; /* White text on the header background */
+  font-weight: bold;
+}
+
+:root {
+  --table-border-color: #000000; /* Light mode table border color */
+}
+
+/* Dark mode table border color */
+[data-theme="dark"] {
+  --table-border-color: #ddd;
+}
+
+/* Dark mode table header text */
+[data-theme="dark"] table th {
+  color: #000000;
+}
diff --git a/website/static/img/adapter-guide/0-full-release-notes.png b/website/static/img/adapter-guide/0-full-release-notes.png
index 6cb9f0ae8ed..284343ff955 100644
Binary files a/website/static/img/adapter-guide/0-full-release-notes.png and b/website/static/img/adapter-guide/0-full-release-notes.png differ
diff --git a/website/static/img/adapter-guide/1-announcement.png b/website/static/img/adapter-guide/1-announcement.png
index 587fee769d3..90bc965278f 100644
Binary files
a/website/static/img/adapter-guide/1-announcement.png and b/website/static/img/adapter-guide/1-announcement.png differ diff --git a/website/static/img/adapter-guide/2-short-description.png b/website/static/img/adapter-guide/2-short-description.png index 167457cbcf3..16c128f94c8 100644 Binary files a/website/static/img/adapter-guide/2-short-description.png and b/website/static/img/adapter-guide/2-short-description.png differ diff --git a/website/static/img/adapter-guide/3-additional-resources.png b/website/static/img/adapter-guide/3-additional-resources.png index ba52aed613e..715978a119c 100644 Binary files a/website/static/img/adapter-guide/3-additional-resources.png and b/website/static/img/adapter-guide/3-additional-resources.png differ diff --git a/website/static/img/adapter-guide/4-installation.png b/website/static/img/adapter-guide/4-installation.png index d075b3c0569..80ced6e75dc 100644 Binary files a/website/static/img/adapter-guide/4-installation.png and b/website/static/img/adapter-guide/4-installation.png differ diff --git a/website/static/img/adapter-guide/6-thank-contribs.png b/website/static/img/adapter-guide/6-thank-contribs.png index 289d67ea5b3..815f6235c70 100644 Binary files a/website/static/img/adapter-guide/6-thank-contribs.png and b/website/static/img/adapter-guide/6-thank-contribs.png differ diff --git a/website/static/img/blog/2024-05-07-unit-testing/unit-test-terminal-output.png b/website/static/img/blog/2024-05-07-unit-testing/unit-test-terminal-output.png new file mode 100644 index 00000000000..9e68587fa61 Binary files /dev/null and b/website/static/img/blog/2024-05-07-unit-testing/unit-test-terminal-output.png differ diff --git a/website/static/img/docs/dbt-cloud/Navigate To Account Settings.png b/website/static/img/docs/dbt-cloud/Navigate To Account Settings.png index 08848fe39b1..cd4792b5c34 100644 Binary files a/website/static/img/docs/dbt-cloud/Navigate To Account Settings.png and b/website/static/img/docs/dbt-cloud/Navigate To Account Settings.png differ diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/connecting-github/github-connect-1.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/connecting-github/github-connect-1.png new file mode 100644 index 00000000000..31becd8c453 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/connecting-github/github-connect-1.png differ diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/connecting-github/github-connect.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/connecting-github/github-connect.png new file mode 100644 index 00000000000..18869ab426f Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/connecting-github/github-connect.png differ diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/delete-environment.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/delete-environment.png new file mode 100644 index 00000000000..58225b53a57 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/delete-environment.png differ diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/delete-job.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/delete-job.png new file mode 100644 index 00000000000..c8817e08898 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/delete-job.png differ diff --git 
a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/dev-environment-custom-branch.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/dev-environment-custom-branch.png index 2ccf3ff9e76..ca2d0cd4e8e 100644 Binary files a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/dev-environment-custom-branch.png and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/dev-environment-custom-branch.png differ diff --git a/website/static/img/docs/dbt-cloud/cloud-ide/gitignore-italics.png b/website/static/img/docs/dbt-cloud/cloud-ide/gitignore-italics.png new file mode 100644 index 00000000000..943bbcfdb3f Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-ide/gitignore-italics.png differ diff --git a/website/static/img/docs/dbt-cloud/delete_projects_from_dbt_cloud.png b/website/static/img/docs/dbt-cloud/delete_projects_from_dbt_cloud.png new file mode 100644 index 00000000000..c3a47797e84 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/delete_projects_from_dbt_cloud.png differ diff --git a/website/static/img/docs/dbt-cloud/delete_user.png b/website/static/img/docs/dbt-cloud/delete_user.png new file mode 100644 index 00000000000..e767af673d8 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/delete_user.png differ diff --git a/website/static/img/docs/dbt-cloud/disconnect-repo.png b/website/static/img/docs/dbt-cloud/disconnect-repo.png new file mode 100644 index 00000000000..084bea9cfd7 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/disconnect-repo.png differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/navigate-to-env-vars.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/navigate-to-env-vars.png new file mode 100644 index 00000000000..fc72778ff33 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/navigate-to-env-vars.png differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/data-sources.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/data-sources.png index be7a96f7177..8119f404742 100644 Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/data-sources.png and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/data-sources.png differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/documentation-job-execution-settings.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/documentation-job-execution-settings.png index 845e1fcf7a7..0886f82dc0c 100644 Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/documentation-job-execution-settings.png and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/documentation-job-execution-settings.png differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/documentation-project-details.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/documentation-project-details.png index 6c5e845284d..7aae09edc14 100644 Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/documentation-project-details.png and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/documentation-project-details.png differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/jobs-settings-target-name.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/jobs-settings-target-name.png index cdaaef68ed1..3249a01c0db 100644 Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/jobs-settings-target-name.png and 
b/website/static/img/docs/dbt-cloud/using-dbt-cloud/jobs-settings-target-name.png differ diff --git a/website/static/img/docs/deploy/apples_to_apples.png b/website/static/img/docs/deploy/apples_to_apples.png new file mode 100644 index 00000000000..b1216b6eeb2 Binary files /dev/null and b/website/static/img/docs/deploy/apples_to_apples.png differ diff --git a/website/static/img/docs/deploy/dbt-compare.jpg b/website/static/img/docs/deploy/dbt-compare.jpg new file mode 100644 index 00000000000..a7f27d31efa Binary files /dev/null and b/website/static/img/docs/deploy/dbt-compare.jpg differ diff --git a/website/static/img/guides/dbt-ecosystem/dbt-python-snowpark/5-development-schema-name/1-settings-gear-icon.png b/website/static/img/guides/dbt-ecosystem/dbt-python-snowpark/5-development-schema-name/1-settings-gear-icon.png index c23cc053998..941ac76c093 100644 Binary files a/website/static/img/guides/dbt-ecosystem/dbt-python-snowpark/5-development-schema-name/1-settings-gear-icon.png and b/website/static/img/guides/dbt-ecosystem/dbt-python-snowpark/5-development-schema-name/1-settings-gear-icon.png differ