
[ADAP-432] [Feature] Snowflake Adapter supports INSERT OVERWRITE #550

Open
markproctor1 opened this issue Apr 4, 2023 · 26 comments
Labels: enhancement (New feature or request), help_wanted (Extra attention is needed)

Comments

@markproctor1

markproctor1 commented Apr 4, 2023

Is this your first time submitting a feature request?

  • I have read the expectations for open source contributors
  • I have searched the existing issues, and I could not find an existing issue for this feature
  • I am requesting a straightforward extension of existing dbt-snowflake functionality, rather than a Big Idea better suited to a discussion

Describe the feature

Snowflake has an optional parameter for INSERT that truncates the target table before inserting. It runs as a single transaction and avoids dropping or recreating the table.

https://docs.snowflake.com/en/sql-reference/sql/insert#optional-parameters
INSERT [ OVERWRITE ] INTO <target_table>

Support this feature in dbt
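
For illustration, a minimal sketch of the statement in question (table and column names are hypothetical):

```sql
-- INSERT OVERWRITE truncates the target and inserts the new rows
-- in a single transaction, without dropping or recreating the table.
insert overwrite into analytics.dim_customers (customer_id, customer_name)
select customer_id, customer_name
from analytics.stg_customers;
```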

Describe alternatives you've considered

There is no single-transaction alternative to running TRUNCATE (or DELETE) and then INSERT; Snowflake's documentation confirms this.

Who will this benefit?

Anyone who needs to avoid DROP/RECREATE while still doing a full load will benefit, with the added bonus that the behavior stays single-transaction.

In our use case, we don't want tables to be dropped, because the table has permissions and policies which need to stay in place regardless of which process is doing the load. At the same time, the rows of the table are completely replaced during the load.

Are you interested in contributing this feature?

Yes. I need to understand materializations better, but in my naivete this seems like a simple and valuable feature.

Anything else?

Matt Winkler already built this:
https://github.com/matt-winkler/dbt_workspace/blob/master/dbt/macros/materializations/snowflake_custom_incremental/insert_overwrite.sql (directory: https://github.com/matt-winkler/dbt_workspace/tree/master/dbt/macros/materializations/snowflake_custom_incremental)

@markproctor1 markproctor1 added enhancement New feature or request triage labels Apr 4, 2023
@github-actions github-actions bot changed the title [Feature] Snowflake Adapter supports INSERT OVERWRITE [ADAP-432] [Feature] Snowflake Adapter supports INSERT OVERWRITE Apr 4, 2023
@Fleid
Contributor

Fleid commented Apr 5, 2023

Hey there!

I'm still trying to understand the scope of work here.

Is the ask to add a model config applicable to table and incremental only, to indicate that CREATE OR REPLACE should be avoided in favor of INSERT OVERWRITE?

This would mean that normal runs for table models, and full-refresh runs for both table and incremental models, would be impacted?
Is that a fair description of the requirements?

@Fleid Fleid self-assigned this Apr 5, 2023
@Fleid Fleid added awaiting_response enhancement New feature or request and removed enhancement New feature or request triage labels Apr 5, 2023
@adammarples

My understanding is that this would be best as a new incremental strategy

incremental_strategy: insert_overwrite

+1 for this feature!

@Fleid
Contributor

Fleid commented Apr 6, 2023

Hi @adammarples - can you please tell me more?

On the adapters where we support an insert_overwrite strategy for incremental models, we do so because the overwrite happens at the partition level. We only drop and regenerate the partitions where data has actually changed.
In Snowflake this is not possible, as the entire table is truncated with INSERT OVERWRITE. So to me it's not incremental anymore? Am I missing something?
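
For context, a sketch of how the existing partition-scoped strategy is configured on an adapter like dbt-bigquery (model and column names are hypothetical):

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='insert_overwrite',
    partition_by={'field': 'event_date', 'data_type': 'date'}
) }}

-- Only the partitions touched by this query are replaced on an
-- incremental run; the rest of the table is left untouched.
select event_date, user_id, event_name
from {{ ref('stg_events') }}
{% if is_incremental() %}
where event_date >= date_sub(current_date(), interval 3 day)
{% endif %}
```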

@markproctor1
Author

@Fleid It would be better not to list it under incremental, but some of the other behaviors of the table materialization, such as potentially dropping / replacing the table, are exactly what INSERT OVERWRITE is meant to avoid.

@Fleid
Contributor

Fleid commented Apr 6, 2023

@markproctor1 I think we agree here. There is value, but it's not limited to incremental models. So it's not an incremental strategy, but rather an overall switch that replaces all drop/replace operations with INSERT OVERWRITE.

@markproctor1
Author

@Fleid What goes into the decision to make it a switch rather than an optional part of table materialization? Someone out there probably LOVES the fact that their tables get dropped :)

@Fleid
Contributor

Fleid commented Apr 6, 2023

I think I covered the fact that it was not a strategy for incremental models.

Could it be a materialization by itself? Materializations are about the type of object being created in the database. We have views, tables, CTEs (ephemerals), and soon materialized views. The only exception is incremental models, which do generate tables, but needed to be clearly delineated from tables as the handling mechanism is widely different. This is me post-rationalizing, because I wasn't there for that decision, but I'm glad it's not a switch on tables.

Is INSERT OVERWRITE deserving of its own materialization? It still results in a table, and the ergonomics of using it, from the dbt user's standpoint, are minimal: yes I want it, or no. That's why I'm seeing it as an option on table and incremental.

@markproctor1
Author

This is intuitive to me:
{{ config(materialized='table', insert_overwrite=true) }}

However, "insert overwrite" as a concept can be full load or incremental:

https://docs.getdbt.com/reference/resource-configs/glue-configs#the-insert_overwrite-strategy

If no partition_by is specified, then the insert_overwrite strategy will atomically replace all contents of the table, overriding all existing data with only the new records. The column schema of the table remains the same, however. This can be desirable in some limited circumstances, since it minimizes downtime while the table contents are overwritten. The operation is comparable to running truncate + insert on other databases. For atomic replacement of Delta-formatted tables, use the table materialization (which runs create or replace) instead.

So there's conceptual disagreement: outside of dbt, the term can mean either a full load (table) or incremental. In Snowflake, it means materialized='table'.
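
To make the proposal above concrete, here is a hedged sketch of how a table build could branch on such a flag. The macro name and the insert_overwrite config key are assumptions for illustration, not an agreed design:

```sql
{% macro build_table_sql(relation, sql) %}
  {#- Hypothetical helper: overwrite in place when the flag is set,
      otherwise fall back to the usual create-or-replace behavior. -#}
  {% if config.get('insert_overwrite', default=false) and not should_full_refresh() %}
    insert overwrite into {{ relation }}
    {{ sql }}
  {% else %}
    create or replace table {{ relation }} as (
      {{ sql }}
    )
  {% endif %}
{% endmacro %}
```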

@github-actions github-actions bot added the triage label Apr 7, 2023
@Fleid
Contributor

Fleid commented Apr 7, 2023

Since you already have some code there, would you be interested in contributing a PR?
I'll flag this issue as help_wanted if you do ;)

@adammarples

@Fleid yes my mistake, it would be a table materialization, not an incremental strategy

@markproctor1
Author

@Fleid I'm trying to decide when to fit this in, as sprint work or on the side. I am interested, will get back to you asap.

@jtcohen6
Contributor

This is probably how seeds should work on Snowflake, instead of truncate + insert. It would be a way of resolving #112 / #135.

This makes less sense to me for incremental models (since it always replaces the full contents, unlike the partition-based insert overwrite on other platforms) and for table models (since it wouldn't work if the columns of the table have changed between runs).

@markproctor1
Author

@jtcohen6 Your comment makes sense about seeds, but I don't know enough to say whether it should be in scope here.

Regarding table models: Do you expect materialized='table', overwrite=true to succeed if the columns have since changed? In our use case, if we want to INSERT OVERWRITE we don't expect column changes. If we expected column changes and wanted full load behavior, we'd aim for create/replace or something else.

What is the dbtonic way of resolving target-table changes that aren't handled by the specified materialization? If it were incremental and the target object were no longer compatible, it would just error, right?

@Fleid
Contributor

Fleid commented Apr 17, 2023

As discussed, I would not try to be clever here. If we limit the scope of overwrite to simple runs for tables, then INSERT OVERWRITE is an optimization that you should only use when the model is stable. If you get an error from Snowflake because your model changed, you need to run --full-refresh to resolve it. That sounds fair to me.

@markproctor1
Author

@Fleid Our teams are discussing this again. We're finding that the benefit of INSERT OVERWRITE in our custom materialization isn't worth the extra dev/maintenance time whenever table schemas change. So, as you say, it would be best to rely on --full-refresh.

@dataders I'm not sure this is incremental, in the sense of incremental materialization, as we've discussed it.

@dataders
Contributor

We're finding that the benefit of INSERT OVERWRITE in our custom materialization isn't worth the extra dev/maintenance time whenever table schemas change. So, as you say, it would be best to rely on --full-refresh.

@markproctor1 my read here is that you and your team no longer think that inclusion of Snowflake's INSERT OVERWRITE within dbt would be a useful addition?

@markproctor1
Author

@dataders No, I was unclear! We want it more than ever but think it should be handled as a table materialization instead of an incremental materialization.

@dataders
Contributor

@markproctor1 do you care to open a PR that modifies the existing table materialization? I think that'd make this discussion much more focused.

@Fleid Fleid removed their assignment Feb 14, 2024
@umarh-ccy

Hello, is this still in progress, or are there any plans for this? The key feature here is preserving the underlying Time Travel history, which gets lost when we drop/replace tables.

@james-mcp-cdi

Checking on the status of this. Did the INSERT OVERWRITE concept get moved to a different PR?
We want to retain time travel on our mart-level objects.

@markproctor1
Author

I don't know whether it moved to a different PR; I ran into some design decisions over the months.

Snowflake Insert Overwrite

  • It doesn't cleanly fit into either full load or incremental
    • The rows are a full load, but there shouldn't be any drop / replace table behavior
  • The term "insert overwrite" in Snowflake is not universal; it won't match other adapters' terminology in dbt
    • It doesn't match BigQuery, for example
  • When the column datatypes don't match between the dbt definition and the existing target, how to handle the change is complex
    (One reason we use INSERT OVERWRITE is so that columns and tables aren't dropped.)

We still use Matt Winkler's custom materialization for all our most important work, but I don't know how to design this.
@dataders @matt-winkler @james-mcp-cdi

@ian-andriot-cdi

ian-andriot-cdi commented Sep 11, 2024

Could this be an incremental strategy within the incremental materialization instead of an option on table? That might be a simple way to implement this. There's even schema-change logic already built into the incremental materialization.

Edit: sorry, I see this was already discussed somewhat, in terms of whether it needs to be table or incremental. What's the difference between a new incremental strategy versus a table option with full refresh set to false?
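
For reference, the schema-change handling mentioned above is the incremental on_schema_change config; a minimal example (model and column names are hypothetical):

```sql
{{ config(
    materialized='incremental',
    on_schema_change='sync_all_columns'
) }}

-- On an incremental run, columns added or removed in this query are
-- synced to the target table instead of causing a failure.
select order_id, customer_id, amount
from {{ ref('stg_orders') }}
```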

@sanjayan1995

Is this issue still open? Is there a workaround for INSERT OVERWRITE in dbt?

@JHYL0406

JHYL0406 commented Dec 4, 2024

I also support this change, whichever way it's implemented.
We have tables in Snowflake with tags on the columns for data masking policies.
We are currently using a custom materialisation to truncate the table and then refresh the data into it, so that the tags are never removed.
If we use the normal table materialisation, it loses the tags and the masking is removed until the tags are reapplied.
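
For concreteness, this is the kind of column-level metadata at stake; the table and policy names are hypothetical:

```sql
-- A masking policy attached to a column survives TRUNCATE and
-- INSERT OVERWRITE, but is lost when the table is dropped and recreated.
alter table analytics.customers
  modify column email
  set masking policy governance.mask_email;
```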

Development

No branches or pull requests

10 participants