Review feedback on Getting Started Spark docs
nickdelnano committed Jan 10, 2025
1 parent 3c97949 commit ea5951c
Showing 1 changed file with 5 additions and 1 deletion.
6 changes: 5 additions & 1 deletion docs/docs/spark-getting-started.md
@@ -83,12 +83,16 @@ Iceberg also adds row-level SQL updates to Spark, [`MERGE INTO`](spark-writes.md

```sql
CREATE TABLE local.db.updates (id bigint, data string) USING iceberg;

INSERT INTO local.db.updates VALUES (1, 'x'), (2, 'y'), (4, 'z');

MERGE INTO local.db.table t
USING (SELECT * FROM local.db.updates) u ON t.id = u.id
WHEN MATCHED THEN UPDATE SET t.data = u.data
WHEN NOT MATCHED THEN INSERT *;

-- rows after merge: (1, 'x'), (2, 'y'), (3, 'c'), (4, 'z')
SELECT * FROM local.db.table;
```

Iceberg supports writing DataFrames using the new [v2 DataFrame write API](spark-writes.md#writing-with-dataframes):
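The code block this sentence introduces is folded out of the diff above. A minimal sketch of what a v2 write looks like, assuming a source table `local.db.source` whose schema matches `local.db.table` (the source table name is illustrative, not from the commit):

```scala
// Sketch only: append a DataFrame to an Iceberg table via the v2
// DataFrameWriterV2 API. `local.db.source` is a hypothetical source table
// with a compatible (id bigint, data string) schema.
val df = spark.table("local.db.source").select("id", "data")

// writeTo() returns a DataFrameWriterV2; append() adds the rows to the table.
df.writeTo("local.db.table").append()
```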
@@ -163,7 +167,7 @@ This type conversion table describes how Spark types are converted to the Iceberg
| map | map | |

!!! info
- The table is based on type conversions during table creation. Broader type conversions are applied on write:
+ Broader type conversions are applied on write:

* Iceberg numeric types (`integer`, `long`, `float`, `double`, `decimal`) support promotion during writes. For example, you can write the Spark types `short`, `byte`, `integer`, and `long` to the Iceberg type `long`; see the sketch after this list.
* You can write to the Iceberg `fixed` type using the Spark `binary` type; note that the length will be validated on write.
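A minimal sketch of the numeric promotion rule from the first bullet (the `local.db.promo` table name and the data are illustrative, not from the commit): the table column `id` is declared as Iceberg `long`, while the DataFrame supplies Spark `integer` values, which are promoted on write.

```scala
// Sketch only: write Spark `integer` values into an Iceberg `long` column.
import spark.implicits._

spark.sql("CREATE TABLE local.db.promo (id bigint, data string) USING iceberg")

// Seq[(Int, String)] gives `id` the Spark type integer.
val df = Seq((1, "a"), (2, "b")).toDF("id", "data")

// The integer column is promoted to long when written to the table.
df.writeTo("local.db.promo").append()
```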
