Add example of server_side_parameters (#3622)
## What are you changing in this pull request and why?

I think it would be nice to show an example of the use of
`server_side_parameters`.

## Checklist
- [x] Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) and [About versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) so my content adheres to these guidelines.
- [x] Add a checklist item for anything that needs to happen before this
PR is merged, such as "needs technical review" or "change base branch."
dataders authored Aug 17, 2023
2 parents d2d3a81 + ac797f6 commit 31a5068
Showing 1 changed file with 12 additions and 3 deletions.
15 changes: 12 additions & 3 deletions website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -119,9 +119,7 @@ your_profile_name:
port: [port] # default 443
user: [user]
server_side_parameters:
# cluster configuration parameters, otherwise applied via `SET` statements
# for example:
# "spark.databricks.delta.schema.autoMerge.enabled": True
"spark.driver.memory": "4g"
```
</File>
@@ -148,6 +146,8 @@ your_profile_name:
auth: [e.g. KERBEROS]
kerberos_service_name: [e.g. hive]
use_ssl: [true|false] # value of hive.server2.use.SSL, default false
server_side_parameters:
"spark.driver.memory": "4g"
```

</File>
@@ -176,6 +176,8 @@ your_profile_name:
user: [user]
connect_timeout: 60 # default 10
connect_retries: 5 # default 0
server_side_parameters:
"spark.driver.memory": "4g"
```

</File>
@@ -201,6 +203,8 @@ your_profile_name:
method: session
schema: [database/schema name]
host: NA # not used, but required by `dbt-core`
server_side_parameters:
"spark.driver.memory": "4g"
```
</File>
@@ -229,6 +233,11 @@ connect_retries: 3

</VersionBlock>

<VersionBlock firstVersion="1.7">
### Server side configuration

Spark can be customized using [Application Properties](https://spark.apache.org/docs/latest/configuration.html). These properties can tune execution, for example, by allocating more memory to the driver process. They can also configure the Spark SQL runtime, for example, to [set a Spark catalog](https://spark.apache.org/docs/latest/configuration.html#spark-sql).
</VersionBlock>
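To make the section above concrete, here is a sketch of a `profiles.yml` target that passes Application Properties through `server_side_parameters`. The profile, schema, and catalog names are illustrative placeholders, not from this PR; only `spark.driver.memory` appears in the diff itself.

```yaml
# Hypothetical profiles.yml target; names are placeholders.
your_profile_name:
  target: dev
  outputs:
    dev:
      type: spark
      method: session
      schema: analytics   # placeholder schema name
      host: NA            # not used, but required by `dbt-core`
      server_side_parameters:
        # Spark Application Properties applied to the session,
        # e.g. driver memory or a Spark SQL catalog setting:
        "spark.driver.memory": "4g"
        # Illustrative catalog configuration (assumes an Iceberg catalog is desired):
        "spark.sql.catalog.my_catalog": "org.apache.iceberg.spark.SparkCatalog"
```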
## Caveats

### Usage with EMR
