Merge pull request #8945 from GoogleCloudPlatform/main_sync (#530)
Signed-off-by: Modular Magician <[email protected]>
modular-magician authored Sep 14, 2023
1 parent 01febbd commit fcbafa6
Showing 48 changed files with 1,182 additions and 12 deletions.
15 changes: 15 additions & 0 deletions biglake_table/backing_file.tf
@@ -0,0 +1,15 @@
# This file has some scaffolding to make sure that names are unique and that
# a region and zone are selected when you try to create your Terraform resources.

locals {
  name_suffix = random_pet.suffix.id
}

resource "random_pet" "suffix" {
  length = 2
}

provider "google" {
  region = "us-central1"
  zone   = "us-central1-c"
}
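
The `random_pet` resource above comes from the HashiCorp `random` provider, which `terraform init` downloads alongside `google`. A minimal, optional pin for these examples (source addresses are the standard registry ones; no version constraints are assumed here):

```hcl
# Optional scaffolding: declare the providers these examples use so
# `terraform init` resolves them explicitly.
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
    random = {
      source = "hashicorp/random"
    }
  }
}
```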
61 changes: 61 additions & 0 deletions biglake_table/main.tf
@@ -0,0 +1,61 @@
resource "google_biglake_catalog" "catalog" {
  name     = "my_catalog-${local.name_suffix}"
  location = "US"
}

resource "google_storage_bucket" "bucket" {
  name                        = "my_bucket-${local.name_suffix}"
  location                    = "US"
  force_destroy               = true
  uniform_bucket_level_access = true
}

resource "google_storage_bucket_object" "metadata_folder" {
  name    = "metadata/"
  content = " "
  bucket  = google_storage_bucket.bucket.name
}

resource "google_storage_bucket_object" "data_folder" {
  name    = "data/"
  content = " "
  bucket  = google_storage_bucket.bucket.name
}

resource "google_biglake_database" "database" {
  name    = "my_database-${local.name_suffix}"
  catalog = google_biglake_catalog.catalog.id
  type    = "HIVE"
  hive_options {
    location_uri = "gs://${google_storage_bucket.bucket.name}/${google_storage_bucket_object.metadata_folder.name}"
    parameters = {
      "owner" = "Alex"
    }
  }
}

resource "google_biglake_table" "table" {
  name     = "my_table-${local.name_suffix}"
  database = google_biglake_database.database.id
  type     = "HIVE"
  hive_options {
    table_type = "MANAGED_TABLE"
    storage_descriptor {
      location_uri  = "gs://${google_storage_bucket.bucket.name}/${google_storage_bucket_object.data_folder.name}"
      input_format  = "org.apache.hadoop.mapred.SequenceFileInputFormat"
      output_format = "org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat"
    }
    # Some example parameters.
    parameters = {
      "spark.sql.create.version"          = "3.1.3"
      "spark.sql.sources.schema.numParts" = "1"
      "transient_lastDdlTime"             = "1680894197"
      "spark.sql.partitionProvider"       = "catalog"
      "owner"                             = "John Doe"
      "spark.sql.sources.schema.part.0"   = "{\"type\":\"struct\",\"fields\":[{\"name\":\"id\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"name\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"age\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}}]}"
      "spark.sql.sources.provider"        = "iceberg"
      "provider"                          = "iceberg"
    }
  }
}
7 changes: 7 additions & 0 deletions biglake_table/motd
@@ -0,0 +1,7 @@
===

These examples use real resources that will be billed to the
Google Cloud Platform project you use - so make sure that you
run "terraform destroy" before quitting!

===
79 changes: 79 additions & 0 deletions biglake_table/tutorial.md
@@ -0,0 +1,79 @@
# Biglake Table - Terraform

## Setup

<walkthrough-author name="[email protected]" analyticsId="UA-125550242-1" tutorialName="biglake_table" repositoryUrl="https://github.com/terraform-google-modules/docs-examples"></walkthrough-author>

Welcome to Terraform in Google Cloud Shell! First, let us know which project you'd like to use with Terraform.

<walkthrough-project-billing-setup></walkthrough-project-billing-setup>

Terraform provisions real GCP resources, so anything you create in this session will be billed against this project.

## Terraforming!

Let's use {{project-id}} with Terraform! Click the Cloud Shell icon below to copy the command
to your shell, and then run it from the shell by pressing Enter/Return. Terraform will pick up
the project name from the environment variable.

```bash
export GOOGLE_CLOUD_PROJECT={{project-id}}
```

After that, let's get Terraform started. Run the following to pull in the providers.

```bash
terraform init
```

With the providers downloaded and a project set, you're ready to use Terraform. Go ahead!

```bash
terraform apply
```

Terraform will show you what it plans to do, and prompt you to accept. Type "yes" to accept the plan.

```bash
yes
```


## Post-Apply

### Editing your config

Now you've provisioned your resources in GCP! If you run a "plan", you should see no changes needed.

```bash
terraform plan
```

So let's make a change! Try editing a number, or appending a value to the name in the editor. Then,
run a 'plan' again.

```bash
terraform plan
```
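
For instance, an illustrative edit to this example's `main.tf`, appending `-v2` to the bucket name (only the `name` value differs from the original):

```hcl
# Illustrative edit: append "-v2" to the bucket name, then re-run
# `terraform plan` to see the proposed change.
resource "google_storage_bucket" "bucket" {
  name                        = "my_bucket-v2-${local.name_suffix}"
  location                    = "US"
  force_destroy               = true
  uniform_bucket_level_access = true
}
```

Because bucket names are immutable, the plan will show a destroy-and-recreate rather than an in-place update.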

Afterwards you can run an apply, which implicitly does a plan and shows you the intended changes
at the 'yes' prompt.

```bash
terraform apply
```

```bash
yes
```

## Cleanup

Run the following to remove the resources Terraform provisioned:

```bash
terraform destroy
```

```bash
yes
```
2 changes: 1 addition & 1 deletion cloudrunv2_job_secret/main.tf
@@ -44,7 +44,7 @@ data "google_project" "project" {
 resource "google_secret_manager_secret" "secret" {
   secret_id = "secret-${local.name_suffix}"
   replication {
-    automatic = true
+    auto {}
   }
 }
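
The one-line replication change above tracks the `google` provider's deprecation of the boolean `automatic = true` attribute in favor of an empty `auto {}` block (the attribute was removed ahead of the provider's 5.0 release, if the changelog is any guide). The updated resource in full:

```hcl
# Automatic replication is now expressed as an empty `auto {}` block;
# the old `automatic = true` attribute is deprecated.
resource "google_secret_manager_secret" "secret" {
  secret_id = "secret-${local.name_suffix}"
  replication {
    auto {}
  }
}
```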
2 changes: 1 addition & 1 deletion cloudrunv2_job_sql/main.tf
@@ -48,7 +48,7 @@ data "google_project" "project" {
 resource "google_secret_manager_secret" "secret" {
   secret_id = "secret-${local.name_suffix}"
   replication {
-    automatic = true
+    auto {}
   }
 }
2 changes: 1 addition & 1 deletion cloudrunv2_service_secret/main.tf
@@ -32,7 +32,7 @@ data "google_project" "project" {
 resource "google_secret_manager_secret" "secret" {
   secret_id = "secret-1-${local.name_suffix}"
   replication {
-    automatic = true
+    auto {}
   }
 }
2 changes: 1 addition & 1 deletion cloudrunv2_service_sql/main.tf
@@ -51,7 +51,7 @@ data "google_project" "project" {
 resource "google_secret_manager_secret" "secret" {
   secret_id = "secret-1-${local.name_suffix}"
   replication {
-    automatic = true
+    auto {}
  }
 }
3 changes: 3 additions & 0 deletions container_attached_cluster_full/main.tf
@@ -36,4 +36,7 @@ resource "google_container_attached_cluster" "primary" {
       enabled = true
     }
   }
+  binary_authorization {
+    evaluation_mode = "PROJECT_SINGLETON_POLICY_ENFORCE"
+  }
 }
15 changes: 15 additions & 0 deletions data_pipeline_pipeline/backing_file.tf
@@ -0,0 +1,15 @@
# This file has some scaffolding to make sure that names are unique and that
# a region and zone are selected when you try to create your Terraform resources.

locals {
  name_suffix = random_pet.suffix.id
}

resource "random_pet" "suffix" {
  length = 2
}

provider "google" {
  region = "us-central1"
  zone   = "us-central1-c"
}
48 changes: 48 additions & 0 deletions data_pipeline_pipeline/main.tf
@@ -0,0 +1,48 @@
resource "google_service_account" "service_account" {
  account_id   = "my-account-${local.name_suffix}"
  display_name = "Service Account"
}

resource "google_data_pipeline_pipeline" "primary" {
  name         = "my-pipeline-${local.name_suffix}"
  display_name = "my-pipeline"
  type         = "PIPELINE_TYPE_BATCH"
  state        = "STATE_ACTIVE"
  region       = "us-central1"

  workload {
    dataflow_launch_template_request {
      project_id = "my-project"
      gcs_path   = "gs://my-bucket/path"
      launch_parameters {
        job_name = "my-job"
        parameters = {
          "name" = "wrench"
        }
        environment {
          num_workers                = 5
          max_workers                = 5
          zone                       = "us-central1-a"
          service_account_email      = google_service_account.service_account.email
          network                    = "default"
          temp_location              = "gs://my-bucket/tmp_dir"
          bypass_temp_dir_validation = false
          machine_type               = "E2"
          additional_user_labels = {
            "context" = "test"
          }
          worker_region           = "us-central1"
          worker_zone             = "us-central1-a"
          enable_streaming_engine = false
        }
        update                 = false
        transform_name_mapping = { "name" = "wrench" }
      }
      location = "us-central1"
    }
  }
  schedule_info {
    schedule = "* */2 * * *"
  }
}
7 changes: 7 additions & 0 deletions data_pipeline_pipeline/motd
@@ -0,0 +1,7 @@
===

These examples use real resources that will be billed to the
Google Cloud Platform project you use - so make sure that you
run "terraform destroy" before quitting!

===
79 changes: 79 additions & 0 deletions data_pipeline_pipeline/tutorial.md
@@ -0,0 +1,79 @@
# Data Pipeline Pipeline - Terraform

## Setup

<walkthrough-author name="[email protected]" analyticsId="UA-125550242-1" tutorialName="data_pipeline_pipeline" repositoryUrl="https://github.com/terraform-google-modules/docs-examples"></walkthrough-author>

Welcome to Terraform in Google Cloud Shell! First, let us know which project you'd like to use with Terraform.

<walkthrough-project-billing-setup></walkthrough-project-billing-setup>

Terraform provisions real GCP resources, so anything you create in this session will be billed against this project.

## Terraforming!

Let's use {{project-id}} with Terraform! Click the Cloud Shell icon below to copy the command
to your shell, and then run it from the shell by pressing Enter/Return. Terraform will pick up
the project name from the environment variable.

```bash
export GOOGLE_CLOUD_PROJECT={{project-id}}
```

After that, let's get Terraform started. Run the following to pull in the providers.

```bash
terraform init
```

With the providers downloaded and a project set, you're ready to use Terraform. Go ahead!

```bash
terraform apply
```

Terraform will show you what it plans to do, and prompt you to accept. Type "yes" to accept the plan.

```bash
yes
```


## Post-Apply

### Editing your config

Now you've provisioned your resources in GCP! If you run a "plan", you should see no changes needed.

```bash
terraform plan
```

So let's make a change! Try editing a number, or appending a value to the name in the editor. Then,
run a 'plan' again.

```bash
terraform plan
```
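
For instance, changing the `random_pet` suffix length in this example's `backing_file.tf` (from 2, as shipped) forces new names throughout:

```hcl
# Illustrative edit: lengthen the generated name suffix from 2 words to 3.
# Changing `length` forces a new random_pet, which cascades into every
# resource name built from local.name_suffix.
resource "random_pet" "suffix" {
  length = 3
}
```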

Afterwards you can run an apply, which implicitly does a plan and shows you the intended changes
at the 'yes' prompt.

```bash
terraform apply
```

```bash
yes
```

## Cleanup

Run the following to remove the resources Terraform provisioned:

```bash
terraform destroy
```

```bash
yes
```
2 changes: 1 addition & 1 deletion dataform_repository/main.tf
@@ -8,7 +8,7 @@ resource "google_secret_manager_secret" "secret" {
   secret_id = "secret"

   replication {
-    automatic = true
+    auto {}
   }
 }
2 changes: 1 addition & 1 deletion dataform_repository_release_config/main.tf
@@ -8,7 +8,7 @@ resource "google_secret_manager_secret" "secret" {
   secret_id = "my_secret-${local.name_suffix}"

   replication {
-    automatic = true
+    auto {}
   }
 }
2 changes: 1 addition & 1 deletion dataform_repository_workflow_config/main.tf
@@ -8,7 +8,7 @@ resource "google_secret_manager_secret" "secret" {
   secret_id = "my_secret-${local.name_suffix}"

   replication {
-    automatic = true
+    auto {}
   }
 }